Dataset columns:
entry_id — string (length 33)
published — string (length 14)
title — string (length 15 to 199)
authors — sequence
primary_category — string (length 5 to 18)
categories — sequence
text — string (length 1 to 461k)
http://arxiv.org/abs/2307.01827v1
20230704170949
Deconstructing Data Reconstruction: Multiclass, Weight Decay and General Losses
[ "Gon Buzaglo", "Niv Haim", "Gilad Yehudai", "Gal Vardi", "Yakir Oz", "Yaniv Nikankin", "Michal Irani" ]
cs.LG
[ "cs.LG" ]
Memorization of training data is an active research area, yet our understanding of the inner workings of neural networks is still in its infancy. Recently, <cit.> proposed a scheme to reconstruct training samples from multilayer perceptron binary classifiers, effectively demonstrating that a large portion of the training samples is encoded in the parameters of such networks. In this work, we extend their findings in several directions, including reconstruction from multiclass and convolutional neural networks. We derive a more general reconstruction scheme which is applicable to a wider range of loss functions, such as regression losses. Moreover, we study the various factors that contribute to networks' susceptibility to such reconstruction schemes. Intriguingly, we observe that using weight decay during training increases reconstructability both in terms of quantity and quality. Additionally, we examine the influence of the number of neurons relative to the number of training samples on reconstructability. § INTRODUCTION Neural networks are known to memorize training data despite their ability to generalize well to unseen test data <cit.>. This phenomenon was observed both in supervised settings <cit.> and in generative models <cit.>. These works shed an interesting light on generalization, memorization and explainability of neural networks, while also posing a potential privacy risk. Current reconstruction schemes from trained neural networks are still very limited: they often rely on unrealistic assumptions or operate within restricted settings. For instance, <cit.> propose a reconstruction scheme based on the assumption of having complete knowledge of the training set, except for a single sample. <cit.> suggest a scheme which operates under the NTK regime <cit.> and assumes knowledge of the full set of parameters at initialization. Reconstruction schemes for unsupervised settings are specifically tailored to generative models and are not applicable to classifiers or other supervised tasks. Recently, <cit.> proposed a reconstruction scheme for feed-forward neural networks trained with the logistic or exponential loss on binary classification tasks. Their scheme requires only knowledge of the trained parameters, and relies on theoretical results about the implicit bias of neural networks towards solutions of the maximum-margin problem <cit.>. Namely, neural networks are biased toward KKT points of the max-margin problem (see thm:known KKT). By utilizing the set of conditions that KKT points satisfy, they devise a novel loss function that allows for reconstruction of actual training samples. They demonstrate reconstruction from models trained on common image datasets (CIFAR10 <cit.> and MNIST <cit.>). In this work, we expand the scope of neural networks for which we have evidence of successful sample memorization, by demonstrating sample reconstruction. Our contributions are as follows: * We extend the reconstruction scheme of <cit.> to a multiclass setting (<ref>). This extension utilizes the extension of the implicit bias result of <cit.> to multiclass training. We analyse the effects of the number of classes on reconstructability, and show that models become more susceptible to sample reconstruction as the number of classes increases. * We devise a reconstruction scheme that applies to general loss functions, assuming that the model is trained with weight decay. 
We demonstrate reconstruction from models trained on regression losses. * We investigate the effects of weight decay and show that for certain values, weight decay increases the vulnerability of models to sample reconstruction. Specifically, it allows us to reconstruct training samples from a convolutional network, while <cit.> only handled MLPs. * We analyse the intricate relation between the number of samples and the number of parameters in the trained model, and their effect on reconstrctability. We also demonstrate successful reconstruction from a model trained on 5,000 samples, surpassing previous results that focused on models trained on up to 1,000 samples. § RELATED WORKS Memorization and Samples Reconstruction. There is no consensus on the definition of the term “memorization” and different works study this from different perspectives. In ML theory, memorization usually refers to label (or, model's output) memorization <cit.>, namely, fitting the training set. Memorization in the input domain is harder to show, because in order to demonstrate its occurrence one has to reconstruct samples from the model. <cit.> demonstrated reconstruction of one training sample, assuming knowledge of all other training samples and <cit.> demonstrated reconstruction of a substantial portion of training samples from a neural network classifier. <cit.> extend their work to networks trained under the NTK regime <cit.> and explore the relationship to dataset distillation. Several works have also studied memorization and samples reconstruction in generative models like autoencoders <cit.>, large language models <cit.> and diffusion-based image generators <cit.>. Inverting Classifiers. Optimizing a model's input as to minimize a class score is the common approach for neural network visualization <cit.>. It usually involves using input regularization <cit.>, GAN prior <cit.> or knowledge of batch statistics <cit.>. <cit.> showed reconstruction of training samples using similar approach, however these methods are limited to classifiers trained with only a few samples per class. Reconstruction from a federated-learning setup <cit.> involve attacks that assume knowledge of training samples' gradients (see also <cit.> for a theoretical analysis). In this work we do not assume any knowledge on the training data and do not use any priors other than assuming bounded inputs. § PRELIMINARIES In this section, we provide an overview of the fundamental concepts and techniques required to understand the remainder of the paper, focusing on the fundamentals laid by <cit.> for reconstructing training samples from trained neural networks. Theoretical Framework. <cit.> builds on the theory of implicit bias of gradient descent. Neural networks are commonly trained using gradient methods, and when large enough, they are expected to fit the training data well. However, it is empirically known that these models converge to solutions that also generalize well to unseen data, despite the risk of overfitting. Several works pointed to this “implicit bias" of gradient methods as a possible explanation. <cit.> showed that linear classifiers trained with gradient descent on the logistic loss converge to the same solution as that of a hard-SVM, meaning that they maximize the margins. This result was later extended to non-linear and homogeneous neural networks by <cit.>: Let Φ(;·) be a homogeneous [A classifier Φ is homogeneous w.r.t to if there exists L∈ s.t. ∀ c∈,: ϕ(; c )=c^L ϕ(;)] ReLU neural network. 
Consider minimizing the logistic loss over a binary classification dataset {(_i,y_i)}_i=1^n using gradient flow. Assume that there exists time t_0 where the network classifies all the samples correctly. Then, gradient flow converges in direction to a first order stationary point (KKT point) of the following maximum-margin problem: min_1/2^2 s.t. ∀ i ∈ [n] y_i Φ(; _i) ≥ 1 . A KKT point of <ref> is characterized by the following set of conditions: ∀ j ∈ [p], _j - ∑_i=1^n λ_i ∇__j[ y_i Φ(; _i) ] =0  (stationarity) ∀ i ∈ [n],   y_i Φ(; _i) ≥ 1 (primal feasibility) ∀ i ∈ [n],  λ_i ≥ 0 (dual feasibility) ∀ i ∈ [n],  λ_i = 0  if  y_i Φ(; _i) ≠ 1 (complementary slackness) Reconstruction Algorithm. <cit.> demonstrated reconstructing samples from the training set of such classifiers by devising a reconstruction loss. Given a trained classifier Φ(;), they initialize a set of {( _i,y_i ) }_i=1^m and {λ_i }_i=1^m, and optimize _i,λ_i to minimize the following loss function: L = ‖ - ∑_i=1^m λ_i ∇__j[ y_i Φ(; _i) ] ‖_L_stationary + ∑_i=1^m max{ -λ, -λ_min}_L_λ + L_prior Where L_prior is simply bounding each pixel value at [-1,1] [Formally: L_prior=∑_i=1^m ∑_k=1^d max{max{_i,k-1,0 }, max{ -_i,k-1,0 }}.]. The number of training samples n is unknown. However, setting m>2n, where { y_i } are set in a balanced manner allows reconstructing samples with any label distribution. The _i's are initialized from the Gaussian distribution 𝒩(0, σ_x^2 𝕀), and λ_min,σ_x are hyperparameters. We note that the homogeneity condition from thm:known KKT is not necessarily a practical limitation of this reconstruction scheme, as already in <cit.> they show reconstructions from a non-homogeneous network. Analysing and Summarizing Results. The optimization in <ref> is executed k times (for different hyperparameters) and results in km outputs ({_i }_i=1^km) that we term candidates, as they are candidates to be reconstructed training samples. To quantify the success of the reconstruction process, each training sample is matched with its nearest-neighbour from the km candidates. The “quality” of reconstruction is then measured using SSIM <cit.> (see full details in <ref>). An important corollary of the set of KKT conditions <ref> is that the parameters of the trained model only depend on gradients of samples that are closest to the decision boundary, the so-called "margin-samples" (see end of section 3.2 in <cit.>). Therefore, a good visual summary for analysing reconstruction from a trained model is by plotting the reconstruction quality (SSIM) against the distance from the decision boundary (|Φ(_i)|). We also utilize such visualizations. Assessing the Quality of Reconstructed Samples. Determining whether a candidate is a correct match for some training sample is as hard as finding a good image similarity metric. No synthetic metric such as SSIM or L2 norm can perfectly align with human perception. Perceptual similarity metrics (e.g., LPIPS <cit.>) build on top of pre-trained classifiers trained on Imagenet <cit.>, and are not effective for the image resolution in this work (up to 32x32 pixels). We have observed heuristically that candidates with SSIM score higher than about 0.4 are indeed visually similar to their nearest neighbor training sample. Hence, in this work we say that a certain candidate is a "good reconstruction" if its SSIM score with its nearest neighbor is at least 0.4. Also see discussion in <ref>. 
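To make the objective above concrete, the following is a minimal PyTorch sketch of the reconstruction loss as we read it from the equations: L_stationary + L_λ + L_prior, optimized over the candidates x_i and the dual variables λ_i while the trained parameters θ stay fixed. The model interface, the exact form of the L_λ penalty (pushing each λ_i above λ_min, equivalent up to an additive constant to the max{-λ_i, -λ_min} term) and all names are illustrative assumptions rather than the authors' implementation.

```python
import torch

def reconstruction_loss(model, xs, ys, lambdas, lambda_min=0.1):
    # xs: (m, d) candidate inputs (requires_grad=True), ys: (m,) fixed labels in {-1, +1},
    # lambdas: (m,) dual variables (requires_grad=True); model is the trained binary classifier.
    params = list(model.parameters())
    outputs = model(xs).squeeze(-1)                        # Phi(theta; x_i)
    weighted = (lambdas * ys * outputs).sum()              # sum_i lambda_i * y_i * Phi(theta; x_i)
    grads = torch.autograd.grad(weighted, params, create_graph=True)

    theta = torch.cat([p.detach().reshape(-1) for p in params])
    grad_vec = torch.cat([g.reshape(-1) for g in grads])
    L_stationary = (theta - grad_vec).pow(2).sum()         # || theta - sum_i lambda_i grad[y_i Phi] ||^2

    L_lambda = torch.clamp(lambda_min - lambdas, min=0).sum()   # push each lambda_i above lambda_min
    L_prior = torch.clamp(xs.abs() - 1, min=0).sum()            # keep each pixel value in [-1, 1]
    return L_stationary + L_lambda + L_prior
```

In a full run one would instantiate m > 2n candidates drawn from N(0, σ_x^2 I), assign the y_i in a balanced manner, and minimize this loss with a first-order optimizer over (xs, lambdas), repeating over k hyperparameter settings as described above.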
§ RECONSTRUCTING DATA FROM MULTI-CLASS CLASSIFIERS §.§ Theory The extension of the implicit bias of homogeneous neural networks to the multi-class settings is discussed in <cit.> (Appendix G): Let S = {(_i,y_i)}_i=1^n ⊆^d × [C] be a multi-class classification training set where C∈ℕ is any number of classes, and [C]= {1,…,C}. Let Φ(;·):^d →^C be a homogeneous neural network parameterized by ∈^p. We denote the j-th output of Φ on an input as Φ_j(;) ∈. Consider minimizing the standard cross-entropy loss and assume that after some number of iterations the model correctly classifies all the training examples. Then, gradient flow will converge to a KKT point of the following maximum-margin problem: min_1/2^2 s.t. Φ_y_i(; _i) - Φ_j(; _i) ≥ 1    ∀ i ∈ [n], ∀ j ∈ [C] ∖{y_i}  . A KKT point of the above optimization problem is characterized by the following set of conditions: - ∑_i=1^n ∑_jy_i^cλ_i,j∇_ ( Φ_y_i(; _i) - Φ_j(; _i) ) =   ∀ i ∈ [n], ∀ j ∈ [C] ∖{y_i}: Φ_y_i(; _i) - Φ_j(; _i) ≥ 1 ∀ i ∈ [n], ∀ j ∈ [C] ∖{y_i}: λ_i,j≥ 0 ∀ i ∈ [n], ∀ j ∈ [C] ∖{y_i}: λ_i,j= 0  if Φ_y_i(; _i) - Φ_j(; _i) ≠ 1 A straightforward extension of a reconstruction loss for a multi-class model that converged to the conditions above would be to minimize the norm of the left-hand-side (LHS) of condition <ref> (namely, optimize over {_i}_i=1^m and {λ_i,j}_i ∈ [n], j∈ [C] ∖ y_i where m is a hyperparameter). However, this straightforward extension failed to successfully reconstruct samples. We therefore propose the following equivalent formulation. Note that from <ref>, most λ_i,j zero out: the distance of a sample _i to its nearest decision boundary, , is usually achieved for a single class j and therefore (from <ref>) in this case at most one λ_i,j will be non-zero. For some samples _i it is also possible that all λ_i,j will vanish. Following this observation, we define the following loss that only considers the distance from the decision boundary: L_multiclass(_1,...,_m,λ_1,...,λ_m)= - ∑_i=1^mλ_i ∇_[Φ_y_i(_i;)-max_jy_iΦ_j(_i;)]_2^2 <ref> implicitly includes <ref> into the summation in <ref>, thereby significantly reducing the number of summands and simplifying the overall optimization problem. While the straightforward extension failed to successfully reconstruct samples, solving <ref> enables reconstruction from multiclass models (see <ref> and results below). We also use the same L_λ and L_prior as in <ref>, and set {y_i} in a balanced manner (uniformly on all classes). While setting m= C· n allows reconstructing any label distribution, in our experiments we focus on models trained on balanced training sets, and use m=2n which works sufficiently well. An intuitive way to understand the extension of the binary reconstruction loss <ref> to multi-class reconstruction <ref> is that the only difference is the definition of the distance to nearest boundary, which is the term inside the square brackets in both equations. §.§ Results We compare between reconstruction from binary classifiers, as studied in <cit.>, and reconstruction from multi-class classifiers by using the novel loss function <ref>. We conduct the following experiment: we train an MLP classifier with architecture on samples from the CIFAR10 <cit.> dataset. The model is trained to minimize the cross-entropy loss with full-batch gradient descent, once with two classes (250 samples per class) and once for the full 10 classes (50 samples per class). Both models train on the same amount of samples (500). 
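Before turning to the results of this comparison, here is a hedged PyTorch sketch of the multiclass loss <ref> defined above; the only change relative to the binary objective is the margin term Φ_{y_i} − max_{j≠y_i} Φ_j inside the gradient. The masking trick and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def multiclass_margin(logits, ys):
    # Phi_{y_i}(x_i; theta) - max_{j != y_i} Phi_j(x_i; theta), one value per candidate
    correct = logits.gather(1, ys.view(-1, 1)).squeeze(1)
    mask = F.one_hot(ys, num_classes=logits.size(1)).bool()
    best_other = logits.masked_fill(mask, float('-inf')).max(dim=1).values
    return correct - best_other

def multiclass_reconstruction_loss(model, xs, ys, lambdas):
    params = list(model.parameters())
    margins = multiclass_margin(model(xs), ys)                 # (m,)
    weighted = (lambdas * margins).sum()
    grads = torch.autograd.grad(weighted, params, create_graph=True)
    theta = torch.cat([p.detach().reshape(-1) for p in params])
    grad_vec = torch.cat([g.reshape(-1) for g in grads])
    return (theta - grad_vec).pow(2).sum()                     # L_multiclass stationarity term
```

The L_λ and L_prior terms are added exactly as in the binary case.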
The test set accuracy of the models is 77%/32% respectively, which is far from random (50%/10% resp.). See implementation details in <ref>. To quantify the quality of our reconstructed samples, for each sample in the original training set we search for its nearest neighbour in the reconstructed images and measure the similarity using SSIM <cit.> (higher SSIM means better reconstruction). In <ref> we plot the quality of reconstruction (in terms of SSIM) against the distance of the sample from the decision boundary Φ_y_i(_i;)-max_jy_iΦ_j(_i;). As seen, a multi-class classifier yields much more samples that are vulnerable to being reconstructed. We examine the relation between the ability to reconstruct from a model and the number of classes on which it was trained. Comparing between two models trained on different number of classes is not immediately clear, since we want to isolate the effect of the number of classes from the size of the dataset (it was observed by <cit.> that the number of reconstructed samples decreases as the total size of the training set increases). We therefore train models on training sets with varying number of classes (C ∈{2,3,4,5,10}) and varying number of samples per class (1,5,10,50). The results are visualized in <ref>a. As seen, for models with same number of samples per class, the ability to reconstruct increases with the number of classes, even though the total size of the training set is larger. This further validates our hypothesis that the more classes, the more samples are vulnerable to reconstruction (also see <ref>). Another way to validate this hypothesis is by showing the dependency between the number of classes and the number of “good” reconstructions (SSIM>0.4) – shown in <ref>b. As can be seen, training on multiple classes yields more samples that are vulnerable to reconstruction. An intuitive explanation, is that multi-class classifiers have more “margin-samples". Since margin-samples are more vulnerable to reconstruction, this results in more samples being reconstructed from the model. § DATA RECONSTRUCTION WITH GENERAL LOSS FUNCTIONS We demonstrate that data reconstruction can be generalized to a larger family of loss functions. <cit.> and <ref> only considered a reconstruction scheme based on the implicit bias of gradient methods trained with cross-entropy loss. For other loss functions, such as the square loss, a precise characterization of the implicit bias in nonlinear networks does not exist <cit.>. Hence, we establish a reconstruction scheme for networks trained with explicit regularization, i.e., with weight decay. We show that as long as the training involves a weight-decay term, we can derive a reconstruction objective that is very similar to the previous objectives in <ref>. §.§ Theory Let ℓ(Φ(_i;), y_i) be a loss function that gets as input the predicted output of the model Φ (parametrized by ) on an input sample _i, and its corresponding label y_i. The total regularized loss: ℒ = ∑_i=1^n ℓ(Φ(_i;), y_i) + λ_WD1/2‖‖^2     . Assuming convergence (∇_ℒ = 0), the parameters should satisfy the following : - ∑_i=1^n ℓ'_i ∇_Φ(_i;) = 0 where ℓ'_i=1/λ_WD∂ℓ(Φ(_i;),y_i)/∂Φ(_i;). This form (which is similar to the condition in <ref>), allows us to define a generalized reconstruction loss for models trained with a weight-decay term: L_rec(_1,...,_m,λ_1,...,λ_m) = ‖ - ∑_i=1^n λ_i ∇_Φ(_i;) ‖_2^2 As before, we also include the same L_prior as in <ref>. 
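The stationarity condition behind L_rec can be checked numerically: at a critical point of the regularized loss, θ = −(1/λ_WD) Σ_i ℓ'_i ∇_θ Φ(x_i;θ), so the corresponding residual should shrink as training converges. The sketch below is an assumption-laden sanity check we wrote for illustration (tiny network, tanh activation for smoothness, MSE loss, full-batch gradient descent), not the paper's experimental setup.

```python
import torch

torch.manual_seed(0)
X, y = torch.randn(8, 5), torch.randn(8)          # tiny toy regression set
model = torch.nn.Sequential(torch.nn.Linear(5, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
lam_wd = 1e-2
opt = torch.optim.SGD(model.parameters(), lr=5e-3, weight_decay=lam_wd)

def stationarity_residual():
    # || theta + (1/lam_wd) * sum_i l'_i * grad_theta Phi(x_i; theta) ||^2  with  l = (Phi - y)^2
    params = list(model.parameters())
    out = model(X).squeeze(-1)
    lprime = (2 * (out - y) / lam_wd).detach()     # l'_i divided by lambda_WD
    grads = torch.autograd.grad((lprime * out).sum(), params)
    return float(sum(((p.detach() + g) ** 2).sum() for p, g in zip(params, grads)))

for step in range(1, 40001):                       # full-batch gradient descent with weight decay
    opt.zero_grad()
    ((model(X).squeeze(-1) - y) ** 2).sum().backward()
    opt.step()
    if step % 10000 == 0:
        print(step, stationarity_residual())       # shrinks as we approach a critical point
```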
It is straightforward to see that L_rec is a generalization of the reconstruction loss in <ref> (y_i can be incorporated into the λ_i term). §.§ Results and Analysis We validate the above theoretical analysis by demonstrating reconstruction from models trained on losses other than those shown in <ref> and <cit.>. We use the same dataset as in the classification tasks – images from the CIFAR10 dataset with binary labels in {-1,1}. The only difference is replacing the binary cross-entropy classification loss with regression losses (e.g., MSE). In classification tasks, we analyzed the results by plotting the reconstruction quality (SSIM) against the sample's distance from the decision boundary (see <ref>). This showed that reconstruction is only feasible for margin-samples. However, in regression tasks, margin and decision boundary have no specific meaning. We therefore propose an alternative analysis: note that a smaller distance from the margin results in a higher loss for binary cross-entropy. Intuitively, margin-samples are the most challenging to classify (as reflected by the loss function). Therefore, for regression tasks, we analyze the results by plotting the reconstruction quality against the per-training-sample loss. In <ref> we show results for reconstructions from models trained with MSE, L_2.5 loss (ℓ=|Φ(;) - y|^p for p=2,2.5 respectively) and Huber loss <cit.>. The reconstruction scheme in <ref> is the same for all cases, and is invariant to the loss used during training. <ref> highlights two important observations: first, the reconstruction scheme in <ref> succeeds in reconstructing large portions of the training set from models trained with regression losses, as seen from the high quality (SSIM) of the samples. Second, by plotting quality against the loss, one sees that “challenging” samples (with high loss) are easier to reconstruct. Note that this analysis also works for classification losses, namely BCE with or without weight-decay (see <ref>). § ON THE DIFFERENT FACTORS THAT AFFECT RECONSTRUCTABILITY Our goal is to gain a deeper understanding of the factors behind models' vulnerability to reconstruction schemes. In this section, we present several analyses that shed light on important factors. §.§ The Role of Weight Decay in Data Reconstruction <cit.> assumed MLP models whose first fully-connected layer was initialized with small (non-standard) weights. Models with standard initialization (e.g., <cit.>) did not yield reconstructed samples. In contrast, the MLPs reconstructed in <cit.> were initialized with an extremely small variance in the first layer. Seeking to better understand this limitation, we observed that incorporating weight-decay during training not only enables sample reconstruction from models with standard initialization, but often increases the reconstructability of training samples. [Figure: Using weight-decay during training increases vulnerability to sample reconstruction; panels (a), (b), (c).] In <ref>a-b we show the number of good reconstructions for different values of the weight decay (λ_WD), for MLP classifiers trained on C=2,10 classes with 50 samples per class (<ref>a, b resp.). We add two baselines trained without weight-decay: a model trained with standard initialization (black) and a model with a small-initialized first layer (red). 
See how for some values of weight-decay, the reconstructability is significantly higher than what was observed for models with non-standard initialization. By examining the training samples' distance to the boundary, one observes that using weight-decay results in more margin-samples which are empirically more vulnerable to reconstruction (see full details in <ref>). Reconstruction from Convolutional Neural Networks (CNNs). CNNs adhere to the assumptions of <ref>, yet <cit.> failed to apply their reconstruction scheme <ref> to CNNs. We observe that incorporating weight-decay during training (using standard initialization) enables samples reconstruction. In <ref> we show an example for reconstruction from a binary classifier whose first layer is a Conv-layer with kernel size 3 and 32 output channels, followed by two fully connected layers (Conv(k=3,C_out=32)-1000-1). The weight-decay term is λ_WD=0.001 (the training setup is similar to that of MLP). In <ref>c we show the reconstructability for the same CNN model trained with other values of λ_WD. Note how the weight-decay term plays similar role in the CNN as in the MLP case. See full details in <ref>. §.§ The Effect of the Number of Parameters and Samples on Reconstructability <cit.> observed that models trained on fewer samples are more susceptible to reconstruction in terms of both quantity and quality. In this section, we delve deeper into this phenomenon, focusing on the intricate relationship between the number of parameters in the trained model and the number of training samples. We conduct the following experiment: We train 3-layer MLPs with architecture D-W-W-1 on N training samples from binary CIFAR10 (animals vs. vehicles), where W∈{5, 10, 50, 100, 500, 1000} and N∈{10, 50, 100, 300, 500}. We conduct the experiment for both classification and regression losses, with BCE and MSE loss respectively. Generalization error is 23%-31% for BCE (classification) and 0.69-0.88 for MSE (regression), compared to 50%/0.97 for similar models with random weights. We reconstruct each model using <ref> and record the number of good reconstructions. The results are shown in <ref>. Note that as W/N increases, our reconstruction scheme is capable of reconstructing more samples, and vice versa. For example, consider the case when N=10. To successfully reconstruct the entire training set, it is sufficient for W to be greater than 50/10 (for MSE/BCE). However, when N=500, even larger models (with larger W) can only reconstruct up to 8% of the samples. Lastly, we reconstruct from a model with W=10,000, trained on N=5,000 samples (5 times larger than any previous model). While there is some degradation in the quality of the reconstructions compared to models trained on fewer samples, it is evident that our scheme can still reconstruct some of the training samples. For full results see <ref>. § CONCLUSIONS AND FUTURE WORK We present improved reconstruction methods and conduct a comprehensive analysis of the reconstruction method proposed by <cit.>. Particularly, we extend their reconstruction scheme to a multi-class setting and devise a novel reconstruction scheme for general loss functions, allowing reconstruction in a regression setting (e.g., MSE loss). We examine various factors influencing reconstructability. We shed light on the role of weight decay in samples memorization, allowing for sample reconstruction from convolutional neural networks. 
Lastly, we examine the intricate relationship between the number of parameters, the number of samples, and the vulnerability of the model to reconstruction schemes. We acknowledge that our reconstruction method raises concerns regarding privacy. We consider it crucial to present such methodologies as they encourage researchers to study the potential hazards associated with training neural networks. Additionally, it allows for the development of protective measures aimed at preventing the leakage of sensitive information. All of the above extend our knowledge and understanding of how memorization works in certain neural networks. This opens up several possibilities for future research including extending our reconstruction scheme to practical models (e.g., ResNets), exploring reconstruction from models trained on larger datasets or different data types (e.g., text, time-series, tabular data), analyzing the impact of optimization methods and architectural choices on reconstructability, and developing privacy schemes to protect vulnerable samples from reconstruction attacks. abbrvnat § ANALYZING AND VISUALIZING THE RESULTS OF THE RECONSTRUCTION OPTIMIZATION The analysis of the results of the various reconstruction losses <ref>, involve verifying and checking which of the training samples were reconstructed. In this section we provide further details on our method for analyzing the reconstruction results, and how we measure the quality of our reconstructions. §.§ Analyzing the Results of the Reconstruction Optimization In order to match between samples from the training set and the outputs of the reconstruction algorithm (the so-called "candidates") we follow the same protocol of <cit.>. Note that before training our models, we subtract the mean image from the given training set. Therefore the training samples are d-dimensional objects where each entry is in [-1,1]. First, for each training sample we compute the distance to all the candidates using a normalized L_2 score: d(,) = ‖-μ_/σ_ - -μ_/σ_‖_2^2 Where ,∈^d are a training sample or an output candidate from the reconstruction algorithm, μ_ = 1/d∑_i=1^d (i) is the mean of and σ_ = √(1/d-1∑_i=1^d ((i)-μ_)^2) is the standard deviation of (and the same goes for ,μ_,σ_). Second, for each training sample, we take C candidates with the smallest distance according to  <ref>. C is determined by finding the first candidate whose distance is larger than B times the distance to the closest nearest neighbour (where B is a hyperparameter). Namely, for a training sample , the nearest neighbour is _1 with a distance d(,_1), then C is determined by finding a candidate _C+1 whose distance is d(,_C+1)>B · d(,_1), and for all j≤ C, d(,_j) ≤ B · d(,_1). B was chosen heuristically to be B=1.1 for MLPs, and B=1.5 for convolutional models. The C candidates are then summed to create the reconstructed sample = 1/C∑_j=1^C _j. In general, we can also take only C=1 candidate, namely just one nearest neighbour per training sample, but choosing more candidates improve the visual quality of the reconstructed samples. Third, the reconstructed sample is scaled to an image in [0,1] by adding the training set mean and linearly "stretching" the minimal and maximal values of the result to [0,1]. Finally, we compute the SSIM between the training sample and the reconstructed sample to measure the quality of reconstruction. §.§ Deciding whether a Reconstruction is “Good” Here we justify our selection for SSIM=0.4 as the threshold for what we consider as a “good" reconstruction. 
In general, the problem of deciding whether a reconstruction is the correct match to a given sample, or whether a reconstruction is a “good" reconstruction is equivalent to the problem of comparing between images. No “synthetic" metric (like SSIM, l2 etc.) will be aligned with human perception. A common metric for this purpose is LPIPS <cit.> that uses a classifier trained on Imagenet <cit.>, but since CIFAR images are much smaller than Imagenet images (32× 32 vs. 224× 224) it is not clear that this metric will be better than SSIM. As a simple rule of thumb, we use SSIM>0.4 for deciding that a given reconstruction is “good". To justify, we plot the best reconstructions (in terms of SSIM) in <ref>. Note that almost all samples with SSIM>0.4 are also visually similar (for a human). Also note that some of the samples with SSIM<0.4 are visually similar, so in this sense we are “missing" some good reconstructions. In general, determining whether a candidate output of a reconstruction algorithm is a match to a training sample is an open question and a problem in all other works for data reconstruction, see for example <cit.> that derived a heuristic for reconstructed samples from a generative model. This cannot be dealt in the scope of this paper, and is an interesting future direction for our work. § IMPLEMENTATION DETAILS Further Training Details. The models that were reconstructed in the main part of the paper were trained with learning rates of 0.01 for binary classifiers (both MLP and convolutional), and 0.5 in the case of multi-class classifier (<ref>). The models were trained with full batch gradient descent for 10^6 epochs, to guarantee convergence to a KKT point of <ref> or a local minima of <ref>. When small initialization of the first layer is used (e.g., in <ref>), the weights are initialized with a scale of 10^-4. We note that <cit.> observed that models trained with SGD can also be reconstructed. The experiment in <ref> (large models with many samples) also uses SGD and results with similar conclusion, that some models trained with SGD can be reconstructed. In general, exploring reconstruction from models trained with SGD is an interesting direction for future works. Runtime and Hardware. Runtime of a single reconstruction run (specific choice of hyperparameters) from a model D-1000-1000-1 takes about 20 minutes on a GPU Tesla V-100 32GB or NVIDIA Ampere Tesla A40 48GB. Hyperparameters of the Reconstruction Algorithm. Note that the reconstruction loss contains the derivative of a model with ReLU layers, which is flat and not-continuous. Thus, taking the derivative of the reconstruction loss results in a zero function. To address this issue we follow a solution presented in  <cit.>. Namely, given a trained model, we replace in the backward phase of backpropogation the ReLU function with the derivative of a softplus function (or SmoothReLU) f(x) = αlog(1 + e^-x), where α is a hyperparameter of the reconstruction scheme. The functionality of the model itself does not change, as in the foraward phase the function remains a ReLU. Only the backward function is replaced with a smoother version of the derivative of ReLU which is f'(x)=ασ(x) = α/1+e^-x (here σ is the Sigmoid function). To find good reconstructions we run the algorithm multiple times (typically 100 times) with random search over the hyperparameters (using the Weights & Biases framework <cit.>). 
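The backward-smoothing trick described above can be implemented as a custom autograd function: the forward pass remains an exact ReLU, so the trained model's outputs are unchanged, while the backward pass uses the smoothed surrogate derivative f'(x) = ασ(x) stated in the text. The following is a minimal PyTorch sketch (how the surrogate module is swapped into a given trained network is omitted); the hyperparameter ranges used in the random search follow directly below.

```python
import torch

class ReLUWithSmoothBackward(torch.autograd.Function):
    """Forward: exact ReLU (model behaviour unchanged).
    Backward: surrogate derivative alpha * sigmoid(x), as described in the text."""

    @staticmethod
    def forward(ctx, x, alpha):
        ctx.save_for_backward(x)
        ctx.alpha = alpha
        return torch.relu(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * ctx.alpha * torch.sigmoid(x), None   # no gradient for alpha

class SmoothBackwardReLU(torch.nn.Module):
    def __init__(self, alpha=100.0):       # alpha is a reconstruction hyperparameter (range ~[10, 500])
        super().__init__()
        self.alpha = alpha

    def forward(self, x):
        return ReLUWithSmoothBackward.apply(x, self.alpha)
```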
The exact ranges used in this random search are: * Learning rate: log-uniform in [10^-5,1] * σ_x: log-uniform in [10^-6,0.1] * λ_min: uniform in [0.01,0.5] * α: uniform in [10,500] § MULTICLASS RECONSTRUCTION - MORE RESULTS §.§ Experiments with a Varying Number of Classes and a Fixed Training Set Size To complete the experiment shown in <ref>, we also perform experiments on models trained on a varying number of classes (C ∈{2,3,4,5,10}) with a fixed training set size of 500 samples (distributed equally between classes), see <ref>. It can be seen that as the number of classes increases, so does the number of good reconstructions; for 10 classes there are more than 6 times as many good reconstructions as for 2 classes. Also, the quality of the reconstructions improves as the number of classes increases, as reflected by an overall higher SSIM score. We also note that the number of good reconstructions in <ref> is very similar to the number of good reconstructions in <ref> for 50 samples per class. We hypothesize that although the number of training samples increases, the number of “support vectors” (i.e. samples on the margin, which can be reconstructed) required for successfully interpolating the entire dataset does not change by much. §.§ Results on the SVHN Dataset As shown in <ref>, our multiclass reconstruction scheme is not limited to the CIFAR10 dataset, but can be extended to other datasets as well, specifically the SVHN dataset <cit.>. The model whose reconstructions are shown in <ref> was trained on 50 samples per class (10 classes in total) and the rest of the training hyperparameters are the same as those of its CIFAR10 equivalent (50 samples per class with 10 classes). § GENERAL LOSSES - MORE RESULTS Following the discussion in <ref> and <ref>, Figures <ref>, <ref>, <ref> present visualizations of training samples and their reconstructions from models trained with L_2, L_2.5 and Huber loss, respectively. § FURTHER ANALYSIS OF WEIGHT DECAY By looking at the exact distribution of reconstruction quality against the distance from the margin, we observe that weight-decay (for some values) results in more training samples lying on the margin of the trained classifier, and thus being more vulnerable to our reconstruction scheme. This is shown in <ref>, where we give the scatter plots for all the experiments from <ref>(a). We also provide the train and test errors for each model. The test error does not change significantly. An interesting observation, however, is that reconstruction is possible even for models with non-zero training error, i.e. models that do not interpolate the data, for which the assumptions of <cit.> do not hold. [Figure: Scatter plots of the 12 experiments from <ref>(a). Each plot shows a model trained with a different value of weight decay on 2 classes with 50 samples per class. Certain values of weight decay make the model more susceptible to our reconstruction scheme.] § CONVOLUTIONAL NEURAL NETWORKS - ABLATIONS AND OBSERVATIONS In this section we provide more results and visualizations for the experiments on convolutional neural networks in <ref>. In <ref> we show ablations over the choice of kernel size (k) and number of output channels (C_out) for models with architecture Conv(kernel-size=k,output-channels=C_out)-1000-1. All models were trained on 500 images (250 images per class) from the CIFAR10 dataset, with weight-decay term λ_WD=0.001. 
As can be seen, for such convolutional models we are able to reconstruct samples for a wide range of choices. Note that the full summary of reconstruction quality versus the distance from the decision boundary for the model whose reconstucted samples are shown in <ref>, is shown in <ref> for kernel-size 3 (first row) and number of output channels 32 (third column). Further analysis of <ref>. As expected for models with less parameters, the reconstructability decreases as the number of output channels decrease. An interesting phenomenon is observed for varying the kernel size: for a fixed number of output channel, as the kernel size increases, the susceptibility of the model to our reconstruction scheme decreases. However, as the kernel size approaches 32 (the full resolution of the input image), the reconstructability increases once again. On the one hand it is expected, since for kernel-size=32 the model is essentially an MLP, albeit with smaller hidden dimension than usual (at most 64 here, whereas the typical model used in the paper had 1000). On the other hand, it is not clear why for some intermediate values of kernel size (in between 3 and 32) the reconstructability decreases dramatically (for many models there are no reconstructed samples at all). This observation is an interesting research direction for future works. Visualizing Kernels. In <cit.>, it was shown that some of the training samples can be found in the first layer of the trained MLPs, by reshaping and visualizing the weights of the first fully-connected layer. As opposed to MLPs, in the case of a model whose first layer is a convolution layer, this is not possible. For completeness, in <ref> we visualize all 32 kernels of the Conv layer. Obviously, full images of shape 3x32x32 cannot be found in kernels of shape 3x3x3, which makes reconstruction from such models (with convolution first layer) even more interesting. § RECONSTRUCTION FROM A LARGER NUMBER OF SAMPLES One of the major limitations of <cit.> is that they reconstruct from models that trained on a relatively small number of samples. Specifically, in their largest experiment, a model is trained with only 1,000 samples. Here we take a step further, and apply our reconstruction scheme for a model trained on 5,000 data samples. To this end, we trained a 3-layer MLP, where the number of neurons in each hidden layer is 10,000. Note that the size of the hidden layer is 10 times larger than in any other model we used. Increasing the number of neurons seems to be one of the major reasons for which we are able to reconstruct from such large datasets, although we believe it could be done with smaller models, which we leave for future research. We used the CIFAR100 dataset, with 50 samples in each class, for a total of 5000 samples. In <ref>a we give the best reconstructions of the model. Note that although there is a degradation in the quality of the reconstruction w.r.t a model trained on less samples, it is still clear that our scheme can reconstruct some of the training samples to some extent. In <ref>b we show a scatter plot of the SSIM score w.r.t the distance from the boundary, similar to <ref>a. Although most of the samples are on or close to the margin, only a few dozens achieve an SSIM>0.4. This may indicate that there is a potential for much more images to reconstruct, and possibly with better quality.
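For reference, here is a hedged PyTorch sketch of the convolutional architecture used in the ablations of this appendix, Conv(kernel-size=k, output-channels=C_out)-1000-1, trained with weight decay λ_WD=0.001 and the learning rate 0.01 mentioned in the implementation details; padding, bias and flattening details are assumptions not specified in the text.

```python
import torch
import torch.nn as nn

class ConvBinaryClassifier(nn.Module):
    # Conv(kernel_size=k, out_channels=c_out) followed by two fully connected layers (1000 -> 1)
    def __init__(self, k=3, c_out=32, in_size=32):
        super().__init__()
        self.conv = nn.Conv2d(3, c_out, kernel_size=k)       # no padding assumed
        side = in_size - k + 1                                # e.g. 32x32 input, k=3 -> 30x30 maps
        self.fc1 = nn.Linear(c_out * side * side, 1000)
        self.fc2 = nn.Linear(1000, 1)

    def forward(self, x):
        h = torch.relu(self.conv(x))
        h = torch.relu(self.fc1(h.flatten(1)))
        return self.fc2(h)

model = ConvBinaryClassifier(k=3, c_out=32)
# SGD's weight_decay adds lambda_WD * theta to the gradient, i.e. the (lambda_WD/2) * ||theta||^2 term
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-3)
```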
http://arxiv.org/abs/2307.03037v1
20230706150333
Invariants in divided power algebras
[ "Rudolf Tange" ]
math.RT
[ "math.RT" ]
Let k be an algebraically closed field of characteristic p>0, let G=_n be the general linear group over k, let =_n be its Lie algebra and let D_s be the subalgebra of the divided power algebra of ^* spanned by the divided power monomials with exponents <p^s. We give a basis for the G-invariants in D_s up to degree n and show that these are also the -invariants. We define a certain natural restriction property and show that it doesn't hold when s>1. If s=1, then D_1 is isomorphic to the truncated coordinate ring of of dimension p^(); we conjecture that the restriction property holds in this case and show that this leads to a conjectural spanning set for the invariants (in all degrees). We also give similar results for the divided power algebras of several matrices and of vectors and covectors, and show that in the second case the restriction property doesn't hold. The connection with representation theory is that, by a result of Friedlander and Parshall, there exists a G-equivariant isomorphism of filtered coalgebras from the hyper algebra of G to D which maps the restricted enveloping algebra of to D_1. Rudolf Tange, School of Mathematics, University of Leeds, LS2 9JT, Leeds, UK. 2020 Mathematics Subject Classification: 13A50, 16W22. § INTRODUCTION Let k be an algebraically closed field of characteristic p>0, let G=_n be the general linear group and let be its Lie algebra. For the representation theory of G and it is of interest to understand the centres U^[p]()^ and (G)^G of the restricted enveloping algebra U^[p]() and of the hyper algebra or distribution algebra (G). In this paper we study their commutative analogues: the truncated symmetric algebra S()=S()/(x^p | x∈) and the divided power algebra D(). They are isomorphic to their noncommutative analogues as G-modules under the conjugation action. The connection with the representations of G and is described in more detail in Remark <ref>.3 (the hyper algebra of G and the restricted enveloping algebra of ) and Remarks <ref>.4 and 5 (the Schur algebra). To state our results it is more convenient to work with A_1()= S(^*) and D(^*). This is harmless, since ≅^* as G-modules. Initially we were interested in describing the invariants for the group and the Lie algebra in A_1() and its higher analogues A_s()=S(^*)/(f^p^s | f∈^*). It is easy to see that A_1()^G is bigger than the image of S(^*)^G (or S(^*)^): the top degree element (unique up to a scalar multiple) of A_1()^G is not in the image of S(^*)^G. It turned out to be more convenient to work with the dual versions D_s(^*) of the A_s(), inside the divided power algebra D(^*), where we have the divided power maps. Up to degree n it is easy to give a basis for the invariants in D(^*). In fact we can give three different bases, see Section <ref>. The task is then to describe the invariants of the subalgebras D_s(^*) in terms of these bases. For one of the three aforementioned bases of D(^*) we obtain a basis of D_s(^*) by forming equivalence class sums for a certain equivalence relation on the basis; for the other two we obtain a basis of D_s(^*) by taking a suitable subset of the basis, see Theorem <ref>. We also consider the so-called “restriction property” for several families of algebras, see Section <ref>. Intuitively, when the restriction property holds one may expect a universal description of the invariants, independent of the rank n. When it doesn't hold, the description of the invariants will depend on the rank. 
In all the classical cases (invariants in the coordinate rings of vectors and covectors and of several matrices) the restriction property holds, at least for the group. For the algebras D(^*) and D_s(^*) that we study, the restriction property almost never holds. We can only conjecture it for D_1(^*)=A_1(), see Conjecture <ref>. The paper is organised as follows. In Section <ref> we discuss some, mostly well-known, results about divided power algebras, truncated coordinate rings, polarisation and Z-forms, and multilinear invariants of several matrices that we will need later on. Section <ref> contains our main result which describes the G-invariants in the algebra D_s(^*): Theorem <ref>. To prove it, it is more convenient to first work with D_s(). Theorem <ref> is our main result for this algebra. Infinitesimal invariants are discussed in Proposition <ref>. In Remark <ref>.3 and 4 we show that the restriction property doesn't hold for the algebras A_s() when s≥2 and also not for the algebras D_s(^*) when s≥2. In Section <ref> we give dimensions for the invariants in the graded pieces of some of the A_s(). In Section <ref> we study the divided power algebra and its “truncated" subalgebras for several matrices. Theorem <ref> describes the G-invariants and Proposition <ref> describes the infinitesimal invariants. To state and prove these results we first need to state some, mostly well-known, results about conjugacy classes in the symmetric group for a Young subgroup, partial polarisation, and invariants in the full divided power algebra. In Section <ref> we study the divided power algebra and its truncated subalgebras for vectors and covectors. Proposition <ref>(i) describes the G-invariants and Proposition <ref>(ii) describes the infinitesimal invariants. As preliminaries we first state some, mostly well-known, results about partial polarisation, and orbits in the symmetric group for the multiplication action of a product of two Young subgroups. In Remark <ref>.3 we show that the restriction property doesn't hold in this case. § PRELIMINARIES Throughout this paper k is an algebraically closed field of characteristic p>0 and s is an integer ≥1. §.§ The divided power algebra and certain subalgebras Let V=k V_ Z be a vector space over k with a Z-form with basis (y_1,…,y_m). We will denote 1 y_i∈ V just by y_i. Inside the symmetric algebra S(V_ Q) of V_ Q= Q_ ZV_ Z we can form the divided power monomials ∏_i=1^my_i^(t_i) where t_i≥0 and y_i^(t)=1/t!x^t. They are linearly independent over Q and their Z-span is a Z-subalgebra D(V_ Z) of S(V_ Q). Now we put D(V)=k_ QD(V_ Z). The algebra S(V_ Q) has the divided power map γ_i=(x↦ x^(i)):I_ Q→ S(V_ Q) where I_ Q consists of the polynomials without constant term. The γ_i, i≥1, preserve I_ Z=D(V_ Z)∩ I_ Q and therefore induce divided power maps γ_i:I_ Z→ D(V_ Z). The above γ_i preserve pI_ Z for i≥1 and, reducing mod p and extending from F_p= Z/p Z to k, we obtain divided power maps γ_i=(x↦ x^(i)):I→ D(V), where I=k I_ Z, which satisfy: * γ_0(x)=1, γ_1(x)=x and γ_i(x)∈ I for i≥ 1 and for all x∈ I, * γ_i(x+y)=∑_j=0^iγ_j(x)γ_i-j(y) for i≥ 0 and for x,y∈ I, * γ_i(xy)=x^iγ(y) for i≥ 0, x∈ D(V) and y∈ I, * γ_i(x)γ_j(x) = i+jiγ_i+j(x) for i,j≥0 and x∈ I, and * γ_i(γ_j(x)) = (ij)!/(i!)^j j!γ_ij(x) for i,j≥0 and x∈ I. The algebra D(V) is commutative and graded and has a (V)-action which is “defined over Z", so it is clear that the γ_i are (V)-equivariant. 
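The p-adic valuations invoked in the lemma and the remark above are easy to verify computationally via Legendre's formula; the short Python sketch below (written for this note) checks that v_p((jp)!/((j!)^p p!)) = s_p(j) − 1 for small p and j.

```python
def vp_factorial(n, p):
    """p-adic valuation of n! via Legendre's formula."""
    v, q = 0, p
    while q <= n:
        v += n // q
        q *= p
    return v

def digit_sum(n, p):
    """Sum s_p(n) of the p-adic digits of n."""
    s = 0
    while n:
        s, n = s + n % p, n // p
    return s

# check: v_p( (jp)! / ((j!)^p * p!) ) == s_p(j) - 1, as used in the lemma and the remark
for p in (2, 3, 5):
    for j in range(1, 40):
        v = vp_factorial(j * p, p) - p * vp_factorial(j, p) - vp_factorial(p, p)
        assert v == digit_sum(j, p) - 1
```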
The span D_s(V) of the (divided power) monomials ∏_iy_i^(t_i), 0≤ t_i<p^s is a (V)-stable graded subalgebra of D(V) of dimension p^sm. It can be characterised as the distribution algebra or hyper algebra of the s-th Frobenius kernel V_a,s of the additive group scheme V_a, see <cit.>. We denote the graded pieces of degree r of D(V) and D_s(V) by D^r(V) and D_s^r(V). Let B=⊕_r≥2D^r_1(V). Then B is stable under multiplication and under the γ_i, i≥1. Clearly, B is stable under multiplication. Note that ∑_ia_ip^i∑_ib_ip^i is nonzero mod p if 0≤ b_i≤ a_i<p for all i by Lucas's Theorem, and p^i+1!/(p^i!)^p p! is nonzero mod p by Legendre's Theorem. So in view of (<ref>) and (<ref>) it is enough to show that B is stable under γ_p. Clearly B is stable under the γ_i, 1≤ i<p, so by (<ref>) it is enough to show that γ_p(u)=0 for any divided power monomial u in the y_i of degree j with 2≤ j<p. If u involves at least two variables, then this follows immediately from (<ref>). So assume u=y_i^(j) for some i. Then γ_p(u)= (jp)!/(j!)^p p! y_i^(jp) by (<ref>). By Legendre's Theorem the p-adic valuation of (jp)!/(j!)^p p! is jp-s_p(jp)/p-1 - (p(j-s_p(j))/p-1+1), where s_p(j) denotes the sum of the p-adic digits of j. Now s_p(jp)=s_p(j)=j, since j<p, so this p-adic valuation equals j-1 which is ≥ 1. So (jp)!/(j!)^p p!=0 p. In general the p-adic valuation from the proof of Lemma <ref> equals s_p(j)-1. So in general one can prove that B=⊕_r>p^s-1D^r_s(V) is stable under multiplication and under all divided powers γ_i, i≥ 1. Indeed if u=y_i^(j) for some j with p^s-1<j<p^s, then we have that s_p(j)≥ 2, so γ_p(x)=0. §.§ Truncated coordinate rings Define the ideal I_s of the coordinate ring A=A(V)=k[V]=S(V^*) of V by I_s=(f^p | f∈ V^*)=(x_i^p^s | 1≤ i≤ m), where (x_1,…,x_m) is the dual basis of (y_1,…,y_m). Put A_s=A_s(V)=k[V]/I_s. We call A_s(V) the s-th truncated coordinate ring of V. It is a commutative graded algebra of dimension p^sm and can be characterised as the coordinate ring of the aforementioned infinitesimal group scheme V_a,s. We denote the graded pieces of degree r of A(V) and A_s(V) by A^r(V) and A_s^r(V). There is a (V)-equivariant isomorphism of graded Hopf algebras D_s(V)≅ A_s(V)^*. It maps ∏_i=1^my_i^(t_i), 0≤ t_i<p^s, to the dual basis element of ∏_i=1^mx_i^t_i. Put A_s^r=A_s^r(V). The top degree of A_s is N=(p^s-1)m and A_s(N)=k∏_i=1^mx_i^p^s-1 is 1-dimensional. Since (V) acts through ^1-p on A_s(N), the multiplication defines an (V)-invariant pairing A_s^r× (A_s^N-r^p-1)→ k. This pairing is nondegenerate, so we obtain isomorphisms (A_s^r)^*≅ A_s^N-r^p-1 and D_s(V)≅ A_s(V)^*≅ A_s(V)^p-1 of (V)-modules. §.§ The polarisation map and Z-forms The polarisation map P : S^r(V^*)→ (S^rV)^* in degree r sends f∈ S^r(V^*) to the the multi-homogeneous component of degree (1,…,1) of the r-variable polynomial function (v_1,…,v_r)↦ f(v_1+⋯+v_r). Let rV denote the direct sum of r copies of V, let F:rV→ k be r-linear, and let f=(v↦ F(v,…,v))∈ S^r(V^*), then P(f) = ((v_1,…,v_r)↦∑_σ∈ S_rF(v_σ(1),…,v_σ(1))) , where S_r denotes the symmetric group or rank r. We extend P to a linear map from k[V]=S(V^*) to the graded dual S(V)^* gr of S(V) and this is an algebra homomorphism. The multiplication on this graded dual comes from the comultiplication on S(V), see <cit.>. Inside S(V^*_ Q) we have the “divided power Z-form" D(V_ Z^*). The polarisation map over Q maps this Z-form onto the standard Z-form of the graded dual of S(V_ Q). Reducing mod p we obtain an isomorphism from D=D(V^*) to S(V)^* gr. 
We now identify these two. Then D^r=D^r(V^*)=S^r(V)^*=((V^⊗ r)^*)^S_r: the space of symmetric r-linear functions rV→ k, and D_s^r=D_s^r(V^*) consists of the symmetric r-linear functions that vanish when p^s arguments are the same. Furthermore the polarisation map over k can now be identified with the map k[V]=S(V^*)→ D(V^*) given by inclusion of Z-forms. We note that for a symmetric r-linear function rV→ k to vanish when p^s arguments are the same it is enough to check that this holds for r-tuples of basis vectors. This follows from the fact that the S_p^s-stabiliser of a nonconstant map {1,…,p^s}→{1,…,t}, t any integer ≥2, is a proper Young subgroup of S_p^s, so the orbit of such a map has size divisible by p. We now return to the polarisation map in characteristic p. It follows easily from the definition that P has image D_1 and kernel I_1, so it induces a (V)-equivariant isomorphism A_1=A_1(V)∼→D_1(V^*)=D_1 of graded algebras. §.§ Adjoint invariants and symmetric functions From now on until the end of Section <ref> we specialise V to =_n=(k^n) with G=_n acting by conjugation. So D=D(^*) and A=A(). The symbol V may now denote another vector space. We work with the bases (E_ij)_1≤ i,j≤ n of with dual basis (x_ij)_1≤ i,j≤ n of ^*, where E_ij is the elementary matrix which is 1 in row i and column j and 0 elsewhere. Note that the trace form on is nondegenerate and gives an isomorphism ∼→^* of G-modules which maps E_ij to x_ji. Note also that the G-action factors through the ()-action, so we have isomorphisms of G-modules D_s^r≅ (A_s^r)^*≅ A_s^N-r, where N=(p^s-1)n^2 is the top degree. The G-invariants in D^r=((^⊗ r)^*)^S_r are the S_r-invariants in the space of G-invariants of (^⊗ r)^*. By “Schur-Weyl duality" <cit.>, the space of G-invariants of (^⊗ r)^* can be described as the image of the group algebra kS_r of the symmetric group S_r under the S_r-equivariant linear map π↦ f_π , where f_π(X_1,…,X_r)=∏_i=1^r(X_σ_i), π=σ_1⋯σ_s is the disjoint cycle form of π (including 1-cycles), (X_σ) def=(X_i_1⋯ X_i_t) for any cycle σ=(i_1,…,i_t), and the S_r-action on kS_r is by conjugation. This map is injective when r≤ n. If we work with ^⊗ r instead of the isomorphic module (^⊗ r)^*, then the map is given by π↦ E_π, where E_π=∑_i∈{1,…,n}^r⊗_l=1^rE_i_π(l)i_l.[Identifying ^⊗ r with ((k^n)^⊗ r), the action of S_r on tensor space is given by π↦ E_π^-1.] We make some observations about symmetric functions. For the basics we refer to <cit.>. For an integer i≥1 and X∈_n we define e_i(X)=(∧^iX), h_i(X)=(S^iX) and p_i(X)=(X^i). Clearly, the e_i,h_i,p_i can be considered as elements of k[] and therefore also as elements of D(), see Section <ref>. For a partition λ of r we define e_λ to be the product of the e_λ_i and we define h_λ and p_λ in the same way. Via the Chevalley Restriction Theorem (CRT) we can identify these functions with the equally named symmetric functions. Writing λ in the form λ=1^m_12^m_2⋯ we define z_λ=∏_i≥1i^m_im_i! and u_λ=∏_i≥1m_i!. Recall that z_λ is the order of the centraliser in S_r of a permutation of cycle type λ. We will call 1/z_λp_λ, 1/u_λh_λ and 1/u_λe_λ divided p_λ,h_λ and e_λ. For the divided e_λ's and h_λ's, λ a partition of r, it is clear that they can be considered as elements of (D^r)^G. We claim that the same is true for the divided p_λ's and that, for n≥ r, these three families form three bases of (D^r)^G. For the first claim we work over Q. By <cit.> the Z-span of the p_λ's is the same as that of the u_λ m_λ's, where the m_λ's are the monomial symmetric functions. 
Taking the “dual" lattices, i.e. everything that is integral on the lattice via the canonical form, we obtain that the Z-span of the divided p_λ's is the same as that of the divided h_λ's, see <cit.>. Applying the involution ω we see that Z-span of the divided p_λ's is also the same as that of the divided e_λ's, see <cit.>. To prove the second claim we return to the above S_r-equivariant linear map from kS_r onto the G-invariant multilinear functions of r matrices. It is injective when n≥ r. So in this case (D^r)^G is simply the image of the centre (kS_r)^S_r of kS_r. If π∈ S_r has cycle type λ, then p_λ=(X↦ f_π(X,…,X)), so, as an element of S^r(_ Q)^* via the polarisation map P, it is ∑_σ∈ S_rf_σπσ^-1. Therefore the sum of the conjugacy class [π] is mapped to divided p_λ. So the divided p_λ's form a basis and therefore the divided e_λ's and h_λ's as well. Take p=2. Put u=Divided p_21+divided p_1^3 = 1/2p_2p_1+1/6p_1^3∈ D(_ Z^*). Then u=(X↦1/2(X^2)(X)+1/6(X)^3) corresponds to the symmetric 3-linear function (X,Y,Z) ↦(XY)(Z) + (XZ)(Y) + (YZ)(X) + (X)(Y)(Z). In characteristic p, this function vanishes when 2 arguments are the same, so the reduction mod p of u belongs to D_1. When n=2 this function is nonzero (take e.g. X=E_12, Y=E_21, Z=E_11), but is zero on triples of diagonal 2×2-matrices. The same is true for any symmetric r-linear function r_2→ k, r>2, which vanishes when 2 arguments are the same. Similarly, Divided p_3 = 1/3p_3=((X,Y,Z) ↦(XYZ) + (YXZ)) vanishes in characteristic 2 when 2 arguments are the same. This function is clearly nonzero for n≥2, but is zero on triples of diagonal matrices for all n≥1. Note that e_4=e_2^(2) on the diagonal matrices for p=2 and any n≥4, but not on the n× n matrices. I think in general e_2^(p) is a restricted (i.e. coeffs <p) polynomial in the e_i which should be independent of n for n sufficiently big. For example, for p=3, we have e_2^(3) = -e_3^2+ e_2e_4 - e_1e_5 - e_6 for n≥ 6. §.§ The restriction properties Recall that for a -module V the subspace of -invariants in V is defined by V^={v∈ V | x· v=0 for all x∈}. If V is a commutative k-algebra on which acts by derivations, for example the differentiated action of an action of G by automorphisms, then any p-th power is a -invariant. We will occasionally indicate the dependence of our algebras A_s and D_s on the rank n with an extra left subscript n. The embedding X↦([ X 0; 0 0 ]):_n-1↪_n induces a _n-1-equivariant surjections _nA↠_n-1A and _nA_s↠_n-1A_s and therefore restriction maps (_nA_s)^_n → (_n-1A_s)^_n-1. (_nA_s)^_n → (_n-1A_s)^_n-1. We say that the algebras (_nA_s)_n≥1 have the restriction property for the group if the above maps (<ref>) are surjective for all n≥2. The infinitesimal restriction property, or restriction property for the Lie algebra, can be defined analogously using the maps (<ref>) and one can define similar restriction maps for the algebras _nA=k[_n], _nD and _nD_s. As is well known, e_1,…,e_n are algebraically independent and generate A^G. Clearly e_i for _n restricts to e_i for _n-1, so the algebras _nA, n≥1, have the restriction property for the group. Furthermore, by the Veldkamp theorem for A, A is generated by A^p and A^G, see <cit.> and the references there. So the algebras _nA, n≥1, also have the infinitesimal restriction property. 1. Although the S_r-invariants of kS_r, i.e. 
the centre of kS_r, do in general (r>n) not surject onto the S_r-invariants in ((^⊗ r)^*)^G, it seems that this image does contain the symmetric G-invariant multilinear functions of r matrices which vanish when p arguments are the same. The first statement is equivalent to the statement that the algebras _nD don't have the restriction property, see Remark <ref>.4. The second statement is implied by Conjecture <ref>. 2. From our discussion of (D^r)^G we get an isomorphism from (kS_r)^S_r to the projective limit lim_⟵n(S^r(_n)^*)^_n. This map is a characteristic p version the “characteristic map" from <cit.>. 3. The map f↦(X↦ f(X-I)):k[]→ k[G], I the identity matrix, induces G-equivariant filtration preserving algebra isomorphisms A_s∼→ k[G_s], s≥1. Here the filtrations are given by the powers of the maximal ideals of 0 resp. I. Taking duals we obtain G-equivariant filtration preserving coalgebra isomorphisms D_s≅ D_s()∼→(G_s), s≥1, where (G_s) is the distribution or hyper algebra of the s-th Frobenius kernel G_s of G. These fit together to give a G-equivariant filtration preserving coalgebra isomorphism D≅ D()∼→(G) of which the associated graded is a G-equivariant isomorphism of Hopf algebras. All this holds in much bigger generality, see <cit.>. We note that (G_1) is isomorphic to the restricted enveloping algebra U^[p]() of . 4. The Schur algebra S(n,r) is isomorphic to D^r as G× G-module, so the centre of S(n,r) is isomorphic to (D^r)^G as vector spaces. Computer calculations suggest that (D^r)^G has dimension equal to the number of partitions of r of length ≤ n, independent of p, and that a spanning set can be obtained by dividing each h_λ, λ a partition of r, by the biggest possible integer in the D(_n, Z^*) and then reducing mod p. Example. The polynomial (e_1e_2^2)/4 is for n=2 in the divided power Z-form, but not for n≥3. 5. Since ≅^* we have A()≅ A(^*). So there is a pairing between D^r and A^r for which D_s^r is the orthogonal of I_s and the subspace of S(n,r) corresponding to D_s^r is a two-sided ideal of S(n,r), since I_s is a subcoalgebra of A^r (for the co-matrix multiplication). This is no surprise, since this orthogonal is stable under the G× G action and stability under the left/right multiplication action of the Schur algebra is the same as stability under the left/right multiplication action of G. The whole ideal of k[_n] generated by the p^s-th powers is a coideal (for the co-matrix addition), so its orthogonal is a subalgebra of the graded dual of S(). § THE ALGEBRAS D_S AND D_S() §.§ Group invariants Call a partition s-reduced if it has <p^s ones. To any partition we can associate an s-reduced partition by repeatedly replacing p^s occurrences of 1 by p^s-1 occurrences of p. We will call two partitions s-equivalent if their associated s-reduced partitions are the same. Call two elements of the symmetric group S_r s-equivalent if their cycle types are s-equivalent. Recall the definition of E_π, π∈ S_r, from Section <ref>. The sums of the E_π over the s-equivalence classes occur in D_s()^G, and when n≥ r they form a basis of D_s^r()^G. As we have seen in Section <ref>, the E_π span the G-invariants in ^⊗ r and they form a basis when n≥ r. So if n≥ r, then the sums of the E_π over the conjugacy classes form a basis of D^r()^G=(^⊗ r)^G× S_r. The subspace D_s^r() consists of those elements u of D^r() for which (x_i_1 j_1⊗⋯⊗ x_i_r j_r)(u)=0 for all i,j∈{1,…,n}^r such that (i_l j_l)_l∈{1,…,r} has at least p^s repetitions. 
First we observe that (x_i_1 j_1⊗⋯⊗ x_i_r j_r)(E_π)=1 if j=i∘π and 0 otherwise. So, if we put E_S=∑_σ∈ SE_σ for S⊆ S_r, then (x_i_1 j_1⊗⋯⊗ x_i_r j_r)(E_S)=|{σ∈ S | j=i∘σ}| mod p. We will now show the following: Let Λ⊆{1,…,r} be a set of p^s indices and let i,j∈{1,…,n}^r such that (i_l,j_l) is constant for l∈Λ. We extend the permutations in (Λ) to {1,…,r} by letting them fix the elements outside Λ. Let π∈ S_r. * If j i∘π or the centraliser C_(Λ)(π) of π in (Λ) does not contain a p^s-cycle, then (x_i_1 j_1⊗⋯⊗ x_i_r j_r)(E_(Λ)·π)=0. * If j = i∘π and C_(Λ)(π) contains a p^s-cycle, then Λ is π-stable, and (x_i_1 j_1⊗⋯⊗ x_i_r j_r)(E_(Λ)·π) is 1 if π|_Λ=𝕀, -1 if π|_Λ is a product of p^s-1 disjoint p-cycles, and 0 otherwise.[The reader may want to check that the centraliser of a product of s disjoint t-cycles always contains an st-cycle.] Let Ω be the set of permutations π with j=i∘π. Note that Ω is C_S_r(i)× C_S_r(j)-stable, so (Λ) acts on Ω by conjugation. (i). If j i∘π, then j i∘ρ for all ρ∈(Λ)·π. Therefore we have (x_i_1 j_1⊗⋯⊗ x_i_r j_r)(E_(Λ)·π)=0. Now assume that j=i∘π. Then (Λ)·π⊆Ω and (x_i_1 j_1⊗⋯⊗ x_i_r j_r)(E_(Λ)·π)=|(Λ)·π| p. So it suffices to show that (Λ)·π has size divisible by p. Now also assume that C_(Λ)(π) does not contain a p^s-cycle. Then the same holds for C_(Λ)(ρ) for all ρ∈(Λ)·π. Now let σ∈(Λ) be any p^s-cycle. Then σ is a p-group and all σ-orbits on (Λ)·π have size divisible by p. So (Λ)·π has size divisible by p. (ii). Since j = i∘π, we have (x_i_1 j_1⊗⋯⊗ x_i_r j_r)(E_(Λ)·π)=|(Λ)·π| p, as we have seen in the proof of (i). Let σ∈ C_(Λ)(π) be a p^s-cycle. Then Λ is π-stable, since π commutes with σ. So Λ is a union of π-orbits. These orbits are permuted transitively by σ. So they all have the same size, p^t say, t∈{0,…,s}. We have |(Λ)·π| = |(Λ)·(π|_Λ)| = p^s!/(p^t)^p^s-tp^s-t!, see <cit.>. If we apply the p-adic valuation to this we get by Legendre's Theorem p^s-1/p-1 - ( tp^s-t+p^s-t-1/p-1) . If t=0, then π|_Λ=𝕀 and |(Λ)·π|=1. Now assume t=1. Then π|_Λ is a product of p^s-1 disjoint p-cycles. Clearly, |(Λ)·π| is nonzero mod p (the p-adic valuation is zero), so we may assume that p>2. For each a∈{1,…,p-1} we count how often a p-power multiple of a number with remainder a mod p occurs in the list p^s,p^s-1,…,p^s-1+1 of factors of p^s!/p^s-1!. It occurs as a+bp for b=p^s-2,…,p^s-1-1, as ap+bp^2 for b=p^s-3,…,p^s-2-1,…, as ap^s-2+bp^s-1 for b=1,…,p-1 and finally as ap^s-1 for a>1 and as p^s for a=1. That is in total (p^s-1-p^s-2) + (p^s-2-p^s-3) +⋯+ (p-1) + 1 = p^s-1 times. The product of the nonzero numbers in the prime field is -1. So |(Λ)·π| = (-1)^p^s-1= -1 p. Finally assume that t≥ 2. Then we have to show that p^s-1/p-1 > tp^s-t+p^s-t-1/p-1, i.e. p^s> tp^s-t+1- tp^s-t+ p^s-t, i.e. that p^t>tp-t+1. This we do by induction on t. For t=2 this follows from the fact that p>2-1/p. Now assume it holds for t. Then we have p≥ 2>1+1/p^t-1-1/p^t. So p^t+1 > p^t+p-1 > tp-t+1+p-1 = (t+1)p-(t+1)+1. So (Λ)·π has size divisible by p. How to find an st-cycle in the centraliser of a product π=τ_1...τ_s of s disjoint t-cycles for s,t≥2. Note that the τ_j are in the centraliser of π. Let A=a_1,...,a_st be the set of non-fixed points of π. Then A is the disjoint union of the s t-subsets A_1,...,A_s, where A_j is the set of non-fixed points of τ_j. After renumbering the a_i we may assume that τ_1=(a_1,..,a_t), a_2=(a_t+1,..,a_2t) etc Let σ_i, i∈1,...,s be the s-cycle which maps a_(j-1)t+i∈A_j to a_(jt+i∈A_j+1, j<s and a_(s-1)t+i∈A_s to a_i∈A_1. 
Then the σ_i are mutually disjoint, are in the centraliser of πand τ_1σ_1σ_2...σ_s is an st-cycle in the centraliser of π. So for i,j and Λ as in the lemma, the (Λ)-orbits S for which the value (x_i_1 j_1⊗⋯⊗ x_i_r j_r)(E_S) is nonzero, leave Λ stable and come in “associated pairs": one has cycle structure 1^p^s on Λ and value 1, the other has cycle structure p^p^s-1 on Λ and value -1. When T is an s-equivalence class, then E_T can be written as a sum of certain E_S, S a (Λ)-orbit and with any such orbit which has nonzero value the associated orbit is also present, so (x_i_1 j_1⊗⋯⊗ x_i_r j_r)(E_T)=0. It follows that E_T∈ D_s(). Now assume that n≥ r. Let Λ⊆{1,…,r} be a set of p^s indices, assume π∈ S_r stabilises Λ, π|_Λ is a product of p^s-1 disjoint p-cycles and π'∈ S_r is the identity on Λ and equal to π outside Λ. Denote the S_r-conjugacy class of σ∈ S_r by [σ]. Note that [π][π']. Recall from our discussion in Section <ref> that the E_[σ] form a basis of D^r()^G. To prove the theorem it is enough to show that for any Λ, π and π' as above, and any u∈ D_s^r()^G, E_[π] and E_[π'] occur with the same coefficient in u. Define i ∈{1,…,n}^r by i_l=l for l∈{1,…,r}Λ and i_l=min(Λ) for l∈Λ. Put j=i∘π=i∘π'. By our definition of i and j, j=i∘σ implies σ=π outside Λ. So the (Λ)-orbits of π and π' form the only associated pair (relative to i,j and Λ) and the only (Λ)-orbit S in [π] resp [π'] for which E_S has nonzero value is that of π resp. π'. So for u∈ (D^r)^G, written as a linear combination of the E_[σ], (x_i_1 j_1⊗⋯⊗ x_i_r j_r)(u) equals the coefficient of E_[π'] minus the coefficient of E_[π]. This ends the proof of the theorem. * The sums of the divided p_λ's over the s-equivalence classes of the partitions of r occur in (D_s^r)^G, and when n≥ r they form a basis of (D_s^r)^G. * The divided h_λ's and the divided e_λ's, both with λ=1^m_12^m_2⋯ such that m_1<p^s, occur in (D_s^r)^G, and when n≥ r they form two bases of (D_s^r)^G. (i). This is just a reformulation of Theorem <ref>, where we now work in the divided power algebra D of ^* rather than . As we have seen in Section <ref> divided p_λ corresponds to the sum of the E_π over the conjugacy class labelled by λ. (ii). Since these two families are independent, see Section <ref>, and have the same cardinality as the basis from part (i), it is enough to show that they lie in D_s. Recall that D_s is spanned by the divided power monomials in the x_ij's with exponents <p^s. Both the divided h_λ's and the divided e_λ's are products of divided powers with exponent < p^s of e_1=h_1 and divided powers of elements in the span B⊆ D_1 of the divided power monomials in the x_ij's of degree ≥ 2 and with exponents <p. Using (<ref>) it follows that γ_i(e_1)∈ D_s for all i<p^s. So it is enough to show that B is stable under all divided powers γ_i, i≥ 1. This follows from Lemma <ref>. The monomials ∏_i=1^ne_i^(m_i), m_1<p^s, occur in D_s^G. Furthermore, for r≤ n, those with ∑_i=1^nim_i=r form a basis of (D_s^r)^G. This is just a reformulation of the statement about the e_λ's in Theorem <ref>. 1. Let A^G denote the image of A^G in A_1=D_1. By the Veldkamp theorem for k[], see Section <ref>, A^G is also the image of A^ in A_1. Furthermore, by <cit.> it has the monomials in the e_i with exponents <p as a basis. From Corollary 1 it is clear that when n≥2p the first degree where a “new" invariant (i.e. not in A^G) shows up in A_1 is 2p. Indeed (A_1^2p)^G is the direct sum of the image of (A^2p)^G and ke_2^(p). 
In the introduction of <cit.> it is pointed out that A_1^ modulo A^G is isomorphic to H^1(G_1,I_1), where I_1 is the ideal from Section <ref>. We note that conjecturally A_1^ and A_1^G are the same, see the remarks after Conjecture <ref>. 2. For R a commutative ring, put A_1,R=R[(x_ij)_1≤ i,j≤ n]/(x_ij^p | 1≤ i,j≤ n). We define φ_p:A_1, Z^+→ A_1, Z, A_1, Z^+ the truncated polynomials without constant term, by φ_p(u)=u^p/p. Then φ_p descends to a map φ_p:A_1, F_p^+→ A_1, F_p. We have A_1, F_p=D_1, F_p, and when u∈ A_1, F_p^+ has no linear or constant term, then φ_p(u) can also be computed in the divided power algebra D_ Z by the same formula. Let u∈ D_ Z be a lift of u (without linear or constant term), let m≥0 be an integer, let m=∑_i=0^ta_ip^i be the p-adic expansion of m and write m! = qp^ν_p(m!), where p does not divide q. By Legendre's Theorem we have ν_p(m!)=∑_i=1^ta_ip^i - 1/p - 1=∑_i=1^ta_iν_p(p^i!). So u^(m)=1/q∏_i=1^t( u^p^i/p^ν_p(p^i!))^a_i =1/q∏_i=1^t(φ_p^i( u))^a_i and therefore u^(m)=1/q∏_i=1^t(φ_p^i(u))^a_i . In particular, any divided power monomial ∏_i=1^ne_i^(m_i) with m_1<p can be expressed as a monomial in e_1,…,e_n together with the iterates of φ_p on e_2,…,e_n. §.§ Infinitesimal invariants Let V=k^n be the natural module for G, let r,t≥1 with n≥ r,t, and put W=rV⊕ tV^*. For i∈{1,…,r} and j∈{1,…,t} let x_i:W→ V and y_j:W→ V^* be the i-th vector component and j-th covector component function and x_i,y_j=((v,w)↦ w_j(v_i))∈ k[W]^G be the bracket function. * The monomials in the x_i,y_j with exponents <p form a basis of k[W]^ over k[W]^p. * ((V^⊗ r(V^*)^⊗ t)^*)^=((V^⊗ r(V^*)^⊗ t)^*)^G. (i). We will verify the hypotheses of <cit.>. Using the notation in <cit.> we have that _(W)=-min_x∈ W_x=n^2-(n-r)(n-t)=(r+t)n-rt, since n≥ r,t, and (W)-_(W)=rt. Let U⊆ W be the set of points (v,w)∈ W where the differentials d_(v,w) x_i,y_j are linearly dependent. We have d_(v,w) x_i,y_j = ((z,u) ↦ v_i,u_j+ z_i,w_j) = f_j(v_i)+g_i(w_j)∈ W^*=rV^*⊕ tV, where f_j embeds V in the (r+j)-th position in W^* and g_i embeds V^* in the i-th position of W^*. It is now easy to check that the differentials of the x_i,y_j at (v,w) will be independent if v∈ rV is independent or if w∈ tV^* is independent. Since n≥ r,t we can indeed choose v and w like this, so we obtain that (W U)≥ 2. (ii). This follows from (i), since ((V^⊗ r(V^*)^⊗ t)^*)^ consists of the multilinear functions in k[W]^, so the p-th powers cannot be involved. Assume r≤ n and put N=(p^s-1)n^2. Then (D^r)^=(D^r)^G and (A_s^N-r)^=(A_s^N-r)^G for r≤ n. Since A_s^N-r≅ D_s^r as G-modules and D_s^r is a G-submodule of D^r, it is enough to prove the first assertion. Put V=k^n. Since D_s^r⊆ D^r⊆(^⊗ r)^*≅ (V^⊗ r(V^*)^⊗ r)^* it is enough to show that ((V^⊗ r(V^*)^⊗ r)^*)^ equals ((V^⊗ r(V^*)^⊗ r)^*)^G for r≤ n which follows from Lemma <ref>(ii). One can form the divided power algebra of a vector space V=k_ ZV_ Z where V_ Z is any free Z-module. If (x_i)_i∈ I is a basis of V_ Z one just has to work with monomials ∏_i∈ Ix_i^(m_i) with all but finitely many m_i zero. For a family of variables (x_i)_i∈ I we put D((x_i)_i∈ I)=D(k_ ZV_ Z) and D_s((x_i)_i∈ I)=D_s(k_ ZV_ Z) where V_ Z is the free Z-module on (x_i)_i∈ I. lim_⟵n(_nD_s)^_n=lim_⟵n(_nD_s)^_n=D_s(e_1) D((e_i)_i≥2), where D((e_i)_i≥2) is graded such that e_i^(m) has degree mi, and the limit is in the category of graded k-algebras. This follows from Proposition <ref> and Corollary 1 to Theorem <ref>. 1. The conclusion of Lemma <ref> does not hold when n<r or n<t. 
For example, if we have n=1, r≥2,t≥1 then x_1^hx_2^p-h, 1<h<p is a -invariant, but it doesn't occur in the k[X]^p-algebra generated by the x_iy_j. 2. I checked with the computer that (D^r)^ > (D^r)^G when p=2, n=2 or n=3, and r=n+1. In the first case I got 8>5, in the second case 31>23. When p=3, n=2, and r=5 I got 45>42. For p = 2, n = 2, r = 3 one can easily describe a -invariant in D^r()=(_n^⊗ r)^S_r which is not a G-invariant. One can take the sum of the 3 S_3-conjugates of (E_11+E_22) E_12 E_12, i.e. (E_11+E_22)E_12^(2). 3. Take n=2. Let H be the group of diagonal matrices in G and let be its Lie algebra. It is easy to check that the nonzero H-weights in A_1 are also nonzero for . So the H-action on A_1^ is trivial. Of course the same holds for all G-conjugates of H. From the density of the semisimple elements in H it now follows that A_1^=A_1^G. This argument was mentioned to me by S. Donkin. It is not difficult to show that (A_1^G)=3p^2-p/2 and that e_1=, e_2= and e_2^(2) generate A_1^G by reducing to the _2-case when p>2. One can see that dim (S^r(g)^*)^g > dim (S^r(g)^*)^G in general as follows: You use the following three facts: 1) f↦f^p embeds A_s into A_s+1. 2) A_s^r is dual to A_s^N-r for N=(p^s-1)n^2 (top degree). 3) S^r() surjects onto A_s^r, so (A_s^r)^* embeds in S^r(g)^*. Starting with a non G-invariant in A_1^r we apply 1) to obtain a -invariant in A_2^rp which is not G-invariant. Then we put it into an S^r(g)^* using 2) and 3). §.§ The restriction property Recall that there is a G-equivariant isomorphism D_1≅ A_1 of graded algebras. The algebras (_nA_1)_n≥1 have the infinitesimal restriction property. If this conjecture holds, then A_1^=A_1^G by Proposition <ref> and the monomials ∏_i=1^ne_i^(m_i), m_1<p, span A_1^ by Corollary 1 to Theorem <ref>. The point is that the restriction property allows us to reduce to the situation that n is ≥ the degree r. Conversely, if these monomials span A_1^, then A_1^=A_1^G and the algebras (_nA_1)_n≥1 have the infinitesimal restriction property. Note that by Remark <ref>.3 A_1^=A_1^G implies that the centre U^[p]()^ of U^[p]() is contained in the centre (G)^G of (G), see <cit.>. 1. We consider the surjectivity of the map (_NA_1)^_N→ (_nA_1)^_n, N>n. By Remark <ref>.3 it is surjective for n=2, since the generators there lift to any (_NA_1)^_N. I also checked that it is surjective for n=3 and p=2,3,5, n=4 and p=2, 3 (up to degree 8), n=5 and p=2 (up to degree 7), p=3 (up to degree 6). This was done by checking in each of these cases that the monomials from Corollary 1 to Theorem <ref>, span (_nA_1^r)^_n=(_nD_1^r)^_n. 2. We consider the conjecture A_1^=A_1^G. By Remark <ref>.3 it holds for n=2. I checked it with the computer for n=3 and p=2,3,5, n=4 and p=2, 3 (up to degree 7) and 5 (up to degree 6), n=5 and p=2 (up to degree 5), 3 (up to degree 5). 3. The algebras (_nA_s)_n≥1, s≥ 2, don't have the restriction property for the group or Lie algebra. I checked this for the restriction _3A_2^10→_2A_2^10 when p=2: (_2A_2^10)^_2 is spanned by x_11^2 x_12^3 x_21^3 x_22^2+x_11^3 x_12^2 x_21^2 x_22^3 and x_11^3 x_12^3 x_21^3 x_22+x_11^2 x_12^3 x_21^3 x_22^2+x_11 x_12^3 x_21^3 x_22^3, but the image in _2A_2 of (_3A_2^10)^_3, _3 the upper triangular matrices in _3, is spanned by the first element. 4. The algebras (_nD_s)_n≥1, s≥2, and (_nD)_n≥1 don't have the restriction property for the group or Lie algebra. 
By Proposition <ref> and Theorem <ref>(i) it is enough to check that the dimension of the span of the sums of the divided p_λ's is <(_nD_s^r)^G. First we consider the case n=2. For r=5, p=2 I got 1<2 for s=2 and 2<3 for s≥3, for r=8, p=3 I got 4<5 for s≥2, and for r=14, p=5 I got 7<8 for s≥2. In the case n=3, r=6, p=2 I got 4<5 for s=2 and 6<7 for s≥3. §.§ Dimensions of some of the A^r_s We give some dimensions that we calculated using a computer program. For n=2 the dimensions of the A^r_s were always given as the coefficients of the polynomial 1-T^p^s/1-T×1-T^3(p^s-1)+2/1-T^2∈ Z[T] which we calculate as 1-T^p^s/1-T^2×1-T^3(p^s-1)+2/1-T for p=2. The total dimension was always p^2s+p^s(p^s-1)/2. We checked the cases s=2, p=2,3,5 and s=3,p=2. For the case s=1, see Remark <ref>.3. In the table below we give dimensions for n≥3. Let A^G denote the image of A^G in A_s. The first row gives the dimensions of the (A^r_s)^G, the second row gives the dimensions of the graded pieces of A^G, and the third row, if it exists, gives the dimensions of the (A^r_s)^. If the dimensions can be computed in all degrees, then the single number to the right gives the total dimension. § SEVERAL MATRICES In this section we study the invariants in the algebras D_s(m^*), where m denotes the direct sum of m copies of . §.§ Conjugacy classes for the conjugation action of S_α on S_r We recall some notation and results from <cit.> about conjugacy classes of a Young subgroup in S_r. For a finite sequence i=(i_1,…,i_t) of elements of {1,…,m} we define Content( i) to be the m-tuple whose j-th component is the number of occurrences of j in i. We say that sequences i and j as above are equivalent if one is a cyclic shift of the other, we denote the equivalence class of i by [ i] and we put |[ i]|=t. We will call these equivalence classes cycle patterns. Clearly, equivalent sequences have the same content, so the content function is also defined on cycle patterns. For l≥1 we define the l-th power of i by [ i]^l=[i_1,…,i_t,…,i_1,…,i_t_l copies of i] . We call a cycle pattern primitive if it is not the l-th power of another cycle pattern for some l≥2 and we denote the set of primitive cycle patterns by Φ. Let P be the set of partitions. For λ=(λ_1,λ_2,…)∈ P we put |λ|=∑_i≥1λ_i and we denote the length of λ, i.e. the number of nonzero parts of λ, by l(λ). For a function λ:Φ→ P such that all but finitely many values are the empty partition we define the content of λ to be ∑_b∈Φ|λ(b)| Content(b) and we denote the set of such functions with content α by Θ_α. Now fix a composition α=(α_1,…,α_m) of r. For i∈{1,…,m} put Δ_i={j∈ Z | ∑_l=1^i-1α_l<j≤∑_l=1^iα_l}. Define ζ:{1,…,r}→{1,…,m} by ζ(j)=i when j∈Δ_i. Let S_α be the simultaneous stabiliser of the Δ_i in S_r. Note that S_α≅ S_α_1×⋯× S_α_m. For a cycle σ=(i_1,…,i_t)∈ S_r we put [σ]=[ζ(i_1),…,ζ(i_t)]. We can associate to every π with disjoint cycle decomposition π=∏_j∈ Jσ_j the multiset of cycle patterns [σ_j] | j∈ J. This multiset is equal to b^λ(b)_i | b∈Φ,1≤ i≤ l(λ(b)) for a unique λ∈Θ_α which we call the S_α cycle type of π. Clearly π,π'∈ S_r are S_α-conjugate if and only if they have the same S_α cycle type. §.§ Partial polarisation Let α, r, the Δ_i, S_α and ζ be as in the previous section and let V be a vector space over k. The algebra S(mV)=S(V)^ m is Z^m-graded and we denote the piece of degree α by S^α(mV). We apply analogous notation to the algebras S(mV^*), D(mV) and D_s(mV). 
Note that S^α(mV)≅ S^α_1(V)⊗⋯⊗ S^α_m(V), so S^α(mV)^* can be regarded as the r-linear functions rV→ k which are symmetric in each of the sets of positions Δ_i, i.e. which are S_α-invariants. For an integer t≥0 let 1_t denotes the all-one vector of length t. The partial polarisation map P_α:S^α(mV^*)→ S^α(mV)^* sends f∈ S^α(mV^*) to the multi-homogeneous component of degree (1_α_1,…,1_α_m) of the r-variable polynomial function (v^1_1,…,v^1_α_1,…,v^m_1,…,v^m_α_m)↦ f(v^1_1+⋯+v^1_α_1,…,v^m_1+⋯+v^m_α_m) . If F:rV→ k is r-linear and f=((v_1,…,v_m)↦ F(v_ζ(1),…,v_ζ(r))), then P_α(f)=((v_1,…,v_r)↦∑_σ∈ S_αF(v_σ(1),…,v_σ(r))) . As in Section <ref> we obtain isomorphisms D^α(mV^*)≅ S^α(mV)^*. Under these isomorphisms D_s^α(mV^*) can be regarded as the r-linear functions rV→ k which are symmetric in each of the sets of positions Δ_i and which vanish when the arguments in p^s positions within a Δ_i are the same. Furthermore, these isomorphisms are compatible with the isomorphism D(mV^*)≅ S(mV)^* gr from Section <ref>. §.§ Invariants in the algebra D(m^*) We keep the notation of Section <ref>. For f∈ k[]^G and b=[i_1,…,i_t] a cycle define f_b∈ k[m]^G by f_b(x_1,…,x_m)=f(x_i_1⋯ x_i_t) . For λ∈Θ_α define p_λ=∏_b∈Φp_λ(b),b, e_λ=∏_b∈Φe_λ(b),b and h_λ=∏_b∈Φh_λ(b),b. Furthermore define u_λ=∏_b∈Φu_λ(b) and z_λ=∏_b∈Φz_λ(b), and call 1/z_λp_λ, 1/u_λh_λ and 1/u_λe_λ divided p_λ, h_λ and e_λ. As shown in <cit.> z_λ is the order of the centraliser in S_α of an element in S_r of S_α cycle type λ. Clearly the divided h_λ and e_λ can be considered as elements of D^α(m^*)^G. We will now show that the same holds for the divided p_λ and that, for n≥ r, they form three bases of D^α(m^*)^G. First we note that for b∈Φ the map f↦ f_b can be defined over Q and then it maps divided power Z-form into divided power Z-form. So for each b∈Φ, the three families (1/u_λh_λ,b)_λ∈ P, (1/u_λe_λ,b)_λ∈ P and (1/z_λp_λ,b)_λ∈ P have the same Z-span in in D((m_ Q)^*). But then the same holds for the three families (1/u_λh_λ)_λ∈Θ_α, (1/u_λe_λ)_λ∈Θ_α and (1/z_λp_λ)_λ∈Θ_α. Now let π∈ S_r be of S_α cycle type λ. Then it is easy to see that p_λ=((X_1,⋯,X_m)↦ f_π(X_ζ(1),…,X_ζ(r))), f_π as in Section <ref>. So as an element of S^α(m_ Q)^*, via the partial polarisation map P_α, it is ((X_1,⋯,X_r)↦∑_σ∈ S_αf_π(X_σ(1),…,X_σ(r)))=∑_σ∈ S_αf_σπσ^-1. So under the S_r-equivariant isomorphism π↦ f_π:kS_r→((^⊗ r)^*)^G the sum of the conjugacy class [π]_S_α corresponds to 1/z_λp_λ. So the divided p_λ, λ∈Θ_α, form a basis of D^α(m^*)^G=((^⊗ r)^*)^G× S_α, and the same must then hold for the other two families. §.§ Invariants in the algebras D_s(m^*) We keep the notation of Section <ref>. Call λ∈Θ_α s-reduced if λ([j]) has <p^s ones for all j∈{1,…,m}. To λ∈Θ_α we can associate its s-reduced form by repeatedly replacing p^s occurrences of 1 in a λ([j]) by p^s-1 occurrences of p. We will call two elements of Θ_α s-equivalent if they have the same s-reduced form. Call two elements of the symmetric group S_r (s,α)-equivalent if their S_α cycle types are s-equivalent. As in Section <ref> we can now show that the sums of the E_π over the (s,α)-equivalence classes occur in D_s(m)^G, and when n≥ r they form a basis of D_s^α(m)^G. We only need the lemma in the proof of Theorem <ref> for sets Λ that are contained in one of the Δ_i. The proof of the theorem below is completely analogous to that of Theorem <ref> and we leave this to the reader as well. 
* The sums of the divided p_λ's over the s-equivalence classes in Θ_α occur in D_s^α(m^*)^G, and when n≥ r they form a basis of D_s^α(m^*)^G. * The divided h_λ's and the divided e_λ's, both with λ∈Θ_α such that λ([j]) has <p^s ones for all j∈{1,…,m}, occur in D_s^α(m^*)^G, and when n≥ r they form two bases of D_s^α(m^*)^G. The monomials ∏_1≤ i≤ n,b∈Φe_i,b^(m_i,b), m_1,[j]<p^s for j∈{1,…,m}, occur in D^r_s(m^*)^G. Furthermore, for r≤ n, those with ∑_1≤ i≤ n,b∈Φm_i,b|b|=r form a basis of D^r_s(m^*)^G. Given that D^r_s(m^*) is the direct sum of the D_s^α(m^*), α∈ Z^m a composition of r, this is just a reformulation of the statement about the e_λ's in Theorem <ref>. Assume r≤ n. Then D^r_s(m^*)^=D^r_s(m^*)^G. For α a composition of r we have D_s^α(m^*) is a G-submodule of (^⊗ r)^*, so this follows as in the proof of Proposition <ref>. lim_⟵n(D_s(m_n^*))^_n=lim_⟵n(D_s(m_n^*))^_n=D_s((e_1,[j])_1≤ j≤ m) D((e_i,b)_i or |b|≥2), where D((e_i,b)_i or |b|≥2) is graded such that e_i,b^(t) has degree ti|b|, and the limit is in the category of graded k-algebras. This follows from Proposition <ref> and Corollary 1 to Theorem <ref>. § VECTORS AND COVECTORS Let V=V_n=k^n be the natural module for G, let m_1,m_2≥0 be integers and put W=W_n=m_1V⊕ m_2V^*. In this section we study the invariants in the algebras D_s(W^*). For i∈{1,…,m_1} and j∈{1,…,m_2} let x_i:W→ V and y_j:W→ V^* be the i-th vector component and j-th covector component function and x_i,y_j=((v,w)↦ w_j(v_i))∈ k[W]^G be the bracket function. By Section <ref> these bracket functions can also be considered as elements of D(W^*)^G. The algebra S(W) is Z^m× Z^m-graded and Z× Z-graded and we denote the piece of multidegree (α^1,α^2) by S^α^1,α^2(W) and the piece of bidegree (r_1,r_2) by S^r_1,r_2(W). We apply analogous notation to the algebras S(W^*), D(W^*) and D_s(W^*). Let r_1,r_2≥0 be integers and let α^1=(α^1_1,…,α^1_m_1) and α^2=(α^2_1,…,α^2_m_2) be compositions of r_1 and r_2. As in Section <ref> we associate to these Δ^1_i, i∈{1,…,m_1}, Δ^2_j, j∈{1,…,m_2}, ζ_1:{1,…,r_1}→{1,…,m_1}, ζ_2:{1,…,r_2}→{1,…,m_2}, and S_α^1,S_α^2≤ S_r. We have a partial polarisation map P_α^1,α^2:S^α^1,α^2(W^*)→ S^α^1,α^2(W)^*=((V^⊗ r_1⊗(V^*)^⊗ r_2)^*)^S_α^1× S_α^2 . If F:r_1V⊕ r_2V^*→ k is multilinear and f equals ((v_1,…,v_m_1,w_1,…,w_m_2)↦ F(v_ζ_1(1),…,v_ζ_1(r_1),w_ζ_2(1),…,w_ζ_2(r_2))) , then P_α^1,α^2(f) equals ((v_1,…,v_r_1,w_1,…,w_r_2)↦∑_σ∈ S_α^1,τ∈ S_α^2F(v_σ(1),…,v_σ(r_1),w_τ(1),…,w_τ(r_2))) . As in Section <ref> we obtain isomorphisms D^α^1,α^2(W^*)≅ S^α^1,α^2(W)^*. Under these isomorphisms D_s^α^1,α^2(W^*) can be regarded as the multilinear functions r_1V⊕ r_2V^*→ k which are symmetric in each of the sets of vector positions Δ^1_i and in each of the sets of covector positions Δ^2_i, and which vanish when the arguments in p^s positions within a Δ^ι_i, ι∈{1,2}, are the same. Furthermore, these isomorphisms are compatible with the isomorphism D(W^*)≅ S(W)^* gr from Section <ref>. Assume now that α^1 and α^2 above are compositions of r. The group S_r× S_r acts on S_r via (σ,τ)·π=σπτ^-1. Each S_α^1× S_α^2-orbit has a unique representant π such that π is increasing on each Δ^2_j and π^-1 is increasing on each Δ^1_i. Let π∈ S_r. Put Δ^1_ij=Δ^1_i∩π(Δ^2_j) and m_ij=|Δ^1_ij| for 1≤ i≤ m_1, 1≤ j≤ m_2. Then α^1_i=∑_j=1^m_2m_ij and α^2_j=∑_i=1^m_1m_ij . For σ,τ∈ S_r we have (σ,τ)∈ S_α^1× S_α^2 and σπτ^-1=π if and only if σ∈ S_α^1∩π S_α^2π^-1 and τ=π^-1σπ. So the S_α^1× S_α^2-centraliser of π has size |S_α^1∩π S_α^2π^-1|=∏_1≤ i≤ m_1, 1≤ j≤ m_2m_ij!. 
Conversely, if we are given integers m_i,j≥0, 1≤ i≤ m_1, 1≤ j≤ m_2, which sum to r, then we can define α^1 and α^2 by (<ref>) and we can define the Δ^1_i and Δ^2_j as before. We divide each Δ^1_i into m_2 consecutive intervals Δ^1_i1,…,Δ^1_im_2 and we divide each Δ^2_j into m_1 consecutive intervals Δ^2_1j,…,Δ^2_m_1j such that Δ^1_ij and Δ^2_ij have length m_ij. We put Δ^1_ij =∑_q=1^i-1α^1_q+∑_q=1^j-1 m_iq+{l∈ Z | 1≤ l≤ m_ij} and Δ^2_ij =∑_q=1^j-1α^2_q+∑_q=1^i-1 m_qj+{l∈ Z | 1≤ l≤ m_ij} Now we define π∈ S_r by requiring that π:Δ^2_ij→Δ^1_ij is increasing. Then π is increasing on each Δ^2_j and π^-1 is increasing on each Δ^1_i. Let r_1,r_2≥0 be integers. * If r_1 r_2, then D^r_1,r_2(W^*)=0. If r_1=r_2=r, then the divided power monomials in the x_i,y_j of bidegree (r,r) occur in D_1^r_1,r_2(W^*)^G, and when n≥ r they form a basis of D^r,r(W^*)^G=D_1^r,r(W^*)^G. * If n≥ r_1,r_2, then D^r_1,r_2(W^*)^=D^r_1,r_2(W^*)^G. (i). By considering the action of the centre of G it follows that if r_1 r_2, then D^r_1,r_2(W^*)^G=0, so we assume now that r_1=r_2=r. By Lemma <ref> the given monomials occur in D_1^r,r(W^*). Denote the vector and covector component functions of rV⊕ rV^* by x_i and y_i, i∈{1,…,r}. The function f_π∈(^⊗ r)^* from Section <ref> can also be seen as an element of (V^⊗ r⊗(V^*)^⊗ r)^*. Then we have f_π=∏_i=1^r x_π(i), y_i and we see that the map π↦ f_π is S_r× S_r-equivariant. Let m_i,j≥0, 1≤ i≤ m_1, 1≤ j≤ m_2, be integers which sum to r. Define α^1 and α^2 by (<ref>) and then define Δ^1_i, Δ^2_j, ζ_1, ζ_2, S_α^1, S_α^2 as in Section <ref>, and define π as before the proposition. It is easy to see that ∏_1≤ i≤ m_1,1≤ j≤ m_2 x_i,y_j^m_ij=∏_i=1^r x_ζ_1(π(i)),y_ζ_2(i). So as an element of S^r,r(W_ Q)^*, via the partial polarisation map P_α^1,α^2, it is ∑_σ∈ S_α^1,τ∈ S_α^2∏_i=1^r x_σ(π(i)), y_τ(i)=∑_σ∈ S_α^1,τ∈ S_α^2f_σπτ^-1. So under the S_r× S_r-equivariant isomorphism π↦ f_π:kS_r→((V^⊗ r⊗(V^*)^⊗ r)^*)^G the sum of the orbit [π]_S_α^1× S_α^2 corresponds to ∏_1≤ i≤ m_1,1≤ j≤ m_2 x_i,y_j^(m_ij). So these divided power monomials form a basis of D^r,r(W^*)^G=⊕_α^1,α^2D^α^1,α^2(W^*)^G. (ii). As D^α^1,α^2(W)^*=((V^⊗ r_1⊗(V^*)^⊗ r_2)^*)^S_α^1× S_α^2, this follows from Lemma <ref>(ii). Note that we have a natural embedding V_n-1↪ V_n by adding a zero component in the n-th position, and a natural embedding V_n-1^*↪ V_n^* by extending a function f∈ V_n-1^* by sending the n-th standard basis vector to 0. This gives us a natural embedding W_n-1↪ W_n, and we get restriction maps for the algebras (k[W_n])_n≥1, (D(W_n^*))_n≥1 and (D_s(W_n^*))_n≥1. From the previous proposition we immediately obtain the following corollary, where we may omit the subscript s. lim_⟵n(D_s(W_n^*))^_n=lim_⟵n(D_s(W_n^*))^_n=D( x_i,y_j_1≤ i≤ m_1,1≤ j≤ m_2), where the grading is such that x_i,y_j^(t) has degree 2t, and the limit is in the category of graded k-algebras. 1. It is immediate from classical invariant theory, see <cit.>, that the algebras (k[W_n])_n≥1 have the restriction property. 2. Since W_n≅(m_2V_n⊕ m_1V_n^*)^*, we get restriction maps W_n→ W_n-1. From the description of ⋀(W_n)^G in <cit.> it is clear that the algebras ⋀(W_n)_n≥1 have the restriction property. This implies that when p=2, the algebras (A_1(W_n))_n≥1 have the restriction property. 3. For p=3 the algebras (D(W^*_n))_n≥1 and (D_s(W^*_n))_n≥1 don't have the restriction property. 
I checked with the computer for p=3,n=2,m_1=1,m_2=3 that D^r_1(W^*_n)=1, 0, 3, 0, 6, 0, 11, 0, 15 for r=0,…,8 and 0 for r>8, and that the dimensions of the span of the invariants from Proposition <ref> in degrees =0,…,8 are 1, 0, 3, 0, 6, 0, 10, 0, 15. In degree 6 the invariant x_1x_2(x_1y_21 - x_2y_22) (y_12y_31- y_11y_32) is outside this span, where y_ji denotes the i-th component of the j-th covector. 4. Similar to <cit.> one could try to determine the invariants in A_1(W_n)=D_1(W_n^*) by using the isomorphism A_1(W_n)≅ A_1(mV_n^*)⊗^m_1(1-p), m=m_1+m_2, of _n-modules, and then use the commuting _m-action. Let U_n≤_n be the subgroup of upper uni-triangular matrices. Then we get A_1(W_n)^_n≅ A_1(mV_n^*)^U_n_m_1(p-1) 1_n, where 1_n is the all-one vector of length n. Now one could hope that A_1(mV_n^*)^U_n_(p-1)ν≅Δ__m((p-1)ν^T), Δ__m(μ) the Weyl module of highest weight μ and ν^T the transpose of ν, at least for ν a multiple of 1_n. Indeed the analogue for the exterior algebra holds by <cit.> or <cit.>. However, in the case p=3,n=2,m_1=1,m_2=3, A_1(4V_2^*)^U_2_(2,2) is not even a quotient of some Weyl module. Indeed its socle and ascending radical series both have two layers: the first one is the irreducible L__4(2,2,0,0) of dimension 19 and the second layer is L__4(1,1,1,1)⊕ L__4(4,0,0,0) of dimension 1+16=17. The Weyl module Δ__4(4,0,0,0) has dimension 35 and the two layers of its socle and ascending radical series are L__4(2,2,0,0) and L__4(4,0,0,0). 99 AR A. M. Adamovich, G. L. Rybnikov, Tilting modules for classical groups and Howe duality in positive characteristic, Transform. Groups 1 (1996), no. 1-2, 1-34. ABW K. Akin, D. A. Buchsbaum, J. Weyman, Schur functors and Schur complexes, Adv. in Math. 44 (1982), no. 3, 207-278. Bou N. Bourbaki, Algèbre, Chaps. 1, 2 et 3, Hermann, Paris, 1970. DeCProc C. De Concini, C. Procesi, A characteristic free approach to invariant theory, Advances in Math. 21 (1976), no. 3, 330-354. Don S. Donkin, Invariant functions on matrices, Math. Proc. Cambridge Philos. Soc. 113 (1993), no. 1, 23-43. FP E. M. Friedlander and B. J. Parshall, Rational actions associated to the adjoint representation, Ann. Sci. École Norm. Sup. (4) 20 (1987), no. 2, 215-226. Hab W. J. Haboush, Central differential operators on split semisimple groups over fields of positive characteristic, Séminaire d'Algèbre Paul Dubreil et Marie-Paule Malliavin, 32ème année (Paris, 1979), pp. 35-85, Lecture Notes in Math. 795, Springer, Berlin, 1980. Jan J. C. Jantzen, Representations of algebraic groups, Pure and Applied Math., vol. 131. Academic Press, Boston, 1987. Mac I. G. Macdonald, Symmetric functions and Hall polynomials, Second edition, Oxford University Press, New York, 1995. Pr A. Premet, Special transverse slices and their enveloping algebras, Adv. Math. 170 (2002), no. 1, 1–55. PrT A. A. Premet and R. H. Tange, Zassenhaus varieties of general linear Lie algebras, J. Algebra 294 (2005), no. 1, 177-195. Skr S. Skryabin, Invariants of finite group schemes, J. London Math. Soc. (2) 65 (2002), no. 2, 339-360. T R. Tange, On the first restricted cohomology of a reductive Lie algebra and its Borel subalgebras, Ann. Inst. Fourier (Grenoble) 69 (2019), no. 3, 1295-1308.
http://arxiv.org/abs/2307.00219v1
20230701043510
Iterative conditional replacement algorithm for conditionally specified models
[ "Kun-Lin Kuo", "Yuchung J. Wang" ]
stat.CO
[ "stat.CO" ]
Iterative conditional replacement algorithm for conditionally specified models Kun-Lin Kuo Institute of Statistics, National University of Kaohsiung, Kaohsiung, Taiwan and Yuchung J. Wang (Corresponding author: [email protected]) Department of Mathematical Sciences, Rutgers University, Camden, NJ, USA
The sample-based Gibbs sampler has been the dominant method for approximating a joint distribution from a collection of compatible full-conditional distributions. However, for conditionally specified models, mixtures of incompatible full and non-full conditional distributions are the reality, and their updating orders are hard to identify. We propose a new algorithm, the Iterative Conditional Replacement (ICR), that produces distributional approximations toward the stationary distributions, dispensing with Markov chains entirely. ICR always converges, and it produces mutually stationary distributions, which will be consistent among one another when the conditional distributions are compatible. Examples show ICR to be superior in quality, while being more parallelizable and requiring little effort in monitoring its convergence. Last, we propose an ensemble approach to decide the final model.
Keywords: Dependency network; I-projection; Method of alternating projection; Mutually stationary distributions; Unsupervised learning.
§ INTRODUCTION Using the two cultures of <cit.>, the assumption of a joint distribution is data modeling, whereas a conditionally specified model (CSM)—specifying a joint distribution via conditional distributions—belongs to the camp of algorithmic modeling. A typical example is in multiple imputation: explicit full multivariate (Bayesian) models versus MICE <cit.>. However, Markov random fields <cit.>, spatial modeling <cit.>, and dependency networks <cit.> have shown that the conditional approach offers certain advantages. CSM can be used to compose joint models from data collected over spatial ranges or temporal stages, because it would be unrealistic to simultaneously articulate a joint model for a large number of variables. A better approach is to model a small number of variables locally, then combine those submodels into a joint model, like embedding pieces of a jigsaw puzzle into a complete picture. Our algorithm will make the process of modeling locally and synthesizing globally easier. Formally, CSM determines a joint distribution for 𝕏=(x_1,…,x_d) after three stages of maneuvers: * Conditional modeling: Build a predictive conditional model from data for every x_i ≡{i} using a subset of 𝕏\{x_i}≡{-i} as the predictors via a regularized modeling or machine learning algorithm, such as regression, classification, or a neural network. Let the learning outcome be {f_i|c_i: 1 ≤ i ≤ d}, where c_i ⊆{-i}. Or more directly, a conditional model, { f_a_i|b_i: 1 ≤ i ≤ L }, has already been formulated by domain experts using subject matter knowledge and algorithms of her choice, where a_i and b_i are non-intersecting subsets of 𝕏. * Synthesize (from local to global): Embed the conditional distributions, {f_i|c_i: 1 ≤ i ≤ d} or {f_a_i|b_i: 1 ≤ i ≤ L }, into joint distributions of 𝕏.
Nodes of 𝕏 may be divided into groups. Within each group, the synthesis produces intermediate distributions. These intermediate distributions then propagate in phases to the entire 𝕏, with the sequential orders of propagation playing a critical role. * Optimize: Different sequences to propagate the intermediate distributions may result in different joint distributions. The entire collection of stationary joint distributions, produced in Stage II, makes up an ensemble, and it is the ensemble that makes the final model of 𝕏. The final outcome of a CSM will depend on both the data and the algorithms used in the three stages. Here, we propose an algorithm to divide and to synthesize, and recommend another algorithm for the optimization attendant to Stage III. Absent the concerns of Stages II and III, much algorithmic creativity remains available in Stage I. A conditional model of Stage I is said to be compatible if a joint distribution exists from which every conditional or marginal distribution can be derived. In such a circumstance, the output of a synthesis should be unique. Moreover, a CSM is said to be sufficient if it has enough information to identify a joint distribution of 𝕏. A conditional distribution involving all the variables in 𝕏 is called a full-conditional and is expressed as f_i|-i or f_a_i|-a_i; otherwise, it is a non-full conditional: f_a_i|b_i, a_i∪ b_i ≠ 𝕏. When the CSM is {f_i|-i: 1 ≤ i ≤ d} and the Gibbs sampler (GS) is used for synthesis, there can be up to d! (systematic scan) stationary distributions, one for each permutation of (1,…,d) <cit.>. Most CSM papers only consider full-conditional models that mimic the Bayesian computation <cit.>. However, proposing a full-conditional for every variable of 𝕏 is impractical; instead, a mixture of full and non-full conditionals is a more realistic approach. Therefore, practical synthesis must be able to accommodate combinations of full and non-full conditionals. <cit.> invented the partially collapsed Gibbs sampler (PCGS): a GS based on combinations of compatible full and non-full conditionals. They discovered that PCGS must follow specific updating orders to draw correct samples. Another difference between Bayesian computation and CSM is the objective: for a CSM, approximating a posterior distribution is not the aim; the joint distribution of 𝕏 is the only focus. Here, we introduce the Iterative Conditional Replacement algorithm (ICR), which produces distributions, not samples. ICR will simultaneously compute several joint and/or marginal distributions regardless of compatibility, and its convergence is guaranteed. When the CSM is compatible, ICR will approximate the unique stationary distribution; otherwise, the joint distributions would be many and different. More critically, we devise simple rules to identify all the permissible updating orders. The examples below show that ICR is computationally more robust and flexible than sample-based methods. Traditionally, compatibility must be confirmed before GS or PCGS sampling can start; otherwise, the Markov chains can become null. In contrast, ICR cycles through a permissible updating order, and produces mutually stationary distributions. Moreover, there are compatible and sufficient CSM, such as { f_1|23, f_2|13, f_3}, that PCGS cannot sample, because it cannot pass the dependence of (x_1,x_2) back to x_3.
We propose “divide-then-ICR” strategy: first, the CSM is divided into suitable groups such that permissible updating orders within each group can be found; second, apply ICR to each group and produce (intermediate) distributions for subsets of 𝕏. Finally, use ICR again to combine intermediate distributions into joint distributions or marginal distributions. For example, { f_1|23 ,f_2|13, f_3} is first divided into { f_1|23, f_2|13} and { f_3}. From { f_1|23, f_2|13}, ICR computes two stationary π_12|3^(1,2) and π_12|3^(2,1), where the superscripts indicate different updating orders. We multiply either distribution by f_3 and get the two mutually stationary joint distributions: π_123^(1,2) and π_123^(2,1). If these two joints are equal, the original CSM is deemed compatible. The Stage III optimization is to find a mixture, απ_123^(1,2)+ (1-α)π_123^(2,1), that minimizes the deviance relative to the original CSM. In the past, there have been many algebraic proposals to verify the compatibility among full conditionals, for example, <cit.> and <cit.>. However, how to verify the compatibility between full and non-full conditionals is still very much an open problem. Here is a case that computations can answer algebraically difficult question; we prove that the CSM is compatible when the multiple stationary distributions computed by ICR are the same. In the examples below, benefits of ICR are highlighted by its capacity to handle (a) incompatible CSM; (b) reducible CSM whose support is partitioned; (c) the conditional density is sticky for GS to sample (slow mixing); and (d) the CSM that divide-then-ICR can synthesize, whereas PCGS cannot. ICR is introduced in Section <ref>, first for full conditionals, then for combinations of full and non-full conditionals. ICR is cyclically doing I-projections among spaces defined individually by each conditional distribution. Examples are in Section <ref>. Many times, ICR cannot be applied to a CSM directly; but partitioning a CSM into several smaller CSM enables ICR to be applied locally. Historical connections of ICR with other algorithms, such as GS, power method, and alternating projection are addressed in Section <ref>. Section <ref> contains a brief conclusion. § THE ITERATIVE CONDITIONAL REPLACEMENT ALGORITHM Hereafter, conditional and marginal distributions/densities will be abbreviated as conditional(s) and marginal(s). A joint density is denoted by p, q, π, f, or g without subscript, while their marginal and conditional densities have subscripts and are denoted as π_1, p_ij, q_a, q_-a, f_i|-i, g_12|34, where 1={x_1}, ij={x_i,x_j}, a={x_i: i∈ a}, -a={x_i: i ∉a}, i|-i={x_i|x_j, j i}, and 12|34 ={x_1,x_2|x_3,x_4}. We also reserve f_a_i|b_i and g_a_j|b_j for the conditional distributions in a CSM, p and q as the distributions produced during ICR iterations, and π^(i_1,…,i_d) for the stationary joint distribution updated in the order of (i_1,…,i_d). Moreover, let S(f) and S(f_i|-i) be the support of f and f_i|-i, respectively; S(q_a) be the support of q_a. We always assume S(f_j|-j) = S(f_i|-i) for all (i,j). A d-dimensional joint density f is said to satisfy the total positivity condition if S(f)= S(f_1)×⋯× S(f_d). We use Kullback-Leibler divergence, called K-L divergence hereafter, as the measure of deviance that drives ICR's search. The K-L divergence is defined as I(p;q)=∑_x p(x) logp(x)/q(x). 
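For the finite-support models used throughout, I(p;q) can be evaluated directly. The short Python helper below is only an illustrative sketch (the function name, the numpy encoding of densities as arrays, and the treatment of zero cells are our choices, not the paper's); the later sketches reuse it to monitor the stopping quantities M(t) and Π(t).

import numpy as np

def kl(p, q, eps=1e-300):
    # I(p;q) = sum_x p(x) log(p(x)/q(x)) over a common finite support;
    # cells with p(x) = 0 contribute 0 by the usual convention.
    p = np.asarray(p, dtype=float).ravel()
    q = np.asarray(q, dtype=float).ravel()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], eps))))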
§.§ ICR for conditionally specified models of full conditionals Let the CSM be {f_j|-j: 1 ≤ j ≤ d}, and (i_1 ,i_2, …, i_d) and (i_2, …, i_d, i_1) be two adjacent updating orders. <cit.> prove the following properties for {π^(i_1,…,i_d)}: * (H1) Stationary distributions π^(i_1 ,i_2, …, i_d) and π^(i_2, …, i_d, i_1), respectively, have f_i_d|-i_d and f_i_1|-i_1 as their conditionals; * (H2) π^(i_1, i_2, …, i_d)_-i_1 = π^(i_2, …, i_d, i_1)_-i_1; and * (H3) π^(i_1, i_2, …, i_d)_i_1 = π^(i_2, …, i_d, i_1)_i_1. Therefore, the goal of the algorithm is to formulate sequences of joint distributions that monotonically approximate the {π^(i_1, …,i_d)} such that they collectively fulfill (H1)–(H3). Requirements (H2) and (H3) are necessary for balancing the degrees of freedom between the CSM and the collection of all the stationary distributions. To illustrate, consider a simple CSM A={f_1|2,f_2|1}, and define C_1={f_1|2ω_2} and C_2={f_2|1ν_1}, where ω_2 and ν_1 are marginal densities of x_2 and x_1, respectively. Let q be a joint density having the same support as f_1|2. The K-L divergence between q and a τ= f_1|2τ_2 ∈ C_1 satisfies the Pythagoras equality: I(q;τ)=I(q; f_1|2q_2) + I(f_1|2q_2;τ), which is proved in Appendix <ref>. By choosing τ_2=q_2, I(f_1|2q_2;τ)=0 and minimization of I(q;τ) is achieved. Thus, the I-projection of q=q_1|2 q_2 onto C_1 is f_1|2 q_2, so it is named conditional replacement. By the same token, the I-projection of q=q_2|1 q_1 onto C_2 is f_2|1q_1. Let the iterations begin from a q^(0). The following alternating I-projections between C_1 and C_2 produce two sequences of joints: q^(2k+1)=f_1|2 q^(2k)_2 ∈ C_1 and q^(2k+2)=f_2|1 q^(2k+1)_1 ∈ C_2, k=0,1,2,…. Throughout, (H1) holds for both { q^(2k+1)} and { q^(2k+2)}. The choices of q^(2k+1)_2= q^(2k)_2 and q^(2k+2)_1= q^(2k+1)_1 not only minimize the K-L divergence, but also satisfy (H2). Next, (H3) provides the metric to detect the convergence of ICR; I-projections will be stopped at t when q^(2t+1)_1=q^(2t)_1 and q^(2t+2)_2=q^(2t+1)_2. Numerically, stop ICR at the t-th iteration when M(t)=I(q_1^(2t);q_1^(2t+1))+I(q_2^(2t+1);q_2^(2t+2)) < 10^-10. Upon convergence, we designate q^(2t+1) as π^(2,1)∈ C_1 and q^(2t+2) as π^(1,2)∈ C_2. The following proposition follows from Theorem <ref> to be proved later. Both I(π^(2,1); q^(2k+1)) and I(π^(1,2);q^(2k+2)) decrease to 0 as k→∞. Due to the total variation norm inequality, ‖P-Q‖≤√(1/2 I(P;Q)), ‖q^(2k) - π^(1,2)‖→ 0 and ‖q^(2k+1) - π^(2,1)‖→ 0. π^(1,2) =π^(2,1) if and only if {f_1|2, f_2|1} are compatible. π^(1,2) =π^(2,1) implies C_1 ∩ C_2 ≠ ∅, thus compatible. Conversely, {f_1|2, f_2|1} are compatible if and only if they have the same odds ratios. Two distributions are the same if and only if they have the same odds ratios and the same marginal densities, which ICR is designed to achieve, i.e., (H2) and (H3). <cit.> has an algebraic check of the compatibility between f_1|2345 and f_2|1345 without iteration. Alternatively, ICR begins with an arbitrary q^(0)_2|345 and computes q^(2k+1)_12|345=f_1|2345 q^(2k)_2|345 and q^(2k+2)_12|345=f_2|1345 q^(2k+1)_1|345, until they converge to π^(2,1)_12|345 and π^(1,2)_12|345, respectively. Regardless of the initial q^(0)_2|345, π^(1,2)_12|345 =π^(2,1)_12|345 confirms compatibility. For d=3 and CSM: {f_1|23, f_2|13, f_3|12}, define C_i={f_i|-i v_-i} for i=1,2,3, where v_-i is any marginal density of x_-i. There are two updating orders: clockwise: C_1→ C_2 → C_3 → C_1 →⋯; and counter-clockwise: C_1→ C_3 → C_2 → C_1 →⋯.
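Returning to the two-conditional case for a moment, the alternating projections above can be sketched in a few lines of Python. The encoding (f_1|2 as an n_1×n_2 array whose columns sum to one, f_2|1 with rows summing to one, a uniform q^(0)) and all names are illustrative assumptions, not the authors' implementation; the d=3 and general cases that follow use the same marginalize-then-multiply step.

import numpy as np

def icr_two_conditionals(f1_given_2, f2_given_1, q0=None, tol=1e-10, max_cycles=100000):
    # f1_given_2[i, j] = f(x1 = i | x2 = j); f2_given_1[i, j] = f(x2 = j | x1 = i)
    n1, n2 = f1_given_2.shape
    q_even = np.full((n1, n2), 1.0 / (n1 * n2)) if q0 is None else np.asarray(q0, dtype=float)
    q_odd = q_even
    for _ in range(max_cycles):
        q_odd = f1_given_2 * q_even.sum(axis=0, keepdims=True)   # I-projection onto C_1
        q_next = f2_given_1 * q_odd.sum(axis=1, keepdims=True)   # I-projection onto C_2
        M = kl(q_even.sum(axis=1), q_odd.sum(axis=1)) + \
            kl(q_odd.sum(axis=0), q_next.sum(axis=0))            # the (H3)-based M(t)
        q_even = q_next
        if M < tol:
            break
    return q_odd, q_even   # approximations of pi^(2,1) in C_1 and pi^(1,2) in C_2

Comparing the two returned joints with kl in both directions then plays the role of the compatibility check in the corollary: a value near zero indicates π^(1,2)=π^(2,1).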
The three stationary distributions of clockwise sequence are π^(1,2,3)∈ C_3, π^(2,3,1)∈ C_1 and π^(3,1,2)∈ C_2, and they are called circularly-related, and ICR approximates them with the following iterations: q^(3k+1)=f_1|23 q^(3k)_23, q^(3k+2)=f_2|13 q^(3k+1)_13 q^(3k+3)=f_3|12 q^(3k+2)_12,k=0,1,2,… . The above marginalization-then-multiplications is designed to satisfy both (H1) and (H2). And ICR stops iterations when (H3): q^(3t)_1=q^(3t+1)_1 q^(3t+1)_2=q^(3t+2)_2 and q^(3t+2)_3=q^(3t+3)_3, are reached. Numerically, ICR stops when M(t)=I(q_1^(3t);q_1^(3t+1))+I(q_2^(3t+1);q_2^(3t+2)) +I(q_3^(3t+2);q_3^(3t+3))<10^-10. The following proposition follows from Theorem <ref>. For the clockwise updating order, the three sequences of joint densities converge, respectively, to their stationary distributions. That is, as k →∞, q^(3k+1)→π^(2,3,1)∈ C_1, q^(3k+2)→π^(3,1,2)∈ C_2 and q^(3k+3)→π^(1,2,3)∈ C_3 in K-L divergence. CSM: {f_1|23, f_2|13, f_3|12} are compatible if and only if π^(1,2,3) =π^(2,3,1)=π^(3,1,2). Let D={1,…,d} represent (x_1,…,x_d). Consider the conditional model: A={ f_a_i|-a_i: 1 ≤ i ≤ L}, with ⋃_i=1^L a_i= D. Again define C_a_i={f_a_i|-a_i v_-a_i}, where v_-a_i is any x_-a_i-marginal density. For a fixed updating order: C_a_1→ C_a_2→⋯→ C_a_L, the L circularly-related stationary distributions are P={π^(a_2,…,a_L,a_1)∈ C_a_1, π^(a_3,…,a_L,a_1,a_2)∈ C_a_2, …, π^(a_1,a_2,…,a_L)∈ C_a_L}. We start with q^(0)= f_a_L|-a_L w_-a_L∈ C_a_L. One cycle of ICR consists of L I-projections. For 1 ≤ i ≤ L , the conditional replacements for (H1) and (H2) are: q^(Lk+i)= f_a_i|-a_i q^(Lk+i-1)_-a_i∈ C_a_i, k=0,1,…. The iterations stop at t when q^(Lt+i)_a_i=q^(Lt+i-1)_a_i for every 1≤ i ≤ L, that is, (H3). Numerically, ∑_i=1^L I(q_a_i^(Lt+i);q_a_i^(Lt+i-1)) < 10^-10 is used to stop the iterations. If the L stationary distributions of P are the same, then the conditionals of A are compatible. Because π^(a_i+1,…,a_i-1,a_i)∈ C_a_i, the equalities of L stationary distributions of P imply ⋂_i=1^L C_a_i∅, hence compatible. §.§ ICR for unsaturated conditionally specified models (combinations of full and non-full conditionals) We shall name a CSM of exclusively full conditionals (Section <ref>), as a saturated CSM, otherwise, the CSM is unsaturated. To model data, unsaturated CSM is more realistic. But it is rarely discussed in the literature because the GS has a hard time sampling unsaturated CSM. A major difficulty for GS is finding the rules that identify the correct sequential orders to sample the non-full conditionals. PCGS <cit.> is proposed to circumvent such issues, and our algorithms will provide its theoretical justifications. The following rules are quite intuitive from the perspective of conditional replacement. Let an unsaturated CSM be represented by {f_a_i|b_i: 1 ≤ i ≤ L} and Δ=(⋃_i=1^L b_i) \ (⋃_i=1^L a_i). Also, define C_a_k={f_a_k|b_k v_b_k=q_a_k ∪ b_k}, where v_b_k is a marginal distribution of b_k. Conditional replacement (I-projection) of any q_a_i ∪ b_i∈ C_a_i onto C_a_j is permissible, written as C_a_i⇀ C_a_j, when the following two rules hold: * b_j ⊆ a_i ∪ b_i. * a_i ∩ b_j ∅. When C_a_i⇀ C_a_j, we define the ICR mapping ℙ: C_a_i→ C_a_j as ℙ(q_a_i ∪ b_i)= f_a_j|b_jq_b_j, where q_b_j is the x_b_j-marginal density of q_a_i∪ b_i. Marginalization of q_a_i ∪ b_i into q_b_j can only be done when Rule A holds. Next, we consider applying ℙ in cycle. Let (1^∗,…, L^∗), be a permutation of (1, …,L) with (L+1)^∗≡ 1^∗. 
If every ℙ mapping from C_a_i^∗ to C_a_(i+1)^∗ is permissible, then (a_1^∗,…, a_L^∗) is said to be a permissible updating cycle for {f_a_i|b_i: 1 ≤ i ≤ L}, and is denoted as ⟨⟨ a_1^∗,…, a_L^∗⟩⟩. [unconditioned ICR]Let the conditional model be A={ f_a_i|b_i: b_i ∅, 1 ≤ i ≤ L}, Δ = ∅, and ⋃_i=1^L a_i= Λ. When ⟨⟨ a_1^∗,…, a_L^∗⟩⟩, ICR will synthesize joint and marginal distributions of Λ. In addition, the I-projections begin with a marginal distribution, q^(0)_b_1^∗, use q^(1)=f_a_1^∗|b_1^∗ q^(0)_b_1^∗ to initiate the iterations, and ℙ^k (q^(1)) ∈ C_a_r^∗ where r=k L+1. For example, CSM: {f_12|3, f_4|123, f_3|124, f_5|1234} permits C_12⇀ C_4⇀ C_3⇀ C_5, but not C_5⇀ C_12 due to violation of Rule B; hence, Algorithm 2 cannot be applied. Had we changed f_12|3 to f_12|35, then ⟨⟨ 12,4,3,5⟩⟩,  and Algorithm <ref> will synthesize one joint, π^*, plus two marginals: {π_1235, π_1234}. When π^*_1235=π_1235 and π^*_1234=π_1234, the CSM is compatible. In the following, we consider unsaturated CSM that specifies conditional distributions, not joints. When Δ∅, it can be shown that Δ⊂ b_i for every i. Suppose that CSM {f_a_i|b_i: b_i∅, 1 ≤ i ≤ L} has a permissible updating cycle. If Δ∅, then Δ⊂ b_i for every i. Without loss of generality, let ⟨⟨ a_1,…, a_L⟩⟩ be a permissible updating cycle. When u∈Δ=(⋃_i=1^L b_i) \ (⋃_i=1^L a_i), u∈ b_j for some j, but u∉a_i for all i. Because of Rule A, we have u∈ b_j⊆ a_j-1∪ b_j-1. Hence, u must also belongs to b_j-1. By induction, u belongs to every b_i, which implies that Δ⊂ b_i for every i. [conditioned ICR] Let { f_a_i|b_i: b_i∅, 1 ≤ i ≤ L} be a conditional model having a permissible updating cycle. When Δ∅, ICR will synthesize densities that are conditioned on Δ. Let ⟨⟨ a_1,…, a_L⟩⟩ be a permissible updating cycle. The initial density is q^(1)_(a_1∪ b_1 ) \Δ | Δ= f_a_1 | b_1 q^(0)_(b_1\Δ) | Δ, where q^(0)_(b_1\Δ) | Δ is any conditional density of (b_1\Δ) given Δ. Every subsequent distribution produced by ICR is also conditioned on Δ. A simple example is { f_1|23, f_2|13}. Another example is {f_12|345, f_3|245} which permits ℙ mapping from C_3 onto C_12 conditioned on Δ={x_4, x_5}, and ℙ mapping from C_12 back onto C_3 conditioned on Δ. Using Algorithm <ref>, ICR synthesizes π_123|45^(3,12) and π_23|45^(12,3) from {f_12|345, f_3|245}. In the following, we concentrate on Algorithm <ref>, because most discussions apply to Algorithm <ref> with additional conditioning on Δ. For CSM: {f_a_i|b_i: b_i ∅, 1 ≤ i ≤ L}, let C_a_i={ f_a_i|b_iv_b_i}, and ⟨⟨ a_1,…, a_L⟩⟩ be a permissible updating cycle. A collection of densities, {π^(a_i+1,…,a_L,a_1,…,a_i)∈ C_a_i: 1 ≤ i ≤ L}, are said to be mutually stationary when ℙ(π^(a_i+1,…,a_L,a_1,…,a_i)) = π^(a_i+2,…,a_L,a_1,…,a_i+1) for every i, with (L+1)≡ 1. Mutually stationary distributions have the following properties: * Each set of {π^(a_i+1,…,a_L,a_1,…,a_i)} is associated with a specific permissible updating cycles. * Every π^(a_i+1,…,a_L,a_1,…,a_i) is stationary with respect to ℙ^L, i.e., ℙ^L(π^(a_i+1,…,a_L,a_1,…,a_i))=π^(a_i+1,…,a_L,a_1,…,a_i). * For saturated CSM, {π^(1,2),π^(2,1)}, {π^(1,2,3),π^(2,3,1),π^(3,1,2)} and {π^(a_2,…,a_L,a_1), π^(a_3,…,a_L,a_1,a_2), …, π^(a_1,a_2,…,a_L)} are mutually stationary. * Neighboring marginal densities satisfy π^(a_i+1,…,a_L,a_1,…,a_i)_b_i+1 = π^(a_i+2,…,a_L,a_1,…,a_i+1)_b_i+1, i.e., condition (H2) for every i. * For a compatible CSM having π^* as its joint, {π^*_a_i ∪ b_i: i≤ i ≤ L } satisfy ℙ (π^*_a_i ∪ b_i)= π^*_a_i+1∪ b_i+1, hence are mutually stationary. 
* If one π^(a_i+1,…,a_L,a_1,…,a_i) is known, the other L-1 stationary densities can be computed via mapping ℙ cyclically. For example, when π^(1,2) is known, π^(2,1)= ℙ(π^(1,2)). * Only for a saturated CSM, {π^(a_i+1,…,a_L,a_1,…,a_i)} are all joint densities. * The assertion of the existence of {π^(a_i+1,…,a_L,a_1,…,a_i)} is always true for totally positive CSM. Otherwise, the existence depends on whether π^(a_i+1,…,a_L,a_1,…,a_i)_b_i+1 is a bona fide marginal distribution of b_i+1 for 1≤ i ≤ L. Therefore, we first determine a permissible updating cycle, say ⟨⟨ a_1,…,a_L⟩⟩, then ICR will compute {π^(a_i+1,…,a_L,a_1,…,a_i)}. In the following proofs, the CSM is {f_a_i|b_i:1≤ i≤ L}, c_i=a_i∪ b_i, symbol x_c_i denote values of (x_j: j ∈ c_i) and C_a_i={h_c_i:h_a_i|b_i=f_a_i|b_i}. Assume C_a_i⇀ C_a_j is permissible. For any two densities h and g in C_a_i, mapping both by ℙ onto C_a_j decreases their K-L divergence. That is, I(h;g)>I(ℙ(h);ℙ(g)). First, we have I(h;g) = ∑_x_c_ih(x_c_i)logh(x_c_i)/g(x_c_i) = ∑_x_b_jh_b_j(x_b_j)∑_x_c_i\ b_jh_c_i\ b_j|b_j(x_c_i\ b_j|x_b_j)(logh_c_i\ b_j|b_j(x_c_i\ b_j|x_b_j)/g_c_i\ b_j|b_j(x_c_i\ b_j|x_b_j)+logh_b_j(x_b_j)/g_b_j(x_b_j)) = [∑_x_b_jh_b_j(x_b_j)I(h_c_i\ b_j|b_j(x_c_i\ b_j |x_b_j);g_c_i\ b_j|b_j(x_c_i\ b_j |x_b_j))] +I(h_b_j;g_b_j). It is easy to see that I(ℙ(h);ℙ(g))=I(h_b_j;g_b_j), because ℙ(h) and ℙ(g) have the same conditional density f_a_j|b_j. Hence, I(h;g)-I(ℙ(h);ℙ(g)) =∑_x_b_jh_b_j(x_b_j)I(h_c_i\ b_j|b_j(x_c_i\ b_j |x_b_j);g_c_i\ b_j|b_j(x_c_i\ b_j |x_b_j)), which is strictly positive, unless h_c_i\ b_j|b_j=g_c_i\ b_j|b_j for every x_b_j∈ b_j. The following theorem proves that the L sequences of densities produced by ICR converge respectively to mutually stationary densities. For a permissible updating cycle, say ⟨⟨ a_1,…,a_L⟩⟩, assume the corresponding L mutually stationary densities π^(a_i+1,…,a_L,a_1,…,a_i), 1 ≤ i ≤ L with (L+1)≡ 1, exist. For every 1 ≤ i ≤ L, the sequence of densities produced by Algorithm <ref>, {q^(kL+i)}, converge monotonically to π^(a_i+1,…,a_L,a_1,…,a_i) in K-L divergence, as k tends to ∞. Due to Lemma <ref>, we have, for 1 ≤ i ≤ L, I(π^(a_i+1,…,a_L,a_1,…,a_i);q^(kL+i)) > I(ℙ(π^(a_i+1,…,a_L,a_1,…,a_i));ℙ(q^(kL+i))) = I(π^(a_i+2,…,a_L,a_1,…,a_i+1);q^(kL+i+1)). After applying ℙ L times, ICR is back to C_a_i with ℙ^L (π^(a_i+1,…,a_L,a_1,…,a_i)) = π^(a_i+1,…,a_L,a_1,…,a_i), and ℙ^L (q^(kL+i))= q^((k+1)L+i). Thus, I(π^(a_i+1,…,a_L,a_1,…,a_i);q^(kL+i)) > I(ℙ^L (π^(a_i+1,…,a_L,a_1,…,a_i));ℙ^L (q^(kL+i))) = I(π^(a_i+1,…,a_L,a_1,…,a_i);q^((k+1)L+i)). Hence, I(π^(a_i+1,…,a_L,a_1,…,a_i);q^(kL+i)) decreases strictly to zero as k →∞. Because the decrease is monotonic, Algorithm <ref> may be stopped at (k+1)th cycle when I( q^(kL+i); ℙ^L (q^(kL+i))) < 10^-10 for any i. The following corollary provides theoretical justifications for PCGS. Let π be a joint distribution of 𝕏 and the CSM be {π_i|c_i:1≤ i≤ d}. Let (1^∗,…, d^∗) be a permutation of (1,…,d). When (a) i^∗∈ c_(i+1)^∗ and (b) c_(i+1)^∗\{i^∗}⊆ c_i^∗ for every i^∗ with (d+1)^∗≡ 1^∗, Algorithm <ref> will synthesis {π_i∪ c_i} from {π_i|c_i}. Moreover, PCGS updating in the order of x_1^∗→ x_2^∗→⋯→ x_d^∗→ x_1^∗→⋯ preserves stationarity. Another feature of Algorithm <ref> is that it can be applied to subgroups of conditionals after suitably partitioning the CSM; the rule is that a permissible updating cycle is identified within each subgroup. Depending on the CSM, ICR might be able to synthesize the outcomes of the subgroups—the many local models—into global joint distributions of 𝕏. 
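Before turning to the examples, the mapping ℙ itself can be sketched generically. The sketch below assumes (our convention, not the paper's) that every density and every conditional f_a_j|b_j is stored as a d-dimensional numpy array in which variables outside its scope occupy axes of length one; ℙ then marginalizes to b_j and multiplies by the conditional, exactly as in its definition.

import numpy as np

def icr_map(q, f_cond, b_j):
    # One ICR mapping P onto C_{a_j}.
    # q      : current density over a_i ∪ b_i (unused variables have axes of length 1)
    # f_cond : the conditional f_{a_j|b_j}, same axis convention, normalized over the a_j axes
    # b_j    : set of axis indices playing the role of b_j (Rule A: b_j ⊆ a_i ∪ b_i)
    drop = tuple(ax for ax in range(q.ndim) if q.shape[ax] > 1 and ax not in b_j)
    q_b = q.sum(axis=drop, keepdims=True)    # the marginal q_{b_j}
    return f_cond * q_b                      # a density over a_j ∪ b_j

Because Δ⊂ b_j whenever Δ≠∅, the Δ axes are never summed out, so the same step applies verbatim to the conditioned ICR: densities that are conditioned on Δ stay conditioned on Δ.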
We shall name this subgroup-synthesis approach “divide-then-ICR”. If we depict a CSM as a directed graph, GS requires a feedback loop that connects every variable of 𝕏. Therefore, the option of partitioning a CSM into subgroups is not available to GS or PCGS. § EXAMPLES [A simple case for divide-then-ICR] Consider the compatible unsaturated CSM, {f_3, f_1|23, f_2|13}, with f_3(0)=f_3(1)=1/2. <cit.> showed that none of the six permutations of (1,2,3) can lead PCGS to generate samples from the correct joint, because there is no permissible updating cycle, even though the model is sufficient. The joint π and its two conditional densities f_1|23 and f_2|13 are given as follows; moreover, we add an incompatible g_2|13 to pair with f_1|23 for showcasing our compatibility check:
x_1 0 1 0 1 0 1 0 1 x_2 0 0 1 1 0 0 1 1 x_3 0 0 0 0 1 1 1 1 π_123 1/20 3/20 4/20 2/20 3/20 3/20 3/20 1/20 f_1|23 1/4 3/4 2/3 1/3 1/2 1/2 3/4 1/4 f_2|13 1/5 3/5 4/5 2/5 1/2 3/4 1/2 1/4 g_2|13 3/5 1/7 2/5 6/7 4/5 3/4 1/5 1/4
The CSM is first partitioned into {f_1|23, f_2|13} and {f_3}. Then ⟨⟨ 1,2⟩⟩ holds, and Algorithm <ref> is applied with Δ={3}. The initial distribution can be any q^(0)_2|3. The stationary distributions, π_12|3^(2,1) and π_12|3^(1,2), are computed via the following alternating ℙ mappings: ℙ(q^(2k))= q^(2k+1)_12|3≡ f_1|23 q^(2k)_2|3 and ℙ(q^(2k+1))=q^(2k+2)_12|3≡ f_2|13 q^(2k+1)_1|3, k=0,1,2,…. When q^(2t+1)_1|3=q^(2t)_1|3 and q^(2t+2)_2|3=q^(2t+1)_2|3, the iterations reach stationarity. Numerically, convergence occurred after seven cycles because M(t)=I(q^(2t)_1|3;q^(2t+1)_1|3)+I(q^(2t+1)_2|3;q^(2t+2)_2|3) drops from M(0)= 6.7× 10^-2 to M(6)=4.7× 10^-11. Hence, we have q^(13)_12|3 = π_12|3^(2,1) and q^(14)_12|3 =π_12|3^(1,2). To check compatibility, we use Π(t)=I(q^(2t)_12|3;q^(2t+1)_12|3)+ I(q^(2t+1)_12|3;q^(2t+2)_12|3); it drops from Π(0)=2.0× 10^-2 to Π(6)=5.6× 10^-11, which implies π^(2,1)_12|3=π^(1,2)_12|3 and compatibility. Furthermore, π^(2,1)_12|3 f_3 reproduces π_123. Now, consider the incompatible case: {f_1|23, g_2|13}. Because M(0)=2.7× 10^-1 drops to M(7)=2.1× 10^-11, Algorithm <ref> converges after eight cycles. But Π(t) never decreases, staying between 0.92 and 0.95 for 0≤ t≤ 10; hence the two stationary densities are different, which implies that f_1|23 and g_2|13 are not compatible. Let x_i have m_i categories for i=1,2,3. In order to match the joint and the x_3 marginal distributions, the number of unknowns is m_1 m_3+ m_2 m_3-2, but the number of equations is m_1 m_2 m_3+ 2 m_3-3. In terms of computational effort, Algorithm <ref> is much simpler than solving over-specified linear equations. [Permissible updating cycles of an unsaturated CSM] Consider a hypothetical example in which an Asian nation applies to become a permanent member of the Security Council of the United Nations (UN). America's vote is conditioned on Great Britain and France, but not on Russia and China, so its conditional distribution is a non-full conditional. Assume that France's vote is conditioned on the other four nations, so its conditional distribution is a full conditional. Only the joint distribution can express the probability that this nation will not receive a veto. In Stage I, each conditional distribution can be estimated from this nation's voting history in the UN and geopolitics; in Stage II, joints will be synthesized from this unsaturated CSM. Here, we consider a hypothetical model whose conditionals are derived from a randomly generated π(x_1,…,x_5), hence compatible: A={f_1|2345,f_2|1345,f_3|145,f_4|15,f_5|1234}.
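A brute-force check of Rules A and B over candidate updating orders is straightforward. The sketch below (the encoding of a CSM as (a_i,b_i) pairs, the canonicalization of rotations, and all names are our own) illustrates how the permissible cycles of a model such as A could be enumerated; with this encoding, the surviving cyclic orders should correspond to the two permissible cycles reported next.

import itertools

def permissible_step(a_i, b_i, a_j, b_j):
    # Rule A: b_j ⊆ a_i ∪ b_i ;  Rule B: a_i ∩ b_j ≠ ∅
    return set(b_j) <= (set(a_i) | set(b_i)) and bool(set(a_i) & set(b_j))

def permissible_cycles(csm):
    # csm: list of (a_i, b_i) pairs of variable labels; returns the distinct
    # cyclic updating orders (up to rotation) in which every P mapping is permissible
    L, found = len(csm), set()
    for perm in itertools.permutations(range(L)):
        if all(permissible_step(*csm[perm[t]], *csm[perm[(t + 1) % L]]) for t in range(L)):
            k = perm.index(min(perm))
            found.add(perm[k:] + perm[:k])   # canonical rotation
    return sorted(found)

# hypothetical encoding of A = {f_1|2345, f_2|1345, f_3|145, f_4|15, f_5|1234}
A_csm = [({1}, {2, 3, 4, 5}), ({2}, {1, 3, 4, 5}), ({3}, {1, 4, 5}),
         ({4}, {1, 5}), ({5}, {1, 2, 3, 4})]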
There are only two out of 5!=120 updating cycles that are permissible: ⟨⟨ 5,4,3,2,1⟩⟩ and ⟨⟨ 5,1,4,3,2⟩⟩. Therefore, partition of CSM is not needed. For ⟨⟨ 5,4,3,2,1⟩⟩, one cycle of Algorithm <ref> is as follows: q^(5t+1)=f_5|1234q_1234^(5t), q_145^(5t+2)=f_4|15q_15^(5t+1),q_1345^(5t+3)=f_3|145q_145^(5t+2), q^(5t+4)=f_2|1345q_1345^(5t+3), q^(5t+5)=f_1|2345q_2345^(5t+4). Every I-projection does two operations: marginalization then multiplication. For some non-full conditionals, marginalization may not be required. Among the above five steps, q^(5t+2)_145 and q^(5t+3)_1345 are, respectively, multiplied directly into f_3|145 and f_2|1345 to form q_1345^(5t+3) and q^(5t+4). When no marginalization is performed, the q^(k) will not conflict with the conditional models. Stop ICR when q^(5t+1)_5 =q^(5t)_5, q^(5t+1)_234 =q^(5t+4)_234, and q^(5t+4)_1 =q^(5t+5)_1. Numerically, ICR iterations will be stopped at t when M(t)=I(q_5^(5t);q_5^(5t+1))+I(q_234^(5t+1);q_234^(5t+4))+I(q_1^(5t+4);q_1^(5t+5)) < 10^-10. When compatibility is in question, you compute the following Π(t): Π(t)=I(q^(5t);q^(5t+1))+I(q^(5t+1);q^(5t+4))+I(q^(5t+4);q^(5t+5)). If it drops to 0, the CSM is compatible, otherwise, not. The stopping criterion for the other permissible cycle: ⟨⟨ 5,1,4,3,2⟩⟩ is M(t)=I(q_5^(5t);q_5^(5t+1))+I(q_1^(5t+1);q_1^(5t+2))+I(q_234^(5t+2);q_234^(5t+5)). For both updating cycles, the randomly generated joint distribution is recovered. <cit.> considered the unsaturated CSM: {f_1|2345,f_2|345, f_3|145,f_4|25,f_5|13}; they used a procedure that is equivalent to recursive factorization to derive the joint density. We illustrate divide-then-ICR here. First, divide the CSM into {f_1|2345,f_2|345,f_3|145}, {f_4|25}, and {f_5|13} because ⟨⟨ 3,2,1⟩⟩; ⟨⟨ 123,4⟩⟩ and ⟨⟨ 1234,5⟩⟩ hold. * Phase 1: Algorithm <ref> produces π_123|45^(3,2,1), π_13|45^(2,1,3), π_23|45^(1,3,2) condition on {4,5}. To build a joint, only π_123|45^(3,2,1) needs to be used in the next phase. * Phase 2: Algorithm <ref> uses {π_123|45^(3,2,1), f_4|25} to build π_1234|5^(4,123) and π_24|5^(123,4) conditioned on {5}. * Phase 3: Algorithm <ref> uses {π_1234|5^(4,123), f_5|13} to build a joint π_12345^(5,1234) and a marginal π_135^(1234,5). When the CSM is compatible, π_12345^(5,1234) is the joint producing the CSM. The synthesis is written as ⟨⟨ ⟨⟨ ⟨⟨ 1,2,3⟩⟩ , 4⟩⟩ ,5⟩⟩. [Embedding a CSM like a jigsaw puzzle]Let the CSM be {f_2|1, f_3|2, f_1|3, f_4|123, f_5|1246, f_6|1245, f_3^*|12456, f_6^*|12345}, where 3^* and 6^* indicate the variables appear twice in the model. We divide CSM into 4 subgroups: {f_2|1, f_3|2, f_1|3}, { f_4|123}, { f_5|1246, f_6|1245}, { f_3^*|12456, f_6^*|12345}, and use Algorithm 2 or 3 to consolidate the conditionals in each group into: marginals: {π_12, π_23, π_13}, and conditionals: f_56|124, f_36|1245, respectively. In order to incorporate f_4|123, we need the marginal π_123 which is missing from the CSM, so the CSM is not sufficient. Recall the three-way log-linear model: logπ_ijk= μ +μ^1_i + μ^2_j+ μ^3_k + μ^12_ij+ μ^23_jk+μ^13_ik+ μ^123_ijk. In order to obtain π_123, an assumption about the three-way iterations is required. Either μ^123_ijk= 0 or μ^123_ijk= constant is most common; other possibilities may need some subject-matter knowledge. Once μ^123 are settled, use iterative proportional fitting algorithm (IPF) along with {π_12, π_23, π_13} to obtain π_123. Combining π_123 and f_4|123 gives π_1234, which will be marginalized into π_124 to be combined with f_56|124 to form π_12456. 
This distribution can be reduced to π_1245 to be matched with f_36|1245 to form a joint distribution π_123456. [A sticky conditional model for GS]Consider the following compatible conditionals:

x_1    0               1          0               1          0          1
x_2    0               0          1               1          2          2
f_1|2  100000/100001   1/100001   100000/100001   1/100001   7/8        1/8
f_2|1  200000/700007   2/8        500000/700007   5/8        7/700007   1/8

These conditionals are derived from the following joint density: π=(200000/700015, 2/700015, 500000/700015, 5/700015, 7/700015, 1/700015). It would be difficult for GS to explore the support because of the concentration of probability at (0,0) and (0,1). Here we show that ICR will not be hindered by the sticky cells. For ICR, M(4)=3.8× 10^-11 indicates convergence after five rounds. The mutual K-L divergence between π^(1,2) and π^(2,1) is Π(4)=3.9× 10^-11, which confirms that the model is compatible, and π is reproduced. Next, GS is used to produce 5 batches of 1,000,000 samples each from {f_1|2,f_2|1}; the burn-in is set at 100,000. Let g^(s), s=1,…,5, be the empirical pdfs, with g^(1) based on the first 1,000,000 samples and the remaining g^(s) based on four increments of 1,000,000 additional samples each. The accuracy of GS is measured by the discrepancies I(g^(s);π)+I(π;g^(s)), s=1,…,5. Lastly, let T_1 and T_2 be the transition matrices based on f_1|2 and f_2|1, respectively. The power method uses the averages of the six rows of (T_1T_2)^n as the approximations to π. Let p^(t) be the power-method approximation; the iteration stops at t=5 with I(p^(5);π)+I(π;p^(5))<10^-10, where π is the target joint distribution. In Figure <ref>, ICR converges a bit faster than the power method, while the additional 4 million GS samples show little improvement. In terms of efficiency, the CPU times (seconds) of ICR, the power method, and GS are 0.006, 0.019 and 114, respectively. The CPU time consumed by GS makes it impractical for problems with such sticky behavior <cit.>; see also <cit.> for a sticky Gaussian model. Stickiness slows down sample-based exploration of the support, but it does not affect the distribution-based ICR or power method. [Conditional models with disjoint support]Consider a compatible model A_1={f_1|234,f_2|134,f_3|124,f_4|123} and an incompatible model A_2={f_1|234,f_2|134, f_3|124, g_4|123}, whose conditional densities are detailed as follows:

x_1       0     0     1     1     0     0     1     1
x_2       0     1     0     1     0     1     0     1
x_3       0     0     1     1     0     0     1     1
x_4       0     0     0     0     1     1     1     1
f_1|234   1     1     1     1     1     1     1     1
f_2|134   1/8   7/8   2/5   3/5   5/12  7/12  1/5   4/5
f_3|124   1     1     1     1     1     1     1     1
f_4|123   1/6   1/2   2/3   3/7   5/6   1/2   1/3   4/7
g_4|123   1/6   3/10  2/3   3/7   5/6   7/10  1/3   4/7

Their support S is the union of two disjoint regions S_1={(0,0,0,0),(0,1,0,0),(0,0,0,1), (0,1,0,1)} and S_2={(1,0,1,0),(1,1,1,0), (1,0,1,1),(1,1,1,1)}. We will use three different marginal distributions, u, v and w, to show how they affect the stationary distributions:

x_1 x_2 x_3 x_4      u     v     w
0   0   0   0        1/8   1/20  1/15
0   1   0   0        1/8   3/20  2/15
0   0   0   1        1/8   2/20  3/15
0   1   0   1        1/8   4/20  4/15
total of S_1         1/2   1/2   2/3
1   0   1   0        1/8   1/10  1/15
1   1   1   0        1/8   1/10  1/15
1   0   1   1        1/8   1/10  1/15
1   1   1   1        1/8   2/10  2/15
total of S_2         1/2   1/2   1/3

Notice that u is the uniform distribution, and ∑_x∈ S_iv(x)=∑_x∈ S_iu(x)≠∑_x∈ S_iw(x). Let p^(0)=f_4|123u_123, q^(0)=f_4|123v_123 and r^(0)=f_4|123w_123 be the initial distributions of ICR, which uses ⟨⟨ 1,2,3,4⟩⟩ as the updating cycle. The three sequences of joints are, respectively, p^(4k+i)=f_i|-ip_-i^(4k+i-1), q^(4k+i)=f_i|-iq_-i^(4k+i-1), r^(4k+i)=f_i|-ir_-i^(4k+i-1), where i=1,…,4.
The convergence of p^(k) is determined by M_p(k)=I(p_1^(4k);p_1^(4k+1))+I(p_2^(4k+1);p_2^(4k+2))+I(p_3^(4k+2);p_3^(4k+3))+ I(p_4^(4k+3);p_4^(4k+4)). We stop ICR at time t_p when M_p(t_p)<10^-10. The M_q(k) and M_r(k) are similarly defined, as are the stopping times t_q and t_r. Figure <ref> plots M_p(t), M_q(t) and M_r(t) vs. t, and they all indicate fast convergence with t_p=5, t_q=4 and t_r=4, respectively. After convergence, we obtain three batches of stationary joint distributions: {p^(4t_p+i)}, {q^(4t_q+i)}, {r^(4t_r+i)}, where the index i=1,…,4 is associated with C_i. Compatibility is equivalent to within-group consistency, whose discrepancy is measured by Π_p(t)=I(p^(4t);p^(4t+1))+I(p^(4t+1);p^(4t+2))+I(p^(4t+2);p^(4t+3))+I(p^(4t+3);p^(4t+4)). The resulting Π_p(5)=3.6× 10^-12, Π_q(4)=9.4× 10^-11 and Π_r(4)=1.3× 10^-10 indicate that A_1 is a compatible CSM, no matter which initial distribution is used. Uniqueness of the stationary distributions is based on within-C_4 consistency; we only need to compare p^(4t_p+4), q^(4t_q+4) and r^(4t_r+4): I(p^(4t_p+4);q^(4t_q+4))+I(q^(4t_q+4);p^(4t_p+4)) = 1.2× 10^-12, I(p^(4t_p+4);r^(4t_r+4))+I(r^(4t_r+4);p^(4t_p+4)) = 0.1155, I(q^(4t_q+4);r^(4t_r+4))+I(r^(4t_r+4);q^(4t_q+4)) = 0.1155. The above informs us that p^(4t_p+4)=q^(4t_q+4), p^(4t_p+4)≠ r^(4t_r+4), and q^(4t_q+4)≠ r^(4t_r+4). Therefore, the stationary distributions indeed depend on the initial distributions, which is expected for a reducible Markov chain. Next, ICR with initial distributions u, v and w is applied to A_2. The M_p(t), M_q(t) and M_r(t) are plotted against t in the lower panel of Figure <ref>. The left plot indicates fast convergence also for the incompatible CSM, with M_p(4)=1.0× 10^-12, M_q(4)=2.2× 10^-13, and M_r(3)=7.2× 10^-11. To check compatibility, we calculate the corresponding Π_p, Π_q and Π_r. Because Π_p(4)=0.0107, Π_q(4)=0.0107 and Π_r(3)=0.0143, A_2 is deemed incompatible. To see the effect of the initial distributions, we compute the following K-L divergences: I(p^(20);q^(20))+I(q^(20);p^(20)) = 1.57× 10^-15, I(p^(20);r^(16))+I(r^(16);p^(20)) = 0.1155, I(q^(20);r^(16))+I(r^(16);q^(20)) = 0.1155. We see that the difference between u and v does not change their stationary distributions. In summary, this example shows that
* Convergence of ICR is not affected by the compatibility of the model;
* Compatibility is not affected by the choice of the initial distribution, i.e., our compatibility check is independent of the choice of the initial distribution; and
* It is the probability assigned to each disjoint support region S_i, not the detailed distribution over S_i, that determines the stationary distribution.
When the support is partitioned, these support probabilities must be carefully guided by subject-matter knowledge; they may also be adjusted iteratively until the joint distribution is more consistent with the data. This flexibility of using the initial distribution to fine-tune the stationary distribution is not available for an irreducible CSM. § DISCUSSIONS §.§ Differences between ICR and GS Consider the saturated CSM {f_i|-i: 1 ≤ i ≤ d}; let T_i be the transition matrix of f_i|-i, and q be a vector representing a joint pdf. It can be shown that qT_i=f_i|-i q_-i≡ℙ(q). That is, q transitioned by T_i, the I-projection of q onto C_i, and replacing the (x_i|-x_i)-conditional of q by f_i|-i are the same operation. But the commonality ends here. We choose conditional replacement because it is the easiest to modify for non-full conditionals.
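The identity qT_i=f_i|-i q_-i≡ℙ(q) is easy to verify numerically. The following sketch (our own illustration) builds the transition matrix T_1 of the full conditional f_1|2 from the sticky example above and checks that transitioning an arbitrary joint pdf by T_1 coincides with replacing its (x_1|x_2)-conditional.

```python
import numpy as np

# Full conditional f_{1|2}(x1 | x2) of the sticky example (rows: x1, columns: x2).
f1_2 = np.array([[100000/100001, 100000/100001, 7/8],
                 [1/100001,      1/100001,      1/8]])

def replace_conditional(q):
    """P(q): keep the x2-marginal of q, replace its (x1|x2)-conditional by f_{1|2}."""
    return f1_2 * q.sum(axis=0, keepdims=True)

def transition_matrix():
    """T_1 acting on joints q(x1,x2), flattened row-major: (x1,x2) -> (x1',x2)."""
    n1, n2 = f1_2.shape
    T = np.zeros((n1 * n2, n1 * n2))
    for x1 in range(n1):
        for x2 in range(n2):
            for x1p in range(n1):
                T[x1 * n2 + x2, x1p * n2 + x2] = f1_2[x1p, x2]
    return T

rng = np.random.default_rng(1)
q = rng.random((2, 3)); q /= q.sum()        # an arbitrary joint pdf on the 2 x 3 cells
lhs = q.reshape(-1) @ transition_matrix()   # q transitioned by T_1
rhs = replace_conditional(q).reshape(-1)    # f_{1|2} q_2, i.e. P(q)
print(np.allclose(lhs, rhs))                # True: the two operations agree
```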
Also, Rule A of Algorithm <ref> is intuitively necessary because it is the only circumstance under which conditional replacement can be executed. GS is justified by Markov chain theory, which cannot be applied to incompatible or unsaturated CSMs. A popular remedy is to expand every non-full conditional into a full conditional. But such a practice may blindly lead GS to use an impermissible updating cycle and cause GS to sample from a distribution that is not the target. We show that identification of the permissible updating cycles is critical for the execution of ICR, while GS does not need to pay attention to it, because Rule A and Rule B are automatically satisfied for full conditionals. §.§ Use Gibbs ensemble to find the optimal joint distribution Graphically, a conditional model is depicted by a cyclic directed graph with a feedback loop. <cit.> call such a graphical model a dependency network, and their objective is to synthesize one joint distribution from a saturated CSM derived empirically, without regard to compatibility. They used GS based on incompatible full conditionals for the synthesis, and coined the term pseudo-Gibbs sampler (PGS). They claimed that different updating cycles of PGS will converge to nearly identical stationary distributions when the data are large, but statisticians have refuted such a claim. For example, <cit.> stated “the simulations (imputations) never converge to a single distribution, rather the distribution depends upon the order of the updating and when the updating stopped.” <cit.> also stated “Gibbs samplers based on a set of densities that are not compatible result in Markov chains that are null, that is, they are either null recurrent or transient.” In fact, <cit.> stated that PGS's “theoretical properties are largely unknown and no doubt considerable caution must be exercised.” <cit.> called the stationary distributions of PGS pseudo-Gibbs distributions (PGDs). According to <cit.>, an incompatible CSM faces the multiplicity problem: there are many different models that have about the same merit. He suggests that “aggregating over a large set of competing models can reduce the nonuniqueness, while improving accuracy.” In addition, the resulting model “is also more stable.” <cit.> <cit.> named the collection of d! PGDs of a saturated CSM the Gibbs ensemble, and proposed to use a weighted sum of PGDs as the final model. Building the ensemble requires running d! long chains of Gibbs sampling, which makes the computational burden heavy, if not impossible, for large d. For instance, <cit.> used two chains of 1,000,000 GS samples each to approximate π^(1,2) and π^(2,1), even though {f_1|2, f_2|1} are two 2 × 2 conditionals. From d full conditionals, ICR produces d PGDs in one batch, hence reducing the computational burden by an order of magnitude. <cit.> considered ensembles only for saturated CSMs, because PGS cannot sample an unsaturated CSM. As we have shown, the size of the Gibbs ensemble of an unsaturated CSM is considerably less than d!, because only permissible updating cycles need to be entertained. This understanding makes the computations for unsaturated CSMs less prohibitive. In Example <ref>, {f_1|2345,f_2|1345,f_3|145,f_4|15,f_5|1234} has only six stationary distributions in two batches, not 120. The Gibbs ensemble is optimized by computing a weighted mixture of these six distributions. The deviance of the mixture relative to the CSM is smaller than that of every individual PGD.
Different deviance measures, such as the K-L divergence, Pearson chi-square X^2, and Freeman-Tukey F^2, have been considered; therefore, the optimal joint will be deviance-dependent. §.§ Comparisons between the power method and ICR Returning to {f_i|-i: 1≤ i ≤ d}, let T_i be the transition matrix of f_i|-i, and T=T_1⋯ T_d. The power method uses the row average of T^k as the stationary distribution for T. But in practice, the power method often encounters a sparse T of enormous size when d is large; thus it is not practical. ICR computes at least as fast as the power method, and it has the following computational advantages:
* One cycle of ICR computes d stationary densities, while the power method requires d sequences. For d=3, ICR produces the mutually stationary joints π^(1,2,3), π^(2,3,1), and π^(3,1,2), whereas the power method needs to evaluate 3 separate sequences, (T_1T_2T_3)^k, (T_2T_3T_1)^k and (T_3T_1T_2)^k, until convergence.
* The size of the 2-dimensional T increases exponentially with d, while ICR works with d-dimensional arrays.
* The power method cannot be applied to unsaturated conditional models because the transition matrices of full and of non-full conditionals have different sizes.
* When T is a reducible matrix, the power method often fails.
§.§ Method of alternating projections (MAP) Traditionally, GS considers T=T_1⋯ T_d as one entity; hence, the effect of the individual T_i becomes latent. However, treating the T_i separately can yield operational advantages; see, for example, <cit.> and <cit.>, who used the method of alternating projections (MAP) of <cit.> to find “minimal sufficient subfields”. Also, it should not be a surprise that our conditional replacement mapping ℙ onto C_i is Burkholder's conditional expectation given C_i. More recently, <cit.> show that GS is a MAP when every T_i is considered separately. When the saturated CSM is compatible, the proof in <cit.> guarantees the convergence of ICR in norm, but not in K-L divergence. However, CSMs often involve incompatible models with non-full conditionals. Algorithm <ref> is a MAP, but it differs from an ordinary MAP in the following aspects:
* MAP is commonly used to approximate one fixed point in ⋂_i C_i, see <cit.>. Here, we show that MAP can also be used to pursue multiple fixed points, one in each C_i.
* MAP usually projects onto closed subsets of the same space, say H={all the joint distributions over S(f_i|-i) }. For a saturated CSM, every C_i is a subset of H. But the C_i defined by a non-full conditional is not a subset of H, but of a different space. Examples here show that MAP can be applied to closed subsets of different spaces, as long as the projections respect the hierarchy between spaces, i.e., Rule A.
Because of these two aspects, a new concept of stationarity is needed; mutual stationarity is better defined collectively, not individually. Figure <ref> illustrates such pursuits of ℙ with d=3. Distributions q^(3k+i) within each C_i converge monotonically to the stationary distribution π^(j,k,i), and ℙ(π^(j,k,i))= π^(k,i,j). Minimal context and little background knowledge are required to understand the replacement of a conditional distribution and the simple proof of Theorem <ref>. Our goal is to make ICR, as an algorithm, easily understood and appreciated by statisticians and data scientists who have little familiarity with Markov chain theory or Hilbert space.
Another popular MAP algorithm is IPF, which hardly refers to Hilbert spaces, orthogonal projections or conditional expectations; instead, it is described as replacing marginal densities iteratively; see <cit.> and <cit.>. Finally, much of the MAP literature deals with continuous functions over convex domains. The algorithm “divide-then-ICR” and the proof of Theorem <ref> can easily be carried over to continuous distributions provided the integrals are finite. Marginalization of a continuous density is the computational obstacle of ICR. <cit.> studied alternating I-projection of a regular Gaussian distribution onto the intersection of spaces characterized by Gaussian conditionals (a C_i defined by a full conditional) and a Gaussian marginal distribution (another C_j defined by a non-full conditional). For Gaussian distributions, marginalization is straightforward. Part of his algorithm <cit.> is similar to ICR. His model placed restrictions on the conditionals that guarantee compatibility (C_i ∩ C_j ≠∅) and hence has unique stationarity; he did not consider incompatible cases or discrete densities. § CONCLUSION When the number of variables is large and the data size is relatively small, subjective or objective variable selection is necessary; hence, unsaturated conditional models are inevitable. However, in the past, only saturated conditional models had been considered—<cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>—due to a lack of computational tools. On the other front, <cit.> used linear equations/algebra to check compatibility; their methods quickly run into the curse of dimensionality. ICR was invented to fit unsaturated conditional models and to check their compatibility using computing rather than algebra. ICR provides the channel to apply computing power to the issues of conditional modeling. It seems to us that ICR is the right choice for CSM because it is multiplying by the transition matrix (see Section <ref>), doing I-projection (see Section <ref>), and performing conditional expectation (see Section <ref>), all at the same time. ICR, along with “divide-then-ICR” and parallelization, can efficiently compute all of the mutually stationary distributions, which are called the Gibbs ensemble. We are in agreement with <cit.> and <cit.> that a fair-minded mixture of the Gibbs ensemble is a sensible approach in Stage III to resolve the multiplicity problem. Any practical algorithm must be easy to scale and require little expertise to tune. ICR and the ensemble optimization meet both criteria. § APPENDIX §.§ The proof of the Pythagoras equality Because τ∈ C_1, it can be written as τ=f_1|2τ_2, and the K-L divergence between q and τ is I(q;τ) = ∑_i,j q(i,j) log[q(i,j)/(f_1|2(i|j)τ_2(j))] = ∑_i,j q(i,j) log[q(i,j)/(f_1|2(i|j)q_2(j))] + ∑_i,j q(i,j) log[(f_1|2(i|j)q_2(j))/(f_1|2(i|j)τ_2(j))] = I(q; f_1|2q_2) + I(q_2;τ_2) = I(q; f_1|2q_2) + I(f_1|2q_2;τ), because I(q_2;τ_2) = ∑_j q_2(j) log[q_2(j)/τ_2(j)] ∑_i f_1|2(i|j) = ∑_i,j f_1|2(i|j)q_2(j) log[q_2(j)/τ_2(j)] = ∑_i,j f_1|2(i|j)q_2(j) log[(f_1|2(i|j)q_2(j))/(f_1|2(i|j)τ_2(j))] = I(f_1|2q_2;τ). [Arnold et al.(1996)]Arnold1996 Arnold B. C., Castillo E., & Sarabia, J. M. (1996). Specification of distributions by combinations of marginal and conditional distributions. Statistics & Probability Letters, 26, 153–157. [Arnold et al.(2002)]Arnold2002 Arnold B. C., Castillo E., & Sarabia, J. M. (2002). Exact and near compatibility of discrete conditional distributions. Computational Statistics and Data Analysis, 40, 231–252. [Arnold et al.(2004)]Arnold2004 Arnold B.
C., Castillo E., & Sarabia, J. M. (2004). Compatibility of partial or complete conditional probability specifications. Journal of Statistical Planning and Inference, 123, 133–159. [Besag(1974)]Besag1974 Besag J. (1974). Spatial interaction and the statistical analysis of lattice systems (with discussion). Journal of the Royal Statistical Society: Series B, 36, 192–236. [Besag(2001)]Besag2001 Besag J. (2001). Comment on “Conditionally specified distributions: an introduction. Statistical Science, 16, 265–267. [Breiman(2001)]Breiman2001 Breiman L. (2001). Statistical modeling: the two cultures. Statistical Science, 16, 199–215. [Burkholder and Chow(1961)]Burkholder1961 Burkholder D. L., & Chow Y. S. (1961). Iterates of conditional expection operators. Proceedings of the American Mathematical Society, 12, 490–495. [Burkholder(1962)]Burkholder1962 Burkholder D. L. (1962). Successive conditional expectations of an integrable function. Annals of Mathematical Statistics, 33, 887–893. [Casella(1996)]Casella1996 Casella G. (1996). Statistical inference and Monte Carlo algorithms. Test, 5, 249–344. [Chen et al.(2013)]Chen2013 Chen S.-H., Ip E. H., & Wang, Y. J. (2013). Gibbs ensembles for incompatible dependency networks. WIREs Computational Statistics, 5, 478–485. [Chen and Ip(2015)]Chen2015 Chen S.-H., & Ip, E. H. (2015). Behaviour of the Gibbs sampler when conditional distributions are potentially incompatible. Journal of Statistical Computation and Simulation, 85, 3266–3275. [Cramer(1998)]Cramer1998 Cramer E. (1998). Conditional iterative proportional fitting for Gaussian distributions. Journal of Multivariate Analysis, 65, 261–276. [Darroch and Ratcliff(1972)]Darroch1972 Darroch J. N., & Ratcliff, D. (1972). Generalized iterative scaling for log-linear models. Annals of Mathematical Statistics, 43, 1470–1480. [Diaconis et al.(2010)]Diaconis2010 Diaconis P., Khare K., & Saloff-Coste, L. (2010). Stochastic alternating projections. Illinois Journal of Mathematics, 54, 963–979. [Gelman and Raghunathan(2001)]Gelman2001 Gelman A., & Raghunathan T. E. (2001). Comment on “Conditionally specified distributions: an introduction”. Statistical Science, 16, 268–269. [Heckerman et al.(2000)]Heckerman2000 Heckerman D., Chickering D. M., Meek C., Rounthwaite R., & Kadie C. (2000). Dependency networks for inference, collaborative filtering, and data visualization. Journal of Machine Learning Research, 1, 49–75. [Kaiser and Cressie(2000)]Kaiser2000 Kaiser M. S., & Cressie N. (2000). The construction of multivariate distributions from Markov random field. Journal of Multivariate Analysis, 73, 199–220. [Kuo and Wang(2018)]Kuo2018 Kuo, K.-L., & Wang, Y. J. (2018). Simulating conditionally specified models. Journal of Multivariate Analysis, 167, 171–180. [Kuo and Wang(2019)]Kuo2019 Kuo K.-L., & Wang, Y. J. (2019). Pseudo-Gibbs sampler for discrete conditional distributions. Annals of the Institute of Statistical Mathematics, 71, 93–105. [Raghunathan et al.(2001)]Raghunathan2001 Raghunathan T. E., Lepkowksi J. M., van Hoewyk J., & Solenberger, P. (2001). A multivariate technique for multiply imputing missing values using a sequence of regression models. Survey Methodology, 27, 85–95. [Smith and Roberts(1993)]Smith1993 Smith A. F. M., & Roberts G. O. (1993). Bayesian computation via the Gibbs sampler and related Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B, 55, 3–23. [van Buuren(2007)]vanBuuren2007 van Buuren S. (2007). 
Multiple imputation of discrete and continuous data by fully conditional specification. Statistical Methods in Medical Research, 16, 219–242. [van Dyk and Park(2008)]vanDyk2008 van Dyk D. A., & Park T. (2008). Partially collapsed Gibbs samplers: theory and methods. Journal of the American Statistical Association, 103, 790–796. [von Neumann(1950)]Neumann1950 von Neumann J. (1950). Functional Operators, Vol. 2. Princeton: Princeton University Press. [Wang(1993)]Wang1993 Wang Y. J. (1993). Construction of continous bivariate density fuctions. Statistica Sinica, 3, 173-187. [Wang and Ip(2008)]Wang2008 Wang Y. J., & Ip E. H. (2008). Conditionally specified continuous distributions. Biometrika, 95, 735–746. [Williams(2001)]Williams2001 Williams D. (2001). Weighing the Odds, Cambridge: Cambridge University Press.
http://arxiv.org/abs/2307.02348v1
20230705150642
Quantum Limits of Position and Polarizability Estimation in the Optical Near Field
[ "Lukas Kienesberger", "Thomas Juffmann", "Stefan Nimmrichter" ]
quant-ph
[ "quant-ph", "physics.optics" ]
University of Vienna, Faculty of Physics, VCQ, A-1090 Vienna, Austria University of Vienna, Max Perutz Laboratories, Department of Structural and Computational Biology, A-1030 Vienna, Austria Department of Physics, Ludwig-Maximilians-Universität München, Theresienstraße 37, D-80333 München, Germany University of Vienna, Faculty of Physics, VCQ, A-1090 Vienna, Austria University of Vienna, Max Perutz Laboratories, Department of Structural and Computational Biology, A-1030 Vienna, Austria Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Straße 3, 57068 Siegen, Germany Optical near fields are at the heart of various applications in sensing and imaging. We investigate dipole scattering as a parameter estimation problem and show that optical near-fields carry more information about the location and the polarizability of the scatterer than the respective far fields. This increase in information originates from and occurs simultaneously with the scattering process itself. Our calculations also yield the far-field localization limit for dipoles in free space. Quantum Limits of Position and Polarizability Estimation in the Optical Near Field Stefan Nimmrichter August 1, 2023 ================================================================================== Near fields have applications ranging from nanofabrication <cit.> to sensing <cit.> and imaging <cit.>. They enable enhanced, highly localized interactions and label-free imaging at a spatial resolution beyond the diffraction limit, with illumination wavelengths from the optical to the radio-frequency range. With recent advances in far-field label-free super-resolution imaging <cit.>, the question arises whether there is a fundamental advantage of operating in the near-field regime. We approach this question as a parameter estimation task. In optical imaging, information about a parameter of interest is encoded into the state of the probing light. It is quantified by the quantum Fisher information (QFI). Information retrieved in a specific measurement on that probe state is quantified by the Fisher information (FI) <cit.>. These two quantities determine the (Quantum) Cramér-Rao bounds (QCRB) on the minimum parameter estimation variance achievable for a specific probe state or measurement, respectively. This parameter estimation framework at hand, one can then analyze and improve measurement techniques in practice. The localization precision was optimized in fluorescence microscopy <cit.> and interferometric scattering microscopy <cit.>, the phase estimation precision in phase microscopy and holography <cit.>, and the lifetime estimation precision in fluorescence lifetime microscopy <cit.>. One could also optimize measurements in challenging scenarios, e.g., when an object of interest is embedded in a highly scattering medium <cit.>. These ideas were recently extended to electron microscopy, where dose-induced damage limits the number of probe interactions and the information obtained per electron becomes a crucial parameter <cit.>. Here, we consider optical microscopy and calculate the (Q)FI and (Q)CRB regarding the position and polarizability of a scatterer both in the near and far field. The (Q)CRB on localization are relevant for tracking <cit.> and imaging, while those on polarizability are relevant for sizing and mass photometry applications <cit.>. 
We first describe the scattering process classically and show that measurements in the optical near field, at distances closer than the probe wavelength, can lead to significantly lower (i.e. better) CRB than in the far field. We find that, while the ideally achievable uncertainties for position and polarizability estimation are constant in the far field, they can improve, respectively, with the third and the second power of the detector distance in the near field. We then solve the time-dependent quantum scattering problem and find that the QFI is significantly enhanced while the probe-sample interaction takes place. The far-field QCRB bounds neither the CRB nor the QCRB in the near field, and near-field measurements can therefore be more precise than any (coherent) far-field measurement performed with the same probe light. Dipole scattering model.—We consider the setting sketched in Fig. <ref>: a dipole scatterer located at _0 is illuminated by coherent, linearly polarized, and monochromatic light propagating along the z-axis with wave vector = _z. Its amplitude is given by E^ in and its polarization by _x. The scatterer's linear response to this field is characterized by a scalar dipole polarizability χ_0. The task is to estimate χ_0 and the position _0 = (x_0,y_0,z_0) by measuring the light in a position-resolving photo-detector placed in the near or far field of the scatterer. We will approach the estimation task in two ways corresponding to different degrees of scrutiny. The first approach is phenomenological: we treat the incident light as a plane-wave field of wavelength = 2π/, ^ in (,t) = E^ in_x e^i(z-ct) and ^ in = _z ×^ in/c, and the scatterer as the classical induced Hertz dipole (t) = 2ϵ_0 χ_0 ^ in (_0,t) that oscillates at the light frequency = c. Information about the scatterer's position _0 and polarizability χ_0 is broadcast to an ideal position-resolving (and backaction-free) photodetector through dipole radiation, E^ sc( r, t) = ^3 χ_0 E^ in/2π e^i (ρ+z_0-ct)[ (_×_x) ×_/ρ. . + _x - 3_(_x ·_)/(ρ)^3 (iρ-1) ], B^ sc( r, t) = i^3 χ_0 E^ in/2π c e^i (ρ+z_0-ct)_×_x/(ρ)^2 (1 - iρ), where we define = r-r_0 and _ = /ρ. We consider a planar detector surface in the z=Z plane here; see Sec. I in <cit.> for a hemispherical detector of radius R. Our second approach is a dynamical scattering model: the dipole is a quantum harmonic oscillator of frequency ω_0 aligned with the electric field of the incident light, which is a long Gaussian pulse occupying a narrow band Δω∼ 1/τ around < ω_0. In the multipolar gauge <cit.>, the light-matter coupling reduces to the well-known dipole Hamiltonian (see Sec. II in <cit.>), _I = ( + ^) ∑_,√(ħ c k/2ϵ_0 L^3)d_0 _x·_/1+(ka_0)^2[ e^i·_0/i_ + h.c.], with the dipole's ladder operator, d_0 ∈ℝ its strength parameter, and _ the bosonic operators associated to plane-wave modes of wave vectors and transverse polarizations _⊥ (=1,2) in the mode volume L^3. We alleviate high-frequency divergences arising from an ideal point dipole by introducing a regularisation parameter a_0 > 0 <cit.>. It describes the spatial extension of an exponentially localized polarization density, suppressing the coupling to wave numbers greater than 1/a_0. Corrections to the dipole approximation are negligible as long as a_0 is much smaller than the populated wavelengths. Note that we do not truncate the dipole to a two-level system, as this would complicate the calculation and is known to cause problems with gauge invariance <cit.>. 
Classical near-field CRB.—We start with the phenomenological model and evaluate how well an ideal detector can resolve the parameters = (χ_0,x_0,y_0,z_0) of the scatterer based on the absorbed radiation. A single detector pixel of area d A at position sees a light intensity I (,) = _· (,), with _ the unit vector orthogonal to the pixel surface and (,) the time-averaged Poynting vector of the total field at ; it depends on through the dipole field (<ref>). The intensity is static in the case of plane-wave illumination, which yields an average of d n̅ (,) = I (,) τ d A /ħω detected photons over a measurement time window τ. According to a Poissonian detection model <cit.>, the likelihood to count n photons in each pixel is given by p(n|,) = e^-d n̅ (,) [d n̅ (,)]^n /n!. For narrow-band pulses with a slowly varying input amplitude E^ in (t) = E^ in g(t), we can neglect the variation of the scattered fields (<ref>) over the pulse spectrum. Photodetection then realizes an inhomogeneous Poisson process described by the same likelihood, but with τ = ∫ d t |g(t)|^2. Given that the detector pixels correspond to independent, non-overlapping spatial field modes, their combined measurement amounts to taking the product of the individual likelihoods. As the measurement statistics depends on the scatterer's parameters , we can estimate these parameters from a sample of measurement data. The achievable precision based on large data samples obeys the CRB, determined by the FI matrix ℐ_j ℓ() = τ/ħω∫_pixels1/I(,)∂ I(,)/∂θ_j∂ I(,)/∂θ_ℓ dA, which subsumes the information of all detector pixels <cit.>. Specifically, the variance of any unbiased estimator of one of the parameters θ_ℓ, ℓ = 0,1,2,3, is lower-bounded by (Δθ_ℓ)^2 ≥ [ℐ^-1 () ]_ℓℓ per measurement repetition, in the limit of many repetitions. Here, ℐ^-1 denotes the inverse matrix of (<ref>). Figure <ref> shows the CRB for an infinite planar detector (_=_z) at varying distance Z, in an exemplary setting with polarizability χ_0 = 13.0nm^3 at = 532nm and a number N^ sc = σ_ tot cϵ_0 |E^ in|^2 τ/2ħ of scattered photons, with σ_ tot = 2^4|χ_0|^2/3π the total scattering cross section. We plot the CRB for (a) x_0, (b) z_0, and (c) χ_0 estimation, comparing forward (Z>0) and backward scattering (Z<0) at a point dipole, as well as forward scattering at a polarization density of size a_0 = 25nm; see Sec. I in <cit.>. The CRB always saturate to a distance-independent value in the far field, |Z|≳. Conversely, the dipole fields (<ref>) diverge at the scatterer position, and so does the Fisher information, which implies that the CRB would vanish for an ideal detector placed arbitrarily closely. In the intermediate near field not too close to the dipole, (4πχ_0)^1/3≪ |Z| ≪, we can neglect the contribution ^ sc×^ sc* to the Poynting vector, which results in the scaling Δθ_0 ∼ |Z|^2 and Δθ_ℓ≠ 0∼ |Z|^3 for the CRB of polarizability and position, respectively. This scaling is seen in the diagrams for |Z|≲ 0.1, though finite-size corrections limit the precision when |Z|∼ a_0. The dashed line marks the fundamental QCRB, optimized over all possible far-field detection schemes. We will define and extend it to the near field in the following. Here we observe that the saturated bounds on the right of Fig. <ref> are worse than the QCRB, which shows that the specified detection scheme is not optimal for estimating the parameters of interest. 
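As a concrete illustration of this detection model, the following self-contained Python sketch evaluates the scattered dipole fields, the time-averaged pixel intensities on a planar detector, and assembles the Fisher information matrix of the expression above by finite differences. It is our own sketch, not the authors' code: the wavelength and polarizability follow the example in the text (λ = 532 nm, χ_0 = 13.0 nm³), while the pixel grid, its extent, the finite-difference steps, and the photon-number prefactor N (standing in for τ/ħω) are simplifying assumptions for demonstration.

```python
import numpy as np

c = 2.99792458e8
mu0 = 4e-7 * np.pi

lam = 532e-9                              # probe wavelength
k = 2 * np.pi / lam
E_in = 1.0                                # incident amplitude (arbitrary units)
ex, ez = np.array([1., 0., 0.]), np.array([0., 0., 1.])

def fields(r, theta):
    """Total E and B (time factor dropped) at r, for theta = (chi0, x0, y0, z0)."""
    chi0, r0 = theta[0], np.array(theta[1:])
    E_i = E_in * ex * np.exp(1j * k * r[2])            # incident plane wave
    B_i = np.cross(ez, E_i) / c
    d = r - r0
    rho = np.linalg.norm(d)
    n = d / rho
    pre = k**3 * chi0 * E_in / (2 * np.pi) * np.exp(1j * k * (rho + r0[2]))
    E_s = pre * (np.cross(np.cross(n, ex), n) / (k * rho)
                 + (ex - 3 * n * np.dot(ex, n)) * (1j * k * rho - 1) / (k * rho)**3)
    B_s = 1j * pre / c * np.cross(n, ex) * (1 - 1j * k * rho) / (k * rho)**2
    return E_i + E_s, B_i + B_s

def intensity(r, theta):
    """Time-averaged Poynting flux through a detector pixel with normal e_z."""
    E, B = fields(r, theta)
    S = np.real(np.cross(E, np.conj(B))) / (2 * mu0)
    return S[2]

def fisher_matrix(theta, Z=100e-9, half_width=400e-9, n_pix=41, N=1.0):
    """FI matrix over a square pixel grid at z = Z; N stands for tau/(hbar*omega)."""
    xs = np.linspace(-half_width, half_width, n_pix)
    dA = (xs[1] - xs[0])**2
    steps = np.array([1e-3 * theta[0], 1e-10, 1e-10, 1e-10])   # finite-difference steps
    F = np.zeros((4, 4))
    for x in xs:
        for y in xs:
            r = np.array([x, y, Z])
            I0 = intensity(r, theta)
            dI = np.empty(4)
            for j in range(4):
                tp, tm = np.array(theta), np.array(theta)
                tp[j] += steps[j]; tm[j] -= steps[j]
                dI[j] = (intensity(r, tp) - intensity(r, tm)) / (2 * steps[j])
            F += N * np.outer(dI, dI) / I0 * dA
    return F

theta0 = np.array([13.0e-27, 0., 0., 0.])      # chi0 = 13.0 nm^3, scatterer at the origin
F = fisher_matrix(theta0)
crb = np.sqrt(np.diag(np.linalg.inv(F)))       # Cramer-Rao bounds for (chi0, x0, y0, z0)
print(crb)
```

As noted above, the bounds obtained from such a pixelated planar detector saturate above the far-field QCRB, i.e., this particular detection scheme is not optimal.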
This is in contrast to imaging setups such as interferometric scattering microscopy <cit.>, coherent bright field microscopy <cit.>, or dark-field microscopy <cit.>, where the QCRB can be reached under certain conditions <cit.>. Near-field QCRB.—For any parameter estimation problem with quantum systems, the asymptotically reachable precision can be bounded by minimizing the CRB over all conceivable quantum measurement schemes. The resulting QCRB then depends solely on the quantum state of the probed system and its sensitivity to the parameters, but it may not be practically attainable. In our case, the system is the spectrum of electromagnetic modes around the scatterer and the state results from the unitary interaction between the scatterer and those modes described by the total Hamiltonian = ∑_,ħ c k _^_ + ħω_0 ^ + _I. The asymptotic Gaussian initial state |ψ^ in (-∞) at time t→-∞ describes an incident coherent light pulse with a spectral amplitude profile ^ in = (α^ in_)_, and the scatterer in the ground state, |ψ^ in (t) = ⊗_,_(α^ in_ e^-ick t) |⊗ |0, with (α) the displacement operator. Given the linear interaction Hamiltonian (<ref>), the state remains Gaussian at all times and is therefore fully characterized by the time evolution of its first and second moments in the mode operators, which depend on the parameters . At each point in time t, the reduced state of the radiation field is determined by a vector of mean coherent amplitudes, = with elements α_ (t) = _ (t), and by covariance matrix blocks Ξ = 2[∘^ - ∘^* ] - 𝕀 and Υ = 2[∘ - ∘ ] with '∘' denoting the dyadic product. The QCRB lower-bounds the precision of parameter estimates by an optimum over all possible measurement schemes of the field at time t, (Δθ_ℓ)^2 ≥ [𝒥^-1(,t)]_ℓℓ, set by the inverse of the quantum Fisher information matrix 𝒥 (,t). For Gaussian states, this matrix reads as <cit.> 𝒥_jℓ (,t) = 2 ∂^*/∂θ_j ∂/∂θ_jΞ Υ Υ^* Ξ^* ^-1∂/∂θ_ℓ ∂^*/∂θ_ℓ + 𝒱_jℓ (,t) ≈ 2 [ ∂^*∂θ_j·∂∂θ_ℓ + c.c. ] + 𝒱_jℓ (,t) - 2∂^*/∂θ_j ∂/∂θ_jΞ - 𝕀 Υ Υ^* Ξ^* - 𝕀∂/∂θ_ℓ ∂^*/∂θ_ℓ. In the second line, we expand the inverse of the covariance matrix to first order around the identity matrix, a good approximation for realistic weak scatterers. The lengthy additional term 𝒱 does not depend on the amplitudes and is thus present even when there is no incident light. It stems from the higher-order effect that the presence of the scatterer squeezes the surrounding mode vacuum, which for realistic light intensities would add only little to the information contained in the -terms in (<ref>). Assuming that the parameter estimation is based on coherent amplitude measurements, we can safely ignore 𝒱 in the following. The Heisenberg time evolution of the field operators under can be solved in a lengthy calculation assuming weak coupling (Sec. III in <cit.>). In particular, the mean amplitudes (t) are linearly related to the incident ^ in of the input state (<ref>), α_(t) = ∑_( u_,α_^ in e^-ick t + v_,α_^ in * e^ick t). The transformation coefficients are u_, = √(kp)/L^3_·_x/1+(a_0p)^2_·_x/1+(a_0k)^2χ (ck)e^-i(-)·_0/p-k-i0^+ + δ_,, v_, = -√(kp)/L^3_·_x/1+(a_0p)^2_·_x/1+(a_0k)^2χ (ck) e^-i(+)·_0/p+k. The matrix elements of Ξ - 𝕀 and Υ can also be given. However, since they themselves are weak-coupling corrections, their contribution in the last line of (<ref>) can be safely neglected, as we demonstrate in Sec. IV of <cit.>. The expression (<ref>) simplifies greatly in the far field. 
Introducing the asymptotic output amplitudes ^ out as α_^ out = lim_t→∞α_ (t) e^ickt and treating the -modes as a continuum (see Sec. V in <cit.>), we arrive at α_^ out = α_^ in + i p/4π^2 ∫ d^3 k δ (p-k) χ (ck) e^i(-)·_0 ×∑_ (_x·_) (_x·_) α^ in_. This amounts to elastic light scattering via a dipole polarizability, described by the linear response function χ(ω) = d_0^2 ω_0/ħϵ_0 (ω_0^2-ω^2). Far off resonance, it is approximately constant, χ (ω≪ω_0) ≈ d_0^2/ħϵ_0ω_0 ≈χ_0, reconciling the quantum oscillator model with the previous phenomenological description based on the polarizability χ_0 = χ(). Indeed, we show in Sec. VI of <cit.> that the light field expectation values for monochromatic input match the dipole radiation terms (<ref>). To leading weak-coupling order in the far field, the QFI matrix (<ref>) reduces to a diagonal matrix with elements [𝒥_ℓℓ (,∞)]_ℓ=0^3 = 8 ^4 Φ/15π[ 5, ^2|χ_0|^2, 2^2|χ_0|^2, 7^2|χ_0|^2 ] , with Φ = (1/L^2) ∑_ |α_^ in|^2 = cϵ_0 |E^ in|^2 τ/2ħ the number of incident photons per area. Consequently, the far-field precision limits for the polarizability and position of the scatterer scale with the incident wavelength like Δθ_0 ∝^2 and Δθ_1,2,3∝^3, respectively. Our result proves that the scattering matrix approach to QCRB <cit.> is valid for a single quantum dipole scatterer. The relative error bound of a polarizability estimate, Δχ_0 /χ_0 ≥ 1/2√(N^ sc), and the error bounds of position estimates relative to the wavelength, Δ_0/≥1/4π(√(5),√(5/2),√(5/7))/√(N^ sc), are all determined by the inverse square root of the number of scattered photons, N^ sc = σ_ totΦ. At transient times, while the scattering process is taking place, the QCRB improves drastically. The near field close to the scatterer temporarily occupies modes of much shorter wavelengths than the incident light pulse and thus contains more information about the parameters than the asymptotically outgoing light. For an ideal point dipole, the information content (<ref>) would even diverge, rendering this often used idealisation invalid here. Figure <ref> shows how the QFI about (a) x_0-position and (b) polarizability evolves in time, relative to the respective far-field values from (<ref>). We assume a light pulse of central wavelength = 532nm, with a Gaussian field profile E^ in (t) = E^ in e^-π t^2/2τ^2 of temporal width = 9.4fs, corresponding to α_^ in∝ E^ in e^-(k-2π/)^2^2/2π / i√(k) with = k_z and _ = _x. The scatterer has the size a_0=/21 ≈25nm, the polarizability χ_0 = 13.0nm^3, and is resonant to 2π c/ω_0 = 100nm; see Sec. IV in <cit.> for additional results. As the light pulse approaches, the information content in the field builds up and oscillates at about twice the optical frequency. The peak position information is reached when the pulse hits the scatterer around t=0, amplifying the far-field value here by more than 15 times. The peak value grows like (/ a_0)^4 with decreasing scatterer size a_0 → 0 assuming constant polarizability. The oscillations in Figure <ref> (b) show that information about the polarizability is enhanced in the near-field. While the QFI never exceeds the far-field limit here, it would for smaller a_0, amplifying like (/a_0)^2 for a_0→ 0. While the position uncertainty does not relate in a simple manner to the transient number of scattered photons N^ sc (t) in the near field, we find that the QCRB on polarizability estimates obeys Δχ_0 /χ_0 ≥ 1/2√(N^ sc(t)) at all times. 
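The far-field limits just quoted amount to a few lines of arithmetic. The short sketch below (ours; the incident photon flux Φ is an assumed value for illustration only) evaluates the diagonal far-field QFI elements and confirms that the resulting bounds coincide with the 1/√(N^sc) expressions given above.

```python
import numpy as np

lam = 532e-9                         # probe wavelength
k = 2 * np.pi / lam
chi0 = 13.0e-27                      # polarizability (13.0 nm^3)
Phi = 1e16 / (1e-6)**2               # assumed incident photons per area (10^16 per um^2)

sigma_tot = 2 * k**4 * abs(chi0)**2 / (3 * np.pi)     # total scattering cross section
N_sc = sigma_tot * Phi                                # number of scattered photons

# Diagonal far-field QFI elements for theta = (chi0, x0, y0, z0).
J = (8 * k**4 * Phi / (15 * np.pi)) * np.array(
        [5, k**2 * abs(chi0)**2, 2 * k**2 * abs(chi0)**2, 7 * k**2 * abs(chi0)**2])
qcrb = 1 / np.sqrt(J)                                 # quantum Cramer-Rao bounds

print("relative bound on chi0:", qcrb[0] / chi0, "=", 1 / (2 * np.sqrt(N_sc)))
print("position bounds / lambda:", qcrb[1:] / lam,
      "=", np.sqrt([5, 5/2, 5/7]) / (4 * np.pi * np.sqrt(N_sc)))
```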
Discussion.—We derived the (Q)FI in the fields scattered by a dipole both in a phenomenological model and in a time-dependent quantum scattering model. The former assumes an idealized time-integrating detector that could potentially be realized in experiments. We obtain CRB for location and polarizability estimation that improve in the near field with the third and the second power of the detector distance, respectively. The quantum model, on the other hand, provides us with a snapshot of the information content in the state of the field at a given point in time. The QCRB are independent of the measurement scheme, but depend on the scatterer size a_0, vanishing like (a_0/)^2 and a_0/ for location and polarizability estimation, respectively. Our calculations confirm that the transient state of the near field contains more information about the scatterer than what photodetectors could pick up at a distance. During the transient dipole-field dynamics, information flows back and forth between the dipole and the surrounding field, causing a pronounced oscillatory enhancement of the QFI during the scattering process, |t|≲, even though a fraction of the incident pulse energy has not reached the scatterer yet. After the interaction ceases, the near-field information is irrevocably lost. The far-field QCRB derived from (<ref>), √(N^ sc)Δχ_0 /χ_0 ≥ 0.50 and √(N^ sc)Δ_0/≥ (0.18,0.13,0.07), are independent of a_0 as long as a_0 ≪. They provide a lower bound for microscopy applications, regardless of the light collection geometry <cit.>. Our analysis of the textbook example of dipole radiation touches upon foundational concepts such as ultraviolet divergences and gauge invariance. The near-field QFI diverges in the point-dipole limit, which forced us to introduce a high-frequency cutoff amounting to a finite size a_0 of the dipole scatterer. At the same time, the QFI depends on the chosen electromagnetic gauge that fixes the light-matter coupling Hamiltonian <cit.>, indicating that the QFI has a different physical meaning in each gauge. In particular, gauge transformations that depend on the dipole position _0 can change how much information about _0 is contained in the (transverse) field degrees of freedom. By fixing the multipolar gauge, we ensured that any information exchange between the dipole scatterer and a model detector comprised of dipoles is exclusively mediated by the transverse field (see Sec. VII in <cit.>). Our QCRB results can thus be viewed as a fundamental bound on the achievable measurement precision with standard photo-detectors. Our near-field assessment compares favorably with far-field super-resolution techniques like single-molecule localization microscopy <cit.> or spatial mode demultiplexing <cit.>. Our results show that, when tracking particles in the near field, one could achieve a higher signal-to-noise ratio per detected photon. This could facilitate tracking within sensitive biological specimens <cit.> at much lower photo-damage. Harnessing the near-field advantage comes with the experimental challenge of placing a physical detector into the near field. This would not only affect the mode structure of the field, but currently also suffers from coupling inefficiencies: in near-field scanning optical microscopy <cit.>, the nanotip or aperture that scans across the sample can only pick up a fraction of the near-field light; photon-induced or optical near-field electron microscopy <cit.> suffers from a limited conversion efficiency between light and electrons. 
Follow-up research could include measurement back-action and the influence of the detector geometry on the near field. Specifying a detection mechanism based on, e.g., dipole-dipole interactions could resolve the subtleties regarding gauge freedom. It will further be interesting to compare our scattering treatment to Markovian quantum trajectory models <cit.>, which describe the information flow out of an open system (here: the scatterer) as a continuous measurement process. While we studied near fields in optical microscopy, our findings can be extended to the radio-frequency domain, provided that an appropriate noise model is chosen. Potential applications would range from communication and positioning <cit.> to the design of avalanche safety equipment <cit.>. We acknowledge fruitful discussions with Jonathan Dong. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101017902. Supplemental Material § CRB FOR A CLASSICAL HERTZ DIPOLE Here we complement the phenomenological approach of the main text, assuming stationary radiation from a classical field-induced Hertz dipole. We provide additional results for a hemispherical detector and discuss the behavior of the CRB in the near field. Finally, we state the result for a regularized finite-size dipole instead of a point dipole, for comparison to the quantum model. Hemispherical Detector As an alternative to the planar detector discussed in the main text (Fig. fig:sketch), we consider a hemispherical detector of radius R around the dipole scatterer, oriented in forward (Z>0) or backward (Z<0) direction of the incident light, as sketched in Fig. <ref>. The task is to resolve small deviations of the scatterer position and polarizability based on the detected photons from the scattered light. Using the same detector model and parameters as for Fig. fig:CRB_plots_planar in the main text, we plot the CRBs on position and polarizability in Fig. <ref>. The results are qualitatively similar to those of the planar detector, except for small oscillations as a function of the radius R due to interference effects at the detector edge. Asymptotic Behavior Near the dipole scatterer (r ≲ 1), the scattered fields scattered_field in the main text scale like ^ sc∼ 1/r^3, ^ sc∼ 1/r^2. Since we can also assume that (4πχ_0)^1/3≪ |Z|,R for most of the plotted range of detector distances, the terms in the Poynting vector, () = Re{[^ in () +^ sc ()] × [^ in () +^ sc ()]^*}/2μ_0, obey the hierarchy ^ in×^ in≫^ sc×^ in + ^ in×^ sc≫^ sc×^ sc. Therefore, in the formula eq:FIMPoissonContinuum for the FI matrix ℐ in the main text, the intensity I is dominated by the incident light term, while the derivatives of the intensity with respect to the scatterer parameters are dominated by the cross terms. To leading order in r with r∼ |Z|,R, we have ∂ I/∂θ_0 ∼ 1/r^3 and ∂ I/∂θ_j>0∼ 1/r^4. Integration over the detector surface contributes another factor 2π R^2 in the hemispherical case. In the planar case, the relevant detector area in the near field is of the order of π Z^2. Hence, the diagonal entries of the FI matrix scale like ℐ_00∼ R^-4,Z^-4 and ℐ_jj∼ R^-6,Z^-6 for j=1,2,3. Accordingly, the CRB for polarizability and position estimates scale like Δχ_0 ∼ R^2,Z^2 and Δ_0 ∼ R^3,Z^3, respectively, matching the slopes in Fig. fig:CRB_plots_planar in the main text and in Fig. <ref>. In the far field, we simply have ∂ I/∂θ_j ∼ R^-1,|Z|^-1 for all j, whereas I → const. 
Hence, the entries of the FI matrix should approach a constant value for R,|Z| →∞, which is also in accordance with our results. Finite-size scatterer For the case of a regularized dipole scatterer of effective size a_0>0, we shall interpret the field expectation values (<ref>) and (<ref>) obtained from the quantum model in Supplementary Section <ref> as classical fields with χ_0 ≡χ(c). Explicitly, ^ sc () = ^3 χ_0 E^ in/2π i[ (_×_x) ×_/ρ ℰ_1(ρ) . . + ( ℰ_2(ρ)/(ρ)^2 - ℰ_3(ρ)/(ρ)^3) (_x - 3_(_x·_)) ], ^ sc () = ^3 χ_0 E^ in/2π c i[ ℰ_2(ρ)/iρ - ℰ_3(ρ)/i(ρ)^2] _×_x, where ℰ_1(ρ) = i/1+( a_0)^2[e^iρ+e^-ρ/a_0/( a_0)^2], ℰ_2(ρ) = -1/1+( a_0)^2(e^iρ-ie^-ρ/a_0/ a_0), ℰ_3(ρ) = i/1+( a_0)^2(e^iρ-e^-ρ/a_0). § INTERACTION HAMILTONIAN Here, we review the steps leading to the dipole Hamiltonian interactionHamiltonian in the main text, which describes the scatterer-field interaction. Our starting point is the minimal coupling Hamiltonian of non-relativistic quantum electrodynamics between a single bound charge q and the electromagnetic field in the multipolar (PZW) gauge with respect to the dipole position _0 <cit.>, Ĥ = 1/2m[_e - q(_e+_0)]^2 + U(_e) + V_ self + ϵ_0/2∫ d^3 x {[ () + 1/ϵ_0P̂_T () ]^2 + c^2[∇×_T ()]^2 }. The charge is a quantum particle trapped in a potential U sourced by the opposite charge -q fixed at _0. The displacement of q from _0 is described by conjugated position and momentum operators _e, _e. The chosen gauge leads to the (infinite) Coulomb self-energy terms V_ self of both charges, while the quantized light field is described by the transverse vector potential _T() and its canonical conjugate (). The full multipolar-gauge vector potential is then given by its Fourier space representation () = _T() + ∇∫d^3k' /(2π)^3g_T(',)·(') = _T() - ∫d^3k' /(2π)^3_T(') e^-i'·_0 where g_T is defined as g_T(,) = - e^-i·_0∑__(_·) The Hamiltonian (<ref>) also contains the transverse polarization _T whose definition is also gauge-dependent. In the PZW gauge, it is given by _T() = - q [ _T (, _0 + _e) - _T (, _0) ] = qe^-i·_0∑__ (_·_e). We work in the electric dipole, or long-wavelength approximation. This amounts to assuming that the wavelengths impinging on the scatterer are much longer than the extent of the scatterer. More concretely, we assume ·_e ≪ 1. Fourier-transforming (<ref>), we have () = ∫d^3k /(2π)^3_T() [e^-i· - e^-i·_0]. Substituting = _0 + _e, and e^-i·_e-1 ≈ 0, this immediately yields (_0+_e) ≈ 0. The Hamiltonian (<ref>) then becomes Ĥ = _e^2 /2m+ U(_e) + V_ self + ϵ_0/2∫ d^3 x { ()^2 + c^2[∇×_T ()]^2 } + 1/2∫ d^3 x {1/ϵ_0_T ()^2 + 2_T()·() }. The _T()^2 term only involves the _e operator, and can be subsumed into U(_e) by defining U'(_e) = U(_e) + ∫ d^3 x _T ()^2/2ϵ_0. The last term describes the charge-field interaction, Ĥ_I = ∫ d^3 x () ·P̂_T () = ∫d^3k/(2π)^3 () ·P̂_T^† () = q ∫d^3k/(2π)^3 e^i·_0 () ·_e = q (_0)·_e, where we have used (<ref>) along with the fact that is transverse so that ∑_=1,2_(()·_) = (). This is the only term that entangles the field with the charge, whereas all the other terms act on either the field or the charge. Correspondingly, we define the free Hamiltonian Ĥ_0 = Ĥ - Ĥ_I. The multipolar gauge can be modified by introducing a high-frequency cutoff, k ≲ 1/a_0, to avoid the high-energy divergences inherent in the dipole approximation <cit.>. This amounts to setting g_T(,) = - e^-i·_0/1+(a_0k)^2∑__(_·), ensuring that _T(,) → 0 for k→∞. 
The vector potential and polarization in (<ref>) and (<ref>) are then () = _T() - ∫d^3k' /(2π)^3_T(') e^-i'·_0/1+(a_0k)^2, _T() = qe^-i·_0/1+(a_0k)^2∑__ (_·_e). The assumption (_0 + _e) ≈ 0 continues to hold as long as all modes with k ≥ 1/a_0 are unpopulated. This is true, and (<ref>) remains valid, if the scatterer is larger than a_0. We now quantize the field in the usual manner, by introducing discrete plane-wave modes in a box of volume L^3 and their operators, [_ , _^] = δ_δ_μ, such that _T () = ∑_√(ħ/2ϵ_0 L^3 c k)_( e^i·_ + e^-i·_^), () = -i ∑_√(ħ ck/2ϵ_0 L^3)_( e^i·_ - e^-i·_^). The continuum limit ∑_→ (L/2π)^3 ∑_∫ d^3 k will be carried out later. The interaction Hamiltonian becomes Ĥ_I = ∫ d^3 x () ·P̂_T () = ∫d^3k/(2π)^3 () ·P̂_T^† () = q ∫d^3k/(2π)^3 e^i·_0 () ·_e /1+(a_0k)^2 = q ∑_√(ħ c k/2ϵ_0 L^3)_·_e/1+(a_0k)^2[ e^i_0/ i â_ + h.c. ]. Finally, we restrict the motion of the bound charge to a one-dimensional harmonic motion in x-direction, setting the dipole operator q_e ≡ d_0 _x ( + ^), with the associated ladder operator. This leaves us with the model Hamiltonian interactionHamiltonian in the main text. By fixing the regularized gauge in (<ref>) and subsequently assuming (_0+_e)≈ 0, we have prevented the interaction Hamiltonian (<ref>) from coupling _e to any modes with wavelength λ≪ 2π/a_0. Effectively, the regularizing factor in -space relaxes the point-dipole assumption and gives the scatterer a finite transverse polarization density with profile e^-r_0/a_0/(4π ra_0^2). This is no violation of gauge invariance: mode populations are not directly measurable. What must be gauge-invariant are the probabilities of photon detection events. To calculate the latter, one must specify a concrete coupling between the detector and the system. We discuss this point in more detail later, in Supplementary Section <ref>. § TIME EVOLUTION Here we solve the combined quantum time evolution of the harmonic field and scatterer degrees of freedom under the Hamiltonian = _0 + _I, assuming an asymptotically free incident coherent pulse and the scatterer in the ground state at initial time t_0→ -∞, see eq:psiIn in the main text. The goal of the following calculation is to integrate the Heisenberg equations of motion for the mode operators. From this, we obtain the coherent amplitudes and the covariance matrix blocks Ξ,Υ used to calculate the quantum Fisher information matrix in the main text. The initial condition for the incident light amplitudes, α^ in_, is chosen such that the wave packet of the pulse is centered around the dipole position _0 at t=0. In the Hamiltonian, we identify the bare terms, _0 = ∑_,ħ c k ^__ + ħω_0 ^, and the interaction Hamiltonian, _I = ( + ^) d_0 _x ·(_0) = ( + ^) ∑_√(ħ c k/2ϵ_0 L^3)d_0 (_·_x)/1+(a_0k)^2[ e^i·_0/i_ + h.c. ]. The mode operators satisfy [_0, _] = -ħ ck a_ and [_0, ] = -ħω_0 with respect to the bare term, and so the equations of motion for the respective Heisenberg-picture mode operators take the form d/dt_H (t) = - iω_0 _H(t) + i/ħ[ _I, ]_H (t); d/dt_,H (t) = - i c k _, H (t) + i/ħ[ _I , _]_H (t) . Carrying out the remaining commutator with (<ref>) and integrating both equations of motion, we have the coupled integral equations _,H (t) = _^ in e^-icp(t-t_0) + C_∫_t_0^t dt' [ _H (t') + _H^ (t')] e^-icp(t-t'), C_ = √(ck/2ϵ_0ħ L^3)d_0 _x·_/1+(ka_0)^2 e^-i·_0 ; _H (t) = ^ in e^-iω_0(t-t_0) + ∫_t_0^t dt' e^-iω_0(t-t')∑_[C__,H^ (t') - C_^* _,H (t') ]. 
For clarity, we are now denoting the bare (Schrödinger-picture) mode operators acting on the separate Hilbert spaces of dipole and field by ^ in and ^ in_, as they appear as the initial conditions at t=t_0 here. Next we insert (<ref>) into (<ref>) to obtain an implicit integral equation for the field mode operators, _,H (t) = _^ in e^-icp(t-t_0) + C_∫_t_0^t dt' [ ^ in e^-iω_0(t'-t_0) + ^ in e^iω_0(t'-t_0)]e^-icp(t-t') + C_∫_t_0^t dt' e^-icp(t-t')∫_t_0^t' dt”(e^-iω_0(t'-t”) - c.c.) ∑_[ C__,H^ (t”) - h.c. ] . Under the usual assumption of weak coupling between scatterer and field modes, we may truncate (<ref>) at second order in C_ and replace the _,H (t”) under the double integral by the bare terms _^ ine^-ick(t”-t_0). This results in the expansion _,H (t) ≈_^ in e^-icp(t-t_0) + _,H^(1) (t) + _,H^(2) (t), with the first- and second-order contributions _,H^(1) (t) = C_∫_t_0^t dt' [ ^ in e^-iω_0(t'-t_0) + ^ in e^iω_0(t'-t_0)]e^-icp(t-t'), _,H^(2) (t) = C_∫_t_0^t dt' [ ∫_t_0^t' dt”(e^-iω_0(t'-t”) - c.c.) ∑_ [C__^ in e^ick(t”-t_0) - h.c.] ]e^-icp(t-t'). Coherent Amplitudes To obtain the coherent amplitudes, we take the expectation value of (<ref>) with respect to |ψ^ in, α_ (t) = ψ^ in|_,H (t)|ψ^ in. Since the scatterer is initially in the ground state, the ^ in terms vanish: ψ^ in | ^ in | ψ^ in = 0. We also recall that the input pulse amplitudes α_^ in are defined with respect to the scattering time t=0, ψ^ in | _^ in|ψ^ in = α_^ in e^-ickt_0. Hence, we have α_(t) ≈α_^ in e^-i c p t + α_^(2)(t), with α_^(2)(t) = C_∫_t_0^t dt' e^-icp(t-t')∫_t_0^t' dt”(e^-iω_0(t'-t”) - c.c.) ∑_[ C_α_^ in * e^ick t” - h.c. ]. Note that the -sum is simply the expectation value of the field quadrature at position _0 and time t”, ∑_ (C_α_^ in e^ickt” - c.c.) = -i d_0 ψ^ in(t”) | (_0) | ψ^ in(t”) . Here, | ψ^ in(t”) = e^-i_0(t”-t_0)/ħ| ψ^ in describes the incident light pulse of temporal width τ propagated from the initial t_0 to the time t”. Since the center of this pulse is chosen to hit the scatterer position at t”=0, the field expectation value (<ref>) vanishes for |t”| ≫τ. Letting h(t”) ≡ -i d_0 (e^-iω_0(t'-t”) - c.c.) ψ^ in(t”) | (_0) | ψ^ in(t”) be the t”-integrand function in (<ref>), it is then clear that its integral ∫_t_0^t' dt” h(t”) converges to a finite value ζ in the limit t_0 → -∞. In particular, this convergence is uniform over t' ∈ (-∞, t), and we claim that ζ = lim_η→ 0lim_t_0→ - ∞∫_t_0^t' dt” e^η t” h(t”), and that convergence in η>0 is uniform over t' ∈ (-∞, t). To show this, let ϵ>0. By virtue of the triangle and the Cauchy-Schwarz inequalities, | ζ - ∫_-∞^t' dt” e^η t” h(t”) | ≤| ζ - ∫_T^t' dt” h(t”) | + | ∫_-∞^T dt” e^η t” h(t”) | + | ∫_T^t' dt” (1-e^η t”) h(t”) | ≤| ζ - ∫_T^t' dt” h(t”) | + e^η T| ∫_-∞^T dt” h(t”) | + |1-e^η T| | ∫_T^t' dt” h(t”) |, where an arbitrary intermediate time T < t' was introduced. Due to the aforementioned uniform convergence of ∫_t_0^t' dt” h(t”), there exists a T_0 (sufficiently close to t_0 → -∞, and independent of t') such that both of the first two terms in (<ref>) are less than ϵ/3 whenever T ≤min(T_0, t'). Having chosen this T_0, we set T=min(T_0, t') then choose a sufficiently small η such that the last term is also less than ϵ/3. This choice is independent of t', because either T_0 < t', in which case T = T_0 independent of t', or T_0 ≥ t', in which case T=t'. In the latter case the last term in (<ref>) is identically zero, so η may be chosen freely. 
In conclusion, this choice of T and η is independent of t' and bounds the entire expression by ϵ, proving our claim. We can now make use of the auxiliary construction (<ref>) with η→ 0 to take the limit t_0 →-∞ and carry out the integrals in (<ref>). This yields α_(t) = ∑_ u_,α_^ ine^-ickt + v_,α_^ in * e^ickt + 𝒪(d_0^4), with the transformation coefficients u_, = √(kp)/L^3(_x ·_) (_x ·_) e^i(-)·_0/[1+(a_0 k)^2][1+(a_0 p)^2]χ(ck+i0^+)/p-k-i0^+ + δ_,, v_, = √(kp)/L^3(_x ·_) (_x ·_) e^-i(+)·_0/[1+(a_0 k)^2][1+(a_0 p)^2]χ(ck-i0^+)/p+k, and the response function χ(ω) = d_0^2/2ϵ_0ħ( 1/ω + ω_0 - 1/ω - ω_0), χ (ω± i0^+) = lim_η→ 0χ (ω± iη). In the off-resonant case ck≠ω_0, we may omit the ± i0^+ in the argument, resulting in equation eq:uvpk_MainText in the main text. The δ_,-term in (<ref>) represents the zeroth-order contribution of the unscattered field. For future convenience, let us also calculate the real part of the scatterer's coherent amplitude to leading order, which we obtain by taking the expectation value of (<ref>) with respect to |ψ^ in and using ^ in|ψ^ in = 0: β(t) + β^*(t) = ψ^ in| _H (t)|ψ^ in + c.c. = ∫_t_0^t dt' e^-iω_0(t-t')∑_[C_α_^*(t') - C_^* α_(t') ] + c.c. To leading order in d_0, we replace α_(t') ≈α_^ in e^-ickt', and once again, we can thus identify the field expectation value (<ref>) under the t'-integral and leverage (<ref>) to introduce the factor e^η t'. We are left with β(t) + β^*(t) = ∑_( 1/ick+iω_0 - 1/ick-iω_0) C_α_^ in* e^ickt + c.c. = 2ϵ_0ħ/i d_0^2∑_ [χ(ck-i0^+) C_α_^ in* e^ickt - c.c.] This yields the relation i d_0 _x ·_/√(2_0 ħ c p L^3)e^-i·_0/1+(a_0p)^2 [β(t) + β^*(t)] = e^-i·_0/L^3_x ·_/1+(a_0 p)^2∑__x ·_/1+(a_0k)^2[ χ(ck-i0^+)α_^ in* e^ickt - c.c. ], which will be useful when computing the coherent amplitudes in different gauge representations. Covariance Matrix With the time-evolved Heisenberg-picture mode operators at hand, we can not only evaluate the mean coherent amplitudes, but also the second moments, i.e., covariances. This is all we need here since the system remains Gaussian at all times due to the Gaussian initial state and the quadratic Hamiltonian. . The covariance matrix, contains all second-order cumulants between all combinations of the mode operators and their hermitean conjugates, reflects the vacuum properties of the state and does not depend on any of the coherent displacements. In the absence of coupling between the modes, it would simply reduce to the identity matrix. Hence, we can expand it perturbatively around the identity in the weak-coupling regime considered here. Let us, for the moment, introduce the shorthand notation _n with n=,b subsuming any of the mode operators _ or _b ≡. Given a (Gaussian) quantum state ρ(t) with mean displacements α_n (t) = [ρ (t) _n], the covariance matrix can be expressed as <cit.> σ = [ Ξ Υ; Υ^* Ξ^* ], with Ξ_nm = [ ρ(t) {_n - α_n (t), _m^ - α_m^* (t) }] = 2 ψ_ in | [_m,H^ (t) - α_m^* (t)][_n,H(t) - α_n (t)]|ψ_ in + δ_nm , Υ_nm = [ ρ(t) {_n - α_n (t), _m - α_m (t) }] = 2 ψ_ in | [_n,H(t) - α_n (t)][_m,H (t) - α_m (t)]|ψ_ in . Here, we only access the submatrices of the field degrees of freedom, (n,m)=(,). 
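As a quick numerical sanity check of these covariance-block definitions (our own illustration, not used anywhere in the derivation), the snippet below evaluates Ξ and Υ for a single bosonic mode prepared in a squeezed vacuum, where the exact values cosh 2r and sinh 2r are known; the truncated-Fock-space construction is generic and independent of the scattering problem.

import numpy as np
from scipy.linalg import expm

N = 40                                     # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
adag = a.conj().T

r = 0.3
S = expm(0.5 * r * (adag @ adag - a @ a))  # squeeze operator (unitary)
psi = S[:, 0]                              # squeezed vacuum S|0>

def ev(op):
    return np.vdot(psi, op @ psi)          # <psi| op |psi>

alpha = ev(a)                              # vanishes for a squeezed vacuum
Xi = 2 * ev(adag @ a) + 1                  # <{a - alpha, a^dag - alpha*}>, with alpha = 0
Upsilon = 2 * ev(a @ a)                    # <{a - alpha, a - alpha}>,      with alpha = 0

print(abs(alpha))                          # ~ 0
print(Xi.real, np.cosh(2 * r))             # both ~ cosh(2r)
print(Upsilon.real, np.sinh(2 * r))        # both ~ sinh(2r)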
Inserting the perturbative weak-coupling expansion (<ref>) of the mode operators, we arrive at Ξ_, = δ_, + 2 ψ_ in | _,H^(1)(t) _,H^(1)(t) | ψ_ in + 2 ψ_ in| [_,H^(2)(t) - α_^(2)*(t)] (_^ ine^icpt_0 - α_^ in)e^-icpt + (_^ in e^-ickt_0 - α_^ in *)e^ickt[_,H^(2)(t) - α_^(2)(t)] | ψ_ in, Υ_, = 2 ψ_ in | _,H^(1)(t) _,H^(1)(t) | ψ_ in + 2 ψ_ in| [_,H^(2)(t) - α_^(2)(t)] (_^ ine^icpt_0 - α_^ in)e^-icpt + (_^ ine^ickt_0 - α_^ in)e^-ickt[_,H^(2)(t) - α_^(2)(t)] | ψ_ in, which are both of second order in the weak coupling, i.e., valid up to 𝒪(d_0^4). Here, we have exploited that α_^(1)(t) = ψ_ in | _,H^(1) | ψ_ in = 0, because ψ_ in | ^ in | ψ_ in = 0. Moreover, since (_^ ine^ickt_0 - α_^ in)|ψ^ in = 0, it follows that the entire second line in (<ref>) vanishes, as well as the first half of the second line in (<ref>). Substituting (<ref>) and (<ref>), commuting the mode operators, and performing the remaining time integrals yields the explicit matrix elements Ξ_, - δ_, = 2 C_^* C_e^-iω_0(t-t_0) - e^ick(t-t_0)/-ick-iω_0e^iω_0(t-t_0) - e^-icp(t-t_0)/icp+iω_0 Υ_, = 2 C_ C_e^-iω_0(t-t_0) - e^-ick(t-t_0)/ick-iω_0e^iω_0(t-t_0) - e^-icp(t-t_0)/icp+iω_0, + 2 C_ C_[ 1/ick+iω_0( 1 - e^-ic(p+k)(t-t_0)/icp+ick - e^-iω_0(t-t_0)-ick(t-t_0) - e^-ic(p+k)(t-t_0)/icp-iω_0) . . - 1/ick-iω_0( 1 - e^-ic(p+k)(t-t_0)/icp+ick - e^iω_0(t-t_0)-ick(t-t_0) - e^-ic(p+k)(t-t_0)/icp+iω_0) ] In order to take the limit t_0 → -∞, note that in the end, the covariance matrices will be applied to coherent amplitude vectors representing pulses with a finite temporal width. In our case, we will have terms such as ∑_, [∂α_^* (t)/∂θ_j] Ξ_, [∂α_ (t)/∂θ_l], evaluated in the continuum limit and at finite t; see Supplementary Section <ref> below. Any contribution that oscillates with e^± ickt_0 or e^± icpt_0 will thus converge to zero as t_0 → -∞. The residual time-independent covariances representing the squeezed mode vacuum of the weakly coupled scatterer and light field are Ξ_, - δ_, = 2 C_/cp + ω_0C_^*/ck + ω_0, Υ_, = 2 C_/ω_0 - ckC_/cp + ω_0 - 2 C_ C_/cp + ck[ 1/ ck + ω_0 - 1/ck-ω_0] = - 2C_ C_/ck+cp( 1/cp+ω_0 + 1/ck+ω_0) . § QUANTUM FISHER INFORMATION Here we provide details on the calculation leading to the QFI matrix of the field state at a given time t with respect to the scatterer parameters . As the state is Gaussian, the QFI can be expressed in terms of mean displacements, covariances, and derivatives thereof with respect to the parameters. We will give the relevant expressions in the continuum limit, which we have used in our numerical evaluation of the QFI. Gaussian State QFI The quantum Fisher information matrix 𝒥 of a multimode Gaussian state with respect to some parameters [Eq. (GaussianQFI) in the main text] depends on both the vector of all coherent displacements (θ) and on the covariance matrix σ (θ) defined in (<ref>); see Ref. <cit.>, which also contains the explicit form of the here omitted vacuum contribution 𝒱. The latter depends neither on the displacements nor on time, and it has the same value regardless of whether any coherent light scattering occurs at all. The dominant contribution comes from the parameter sensitivity of the displacements (), compactly written as the bilinear form GaussianQFI in the main text, reflecting that this information about the scatterer is obtainable by measuring those coherent amplitudes and subsequently deducing an estimate for θ. 
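To fix ideas, the following toy Python sketch (ours; the amplitudes are stand-ins, not the scattered field of the main text) evaluates only the leading displacement contribution in the weak-coupling limit σ ≈ 𝕀 used below, 𝒥_jl ≈ 4 Re Σ_k (∂α_k/∂θ_j)^* (∂α_k/∂θ_l); with an amplitude that is linear in χ_0 it reproduces the relation 𝒥_00 ≈ 4N^sc/|χ_0|^2 quoted further down.

import numpy as np

def qfi_displacement(dalpha):
    """Leading-order Gaussian QFI from the coherent displacements only.
    dalpha has shape (n_params, n_modes) and holds d alpha_k / d theta_j."""
    return 4.0 * np.real(dalpha.conj() @ dalpha.T)

# toy amplitudes: alpha_k = chi0 * g(k) * exp(i k x0)  (NOT the paper's scattered field)
k = np.linspace(0.5, 1.5, 201)
g = np.exp(-((k - 1.0) / 0.1) ** 2)            # pulse envelope (illustrative)
chi0, x0 = 0.05, 0.3

alpha = chi0 * g * np.exp(1j * k * x0)
dalpha = np.array([g * np.exp(1j * k * x0),    # d alpha / d chi0
                   1j * k * alpha])            # d alpha / d x0

J = qfi_displacement(dalpha)
N_sc = np.sum(np.abs(alpha) ** 2)
print(J[0, 0], 4 * N_sc / chi0 ** 2)           # agree: J_00 = 4 N_sc / chi0^2
print(J)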
The expression also depends on the inverse of the covariance matrix σ, which we can approximate in the weak-coupling regime by expanding it around identity, Ξ Υ Υ^* Ξ^* ^-1≈𝕀 - Ξ - 𝕀 Υ Υ^* Ξ^* - 𝕀, which leads to the second line of GaussianQFI in the main text. Indeed, at vanishing scatterer-field coupling, we have Ξ→𝕀 and Υ→ 0, and the leading-order expansion spares us the effort of performing a numerical matrix inversion. Derivatives of the Amplitude To compute the QFI, we need to take derivatives of the coherent amplitudes (<ref>) with respect to the parameters of interest. This is tedious but not difficult; we will now state the essential steps in the continuum limit L→∞. The derivatives with respect to the scatterer's coordinates θ_j = _0 ·_j and the off-resonant polarizability θ_0 = χ_0 = d_0^2/ħϵ_0 ω_0 are ∂α_(t)/∂θ_j≠0|_ r_0=0 = ∑_[ ∂ u_,^*/∂θ_jα^ in_ + ∂ v_,/∂θ_jα_^ in*] →i√(p)_·_x /1+(a_0p)^2∫_0^∞dk/2π√(k)/1+(a_0k)^2[ χ( ck+i0^+)α^ in(k, t)/ k- p+i0^+ (p_j - kδ_jz) + χ(ck-i0^+)α^ in *(k, t)/ k+ p (p_j + kδ_jz) ] ∂α_(t)/∂θ_0|_ r_0=0 = ∑_[ ∂ u_,^*/∂θ_0α^ in_ + ∂ v_,/∂θ_0α_^ in *] → - √(p)_·_x/1+(a_0p)^2∫_0^∞dk/2π√(k)/1+(a_0k)^2[ ∂χ( ck+i0^+)/∂θ_0α^ in(k, t)/ k- p+i0^+ + ∂χ( ck-i0^+)/∂θ_0α^ in *(k, t)/ k+ p] Here, the derivatives are evaluated at the current reference position of the scatterer, _0=0. Also, we use that the incident light propagates along the z-direction and is x-polarized, α_^ in = α_k_z^ inδ_k_x 0δ_k_y 0δ_ 1 with _ 1 = _x and k>0. In the continuum limit, this translates to ∑_α_^ in→∑_ (L/2π) ∫ dk α_k_z^ inδ_ 1, and we define the amplitude density per unit area, α^ in (k,t) = (α_k_z^ in/L^2) e^-ickt, which is nonzero strictly only for k>0. If we use the parameterization _ 1 = ×_x/|×_x|, _ 2 = /p×_ 1, = p(cosϑ_p _z + cosφ_psinϑ_p_x + sinφ_psinϑ_p_y), the last factor turns into _·_x = -δ_μ 2√(sin^2ϑ_psin^2φ_p+cos^2ϑ_p), so that ∂α_(t)/∂θ_j≠ 0 = 1/i[ δ_jz f_1(p,t) + p_j/p f_2(p,t) ] δ_μ 2√(sin^2ϑ_psin^2φ_p+cos^2ϑ_p), ∂α_(t)/∂θ_0 = f_3(p,t) δ_μ 2√(sin^2ϑ_psin^2φ_p+cos^2ϑ_p). Herein, the f's abbreviate the frequency integrals f_1(p,t) = p^1/2/1+(a_0p)^2∫_0^∞dk/2πk^3/2/1+(a_0k)^2[ χ( ck-i0^+) α^ in *(k,t)/k+p - χ( ck+i0^+)α^ in(k,t)/k-p+i0^+], f_2(p,t) = p^3/2/1+(a_0p)^2∫_0^∞dk/2πk^1/2/1+(a_0k)^2[ χ( ck-i0^+) α^ in *(k,t)/k+p + χ( ck+i0^+) α^ in(k,t)/k-p+i0^+], f_3(p,t) = p^1/2/1+(a_0p)^2∫_0^∞dk/2πk^1/2/1+(a_0k)^2[ ∂χ(ck-i0^+)/∂χ_0α^ in *(k,t)/k+p + ∂χ(ck+i0^+)/∂χ_0α^ in(k,t)/k-p+i0^+]. Lastly, from (<ref>), we immediately obtain the remaining derivative of the response function, ∂χ(ν)/∂θ_0 = ∂χ(ν)/∂χ_0 = χ(ν)/χ_0≈ 1. The last step amounts to the off-resonance approximation χ(ν) ≈χ_0. It implies that ∂α_ (t)/∂θ_0 ≈ [α_ (t)-α_^ in]/χ_0. This is as far as we are able to go analytically. The frequency integrals f_1,2,3 must be evaluated numerically. We can now proceed to calculate the QFI, omitting the vacuum contribution 𝒱. We begin by explicitly expanding the bilinear form GaussianQFI in the main text: 2∂^*/∂θ_j·Ξ∂/∂θ_l + c.c. = 4Re∑_,∂α_^*/∂θ_j∂α_/∂θ_lΞ_,, 2∂^*/∂θ_j·Υ∂/∂θ_l + c.c. = 4Re∑_,∂α_^*/∂θ_j∂α_/∂θ_lΥ_, In the following, we calculate the above expressions for all combinations of indices j, l. Position Estimation For j, l ≠ 0, the estimation parameters are the position coordinates of the scatterer, θ_j = _0·_j and θ_l = _0·_l. Substitution of (<ref>), (<ref>) and (<ref>) into (<ref>) gives 2∂^*/∂θ_j·Ξ∂/∂θ_l + c.c. 
4Re[ ∫dΩ_p/(2π)^2 (sin^2ϑ_psin^2φ_p + cos^2ϑ_p) ×∫dp/2π p^2 {δ_j3 f_1(p,t) + p_j/p f_2(p,t) }^*{δ_l3 f_1(p,t) + p_l/p f_2(p,t) } + ∫dΩ_p dΩ_p'/(2π)^4 (sin^2ϑ_p'sin^2φ_p' + cos^2ϑ_p') (sin^2ϑ_psin^2φ_p + cos^2ϑ_p) ×∫dpdp'/(2π)^2 p'^2 p^2 {δ_j3 f_1(p',t) + p_j'/p f_2(p',t) }^* {δ_l3 f_1(p,t) + p_l/p f_2(p,t) }δΞ(p',p) ] 2∂^*/∂θ_j·Υ∂/∂θ_l + c.c. 4Re[ ∫dΩ_p dΩ_p'/(2π)^4 (sin^2ϑ_p'sin^2φ_p' + cos^2ϑ_p') (sin^2ϑ_psin^2φ_p + cos^2ϑ_p) ×∫dpdp'/(2π)^2 p'^2 p^2 {δ_j3 f_1(p',t) + p_j'/p' f_2(p',t) }^* {δ_l3 f_1(p,t) + p_l/p f_2(p,t) }^* Υ(p',p) ]. Recall that the p_j components in the curly brackets depend on the integration angle, c.f. (<ref>). The integrals simplify drastically, because any odd p_j-term will integrate to zero. In the first expression, only the summands with δ_j3δ_l3 or p_j^2 under the integral survive. In the δΞ and Υ expressions, all terms vanish except for the one with δ_j3δ_l3. The angular integrals over the remaining terms can be done analytically, leaving only the frequency integrals. Putting everything together, we have 𝒥_11(t) - 𝒱_11 = 8/15π∫_0^∞dp/2π p^2 | f_2(p,t) |^2 = 𝒥_22(t) - 𝒱_22/2, 𝒥_33(t) - 𝒱_33 = 2(𝒥_11(t) - 𝒱_11) + 8/3π∫_0^∞dp/2π p^2 | f_1(p,t)|^2 + 8/9π^2∫_0^∞dp'dp/(2π)^2 p'^2 p^2 ( f_1^*(p',t)f_1^*(p,t) Υ(p',p) - f_1^*(p',t)f_1(p,t) δΞ(p',p) + c.c. ), and 𝒥_jl - 𝒱_jl = 0 for j ≠ l. The three diagonal entries, which represent the Cramér-Rao precision bounds for estimating x_0,y_0,z_0, are plotted in Fig. <ref> for two scatterer sizes. Panel (e) corresponds to Fig. fig:QFI(a) in the main text. Polarizability Estimation For j,l=0, the estimation parameter is the weak-coupling polarizability, θ_0 = χ_0. Substituting (<ref>), (<ref>) and (<ref>) into (<ref>), and performing a calculation similar to the one above, we obtain the diagonal entry of the QFI matrix corresponding to polarizability estimation: 𝒥_00 (t) - 𝒱_00 = 4 ∫_0^∞dp/3π^2 p^2 {| f_3(p,t)|^2 - ∫_0^∞dp'/6π^2 p'^2 [f_3^*(p',t)f_3^*(p,t) Υ(p',p) + f_3^*(p',t)f_3(p,t) δΞ(p',p) + c.c. ] } . This is the quantity shown in Fig. fig:QFI(b) in the main text. At weak, off-resonant coupling, the covariance matrix terms δΞ and Υ will give only minor contributions to the QFI, as confirmed by our numerical assessment. Neglecting them, we can approximate 𝒥_00 (t) - 𝒱_00≈ 4 ∑_|∂α_ (t)/∂θ_0|^2 ≈4/|χ_0|^2∑_| α_(t) - α_^ in|^2 ≡4 N^ sc (t)/|χ_0|^2, where N^ sc (t) is the number of photons in the scattered field at a given time t. In other words, the QCRB for polarizability estimation, Δχ_0 /χ_0 = 1/2√(N^ sc) from Eq. FarFieldQFI in the main text, is not only valid in the far field, but for any point in time during the scattering process. Position–Polarizability Covariance The only remaining entries are off-diagonal ones with j=0, l>0. Substituting (<ref>), (<ref>), (<ref>) and (<ref>) in (<ref>) gives vanishing 𝒥_01(t) - 𝒱_01 = 0 and 𝒥_02(t) - 𝒱_02 = 0, while 𝒥_03(t) - 𝒱_03 = 2i ∫_0^∞dp/3π^2 p^2 { f_1(p,t)f_3^*(p,t) + ∫_0^∞dp'/3π^2 p'^2 [ f_1^*(p',t)f_3^*(p,t) Υ(p',p) + f_1^*(p',t)f_3(p,t) δΞ(p',p)] } + c.c. Numerical Methods The integrals in (<ref>), (<ref>), and (<ref>) were computed by discretizing wave number (frequency) space based on p_n = d sinh(Δ[n - n_0]) + k_0, with n an integer ranging from 0 to 𝒩, and n_0 chosen such that p_-1 < 0 ≤ p_0. This parameterization ensures that the resolution becomes coarser as one moves away from k_0, the wave number the incident wave packet is centered on. The scaling parameters were chosen as d/q = 2.5e-3 and Δ = 3.8e-2. 
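Before fixing the cutoff index below, here is a short Python sketch (our own, not the code used for the figures) of the two numerical ingredients: the sinh-stretched frequency grid with the parameters quoted above, and a principal-value integral handled with the QUADPACK Cauchy-weight routine, with the delta-function part of the Sokhotski–Plemelj identity added analytically; the integrand is a smooth stand-in rather than one of the f_i.

import numpy as np
from scipy.integrate import quad

# (i) sinh-spaced wave-number grid, densest around k0
q = 1.0                                   # unit of wave number
k0, d, Delta = q, 2.5e-3 * q, 3.8e-2
n0 = int(np.floor(np.arcsinh(k0 / d) / Delta))                 # so that p_{-1} < 0 <= p_0
Nmax = n0 + int(np.ceil(np.arcsinh(1.1e3 * q / d) / Delta))    # so that p_N ~ 1.1e3 q
n = np.arange(0, Nmax + 1)
p = d * np.sinh(Delta * (n - n0)) + k0
print(p[0], p[-1])                        # ~0 and ~1.1e3 q

# (ii) Sokhotski-Plemelj: 1/(k - p + i0+) = PV 1/(k - p) - i*pi*delta(k - p)
def sp_integral(f, pole, kmax):
    pv, _ = quad(f, 0.0, kmax, weight='cauchy', wvar=pole)     # QUADPACK Cauchy-PV routine
    return pv - 1j * np.pi * f(pole)

f = lambda k: np.exp(-(k - 1.0) ** 2)     # smooth stand-in integrand
print(sp_integral(f, pole=1.0, kmax=50.0))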
The maximal index 𝒩 was chosen so p_𝒩 = 1.1e+3q ≫ 2π/a_0, ensuring that the hard cutoff imposed by the constraint n < 𝒩 has no effect. The functions f_i(p) defined in (<ref>) were calculated using the Sokhotski–Plemelj theorem, 1/ν + i0+ = 𝒫1/ν - iδ(ν). 𝒫 denotes the Cauchy principal value. The δ(ν) term was evaluated analytically in (<ref>), while the principal value integral was computed using the QUADPACK routine <cit.>. Our numerical evaluation could show that, in the off-resonant weak-coupling regime we consider here, the covariance matrix terms δΞ, Υ≠ 0 describing the squeezing of the mode vacuum due to the presence of the scatterer give rise to merely negligible corrections to the QFI. Curiously, these corrections only appear in 𝒥_03, 𝒥_00 and 𝒥_33; we plot them for our parameter settings in Fig. <ref>. A quick comparison to the values in Fig. fig:QFI of the main text and in Fig. <ref> confirms that the corrections are indeed negligible. Asymptotics Finally, let us remark on the asymptotic behavior of the QFI at large k_M = 1/a_0. The dominant integrals in (<ref>), (<ref>) and (<ref>) have the form 𝒥_jl - 𝒱_jl = ∫_0^∞ dp p^2 [1+(a_0p)^2]^-2ℱ_jl(p), where ℱ_jl(p) is a smooth function of p. If ℱ_jl(p) ∼ p^n for large p, then the integral will scale like 1/a_0^n+3 for small a_0, as can be observed by performing a change of variables p → a_0 p. Using that f_1 ∼ p^-1/2, f_2 ∼ p^1/2, f_3 ∼ p^-1/2, we can conclude 𝒥_00 - 𝒱_00∼ (/a_0)^2 for polarizability estimation, 𝒥_jj - 𝒱_jj∼ (/a_0)^4 for position estimation (j=1,2,3), and 𝒥_03 - 𝒱_03∼ (/a_0)^3 for the non-zero off-diagonal term. Here, is the characteristic wavelength of the incident light pulse. § FAR FIELD AND SCATTERING CROSS SECTION Here, we derive the far-field scattering amplitudes FarFieldAmplitude in the main text, as well as the associated scattering cross section and total number of scattered photons N^ sc. The latter serves as a normalization constant in the quantum and classical Cramér-Rao bounds on the estimation precision of the scatterer's parameters. We follow along the lines of the full scattering calculation in Supplementary Section <ref>. Rather than integrating the Heisenberg equations of motion to a finite time t as in (<ref>) and (<ref>), we now integrate to t →∞. Specifically, we let t →∞ in the scattering contribution to the coherent amplitudes (<ref>): α_^(2)(t) = C_∫_t_0^∞ dt' e^-icp(t-t')∫_t_0^t' dt”(e^-iω_0(t'-t”) - c.c.) ∑_[ C_α_^ in * e^ick t” - h.c. ]. Again, let h(t”) be the integrand function as defined in (<ref>), which due to (<ref>) describes a finite-time pulse at the scatterer position. Analogously to the auxiliary limit construction (<ref>), we now claim ζ = lim_η→ 0lim_t_0→ - ∞∫_t_0^t' dt” e^-η |t”| h(t”), with uniform convergence in η>0 over t' ∈ℝ. Notice the absolute value |t”| in the exponent, which differs from the previous claim (<ref>). Nevertheless, the proof proceeds similarly to (<ref>) by means of the Cauchy-Schwartz and the triangle inequality: | ζ - ∫_-∞^t' dt” e^η t” h(t”) | ≤| ζ - ∫_T^t' dt” h(t”) | + | ∫_-∞^T dt” e^-η |t”| h(t”) | + | ∫_T^t' dt” (1-e^-η |t”|) h(t”) | ≤| ζ - ∫_T^t' dt” h(t”) | + | ∫_-∞^T dt” h(t”) | + |1-e^-η |T|| | ∫_T^t' dt” h(t”) |, where T<t' is again arbitrary. In the same manner as before, T and η can be chosen such that the left hand side is upper-bounded by ϵ>0, which completes the proof. We now substitute the construction (<ref>) into (<ref>) and perform the integrals. 
This results in the linear expansion α_^ out≡lim_t→∞α_ (t) e^icpt = ∑_[ u_,^ outα^ in_ + v_,^ outα^ in†_] with the coefficients u_,^ out = √(kp)/L^3(_x ·_) (_x ·_) e^i(-)·_0/[1+(a_0 k)^2][1+(a_0 p)^2][1/p-k-i0^+ - 1/p-k+i0^+_= 2π iδ(p-k)] χ(ck+i0^+) + δ_,, v_,^ out = 0. The response function χ(ω) is defined in (<ref>). The energy condition k=p ensures that all scattered field components remain far red-detuned with respect to the scatterer and the high-frequency cutoff, cp≪ω_0 and pa_0 ≪ 1. Hence, we can set χ(ck+i0^+) = χ(ck), neglect the a_0-terms in the denominator and carry out the continuum limit, which results in the scattering amplitudes FarFieldAmplitude in the main text. For an incident light pulse along z, α_^ in = L^2 α^ in (k) δ_, 1δ_k_x,0δ_k_y,0, with polarization _ = _x and amplitude density per unit area α^ in (k) = α_k_z^ in/L^2, the total number of incident photons per unit area is Φ = (1/L^2) ∑_ |α_^ in|^2 → (L^3/2π) ∫ dk |α^ in (k)|^2. With the above approximations, the scattering amplitudes are simply α_^ out - α_^ in = ip χ (cp) (_x·_) e^i(p_z-)·_0α^ in (p), from which we obtain the total number of scattered photons, N^ sc = ∑_ |α_^ out - α_^ in|^2 →(L/2π)^3 ∫ d^3p | pχ (cp) α^ in (p) e^i(p_z - )·_0|^2 ∑_μ (_x·_)^2 = (L/2π)^3 ∫_0^∞ dp p^4 | χ (cp) α^ in (p)|^2 ∫ d Ω_[1-(_x·_)^2] ≈2/3π^4 |χ_0|^2 Φ≡σ_ totΦ. In the final step, we have used that χ (cp) ≈χ_0 and p≈ for sufficiently narrow-band off-resonant incident light, which results in the total scattering cross section σ_ tot = 2^4|χ_0|^2/3π. Notice that our definition of the response χ_0 corresponds to a polarizability of 2ϵ_0 χ_0 in SI-units, which yields the well-known dipole scattering cross section. § FIELD EXPECTATION VALUES Here we verify the agreement between the phenomenological dipole radiation fields scattered_field in the main text and the expectation values of the physical fields resulting from our quantum scattering model for any distance ρ>0 from the scatterer. To this end, we will evaluate the expectation values of the transverse field variables in the multipolar PZW gauge from the exact time-evolved expressions for the coherent mode amplitudes, as stated in the main text and derived in Supplementary Section <ref>. We will carry out the calculation for a regularized dipole assuming ρ≫ a_0 and perform the point dipole limit a_0 → 0 in the end. The quantum operators of the transverse vector potential _T () and its (gauge-dependent) conjugate () at the position = _0 + of a detector pixel are expanded in terms of the plane-wave mode operators _ in (<ref>). The time-dependent expectation values of the latter are the coherent amplitudes α_ (t) in (<ref>) which, after inserting the expansion coefficients (<ref>) and (<ref>), can be split into an incident amplitude α_^ in e^-icpt and a scattered amplitude, α_^ sc (t) = α_ (t) - α_^ in e^-icpt. For consistency with the phenomenological setting, we shall now assume α_^ in = α^ inδ_,, corresponding to stationary off-resonant illumination by a single mode of wave vector = _z, c < ω, and polarization _ = _x. Hence, the scattered amplitude simplifies to α_^ sc (t) = √( p)/L^3 e^-i·_0_·_x/1+(a_0 p)^2χ_0/1+(a_0 )^2[ α^ in/p- -i0^+ e^i(z_0-ct) - α^ in*/p+ e^-i(z_0-ct)], with the real-valued off-resonant polarizability χ_0 ≡χ (c). Accordingly, the mean transverse vector potential _T () _t splits into the incident _T^ in (,t) =A^ in_x e^i(z - ct) + c.c., with A^ in = α^ in√(ħ/2ϵ_0 c L^3), and the scattering component _T^ sc (,t). 
In order to obtain the physical fields, we focus our attention on the conjugate, ()_t = ∂_t _T^ in (,t) + Π^ sc (,t), which in the PZW gauge and away from the scatterer represents the negative electric field. The scattering contribution is Π^ sc (,t) = -i√(ħ c/2ϵ_0 L^3)∑_√(p)_ e^i·α_^ sc (t) + c.c. -i c χ_0 /1+(a_0 )^2∫d^3 p/(2π)^3p e^i·/1+(a_0 p)^2[ A^ ine^i(z_0-ct)/p--i0^+ - A^ in*e^-i(z_0-ct)/p+] ∑_μ_ (_·_x) + c.c. = -i c χ_0 /1+(a_0 )^2∫_0^∞d p/2π^2p^3 /1+(a_0 p)^2[ A^ ine^i(z_0-ct)/p--i0^+ - A^ in*e^-i(z_0-ct)/p+] ∫dΩ/4π e^i·∑_μ_ (_·_x)_≡ (pρ) + c.c. =-i c χ_0 /1+(a_0 )^2∫_-∞^∞d p/2π^2p^3 (p) /1+(a_0 p)^2[ A^ ine^i(z_0-ct)/p--i0^+ - A^ in*e^-i(z_0-ct)/p+-i0^+] = -i c χ_0 A^ in/1+(a_0 )^2 e^i(z_0-ct)∫_-∞^∞d p/2π^2p^3 (p) /1+(a_0 p)^21/p--i0^+ + c.c. = -^ sc (,t) . From the third to the fourth line, the complex conjugate is absorbed by extending the p-integral to -∞. Recalling that the _ are two basis vectors orthogonal to _ = /p, we have ∑_μ_ (_x·_) = _x - _ (_·_x). Let us now define the solid angle with respect to the polar axis _ = /ρ and the two azimuthal axes _1,_2, such that _ = cosϑ_ + sinϑ (cosφ_1 + sinφ_2) and _x = cosγ_ + sinγ_1. The angular part of the integral then simplifies as (p) = ∫dΩ/4π e^·[ _x - _ (_·_x)] = ∫dcosϑ d φ/4π e^pρcosϑ[ cosγ (1-cos^2ϑ)_ + sinγ (1-sin^2ϑcos^2φ)_1 ] = (_×_x) ×_sin pρ/pρ + [ _x - 3_ (_x ·_ ) ] pρcos pρ - sin pρ/(pρ)^3 , which assumes a finite value also at p=0. Analogously, the magnetic field is ^ sc(, t) = ∇×_T(, t) = i√(ħ/2ϵ_0 c L^3)∑_√(p)_×_ e^i·α_^ sc (t) + c.c. i χ_0 A^ in/1+(a_0 )^2 e^i(z_0-ct)∫_-∞^∞d p/2π^2p^2 ∇×(p) /1+(a_0 p)^21/p--i0^+ + c.c. The curl of is ∇×(p) = ∫dΩ/4π e^·[ ×_x ] = p∫dcosϑ d φ/4π e^pρcosϑsinγcosϑ _2 = p _×_x sin pρ - pρcos pρ/(pρ)^2. We can now carry out the remaining p-integrals in (<ref>) and (<ref>) with the help of the residue theorem. To this end, we must express sin pρ = (e^ipρ-e^-ipρ)/2i and cos pρ = (e^ipρ+e^-ipρ)/2 in . Since ρ>0, the integration contour must be closed in the complex upper half-plane for the e^ipρ terms and in the lower half-plane for the e^-ipρ terms. Writing i0^+ = iη in terms of an infinitesimal η>0, the integrand has a pole at p=+iη in the upper half-plane, while the regularisation factor contributes two additional poles at p = ± i/a_0. We arrive at ∫_-∞^∞d p/2π^2p^3 (p) /1+(a_0 p)^21/p--i0^+ = ^3/2π[1+(a_0)^2]{(e^iρ+e^-ρ/a_0/( a_0)^2) (_×_x) ×_/ρ. + . [ iρ(e^iρ-ie^-ρ/a_0/ a_0) - (e^iρ-e^-ρ/a_0) ] _x - 3_ (_x ·_ ) /(ρ)^3}, ∫_-∞^∞d p/2π^2p^2 ∇×(p) /1+(a_0 p)^21/p--i0^+ = ^3/2π[1+(a_0)^2][ (e^iρ-e^-ρ/a_0) - iρ(e^iρ-ie^-ρ/a_0/ a_0) ] _×_x/(ρ)^2 Inserting this into (<ref>) and (<ref>) yields explicit expressions for the regularized scattering fields at distances ρ≫ a_0 away from the scatterer: ^ sc (,t) = -Π^ sc(, t) = i c ^4 χ_0 A^ in/2π[1+(a_0 )^2] e^i(ρ+z_0-ct){(e^iρ+e^-ρ/a_0/( a_0)^2)(_×_x) ×_/ρ. + . [ iρ(e^iρ-ie^-ρ/a_0/ a_0) - (e^iρ-e^-ρ/a_0) ] _x - 3_ (_x ·_ ) /(ρ)^3}, ^ sc (,t) = i ^4 χ_0 A^ in/ 2π [1+(a_0 )^2] e^i(z_0-ct){(e^iρ-e^-ρ/a_0) - iρ(e^iρ-ie^-ρ/a_0/ a_0)}_×_x/(ρ)^2 . If we set A^ in≡ -i E^ in/c and go back to the ideal point dipole case, a_0 → 0, we retrieve the phenomenological expressions scattered_field from the main text. § GAUGE RELATIVITY OF THE QUANTUM FISHER INFORMATION In the main text, we evaluated the information content of the quantum state of the transverse light field at a given time t about the dipole scatterer polarizability and position, = (χ_0,_0), as measured by the quantum Fisher information (QFI) matrix 𝒥 (,t). 
Here we argue that this QFI is in general not invariant under the choice of electromagnetic gauge. However, for a standard dipole detector model, there is a preferred gauge—the multipolar PZW gauge we assume in the main text—in which the state of the transverse field degrees of freedom captures all the detectable information the scatterer transmits to the field. The QFI in this gauge is thus optimal compared to that in other gauges, assuming the same detector model. Unitary invariance of the QFI In order to understand the gauge relativity of the QFI matrix, let us briefly recap its formal definition first. Consider a quantum state (initially ϱ(t_0)) that picks up information about some unknown parameters through its time evolution under a Hamiltonian (), ϱ(t,) = e^-i()(t-t_0)/ħϱ (t_0) e^i()(t-t_0)/ħ. Consider further a quantum measurement, given by a set of POVM operators {_ζ} with outcomes ζ, by which we access the quantum state. The associated likelihood of the measurement outcomes, p(ζ|) = [_ζ^_ζϱ(t,)], encodes the accessible information about , and this information content is quantified in terms of the Fisher information matrix, ℐ_jℓ () = ∑_ζ p(ζ|) ∂ln p(ζ|)/∂θ_j∂ln p(ζ|)/∂θ_ℓ. It sets the Cramér-Rao bound (CRB) on the lowest achievable uncertainty for unbiased estimates of based on asymptotically repetitions of the state preparation and measurement protocol. Optimizing the extractable -information over all possible POVMs results in the QFI matrix 𝒥≥ℐ. It follows immediately that the QFI matrix is unchanged under unitary transformations, ϱ (t,) →ϱ (t,) ^, which could be used to change the frame or representation of the quantum system. Crucially, this assumes that the unitaries themselves do not depend on the parameters to be estimated. Gauge transformations Here, the quantum system is a dipole scatterer (modeled as a harmonic oscillator) interacting with the electromagnetic radiation field, and the initial state at t_0 → -∞ describes an incident coherent pulse of probe light and the scatterer in its ground state. However, the exact representation of the quantum state ϱ_g and the Hamiltonian _g() of scatterer and field depends on the chosen electromagnetic gauge g <cit.>. One typically starts with the minimal coupling Hamiltonian in the Coulomb gauge g' and then switches to a more convenient gauge g by means of a unitary gauge fixing transformation _gg'. The quantum state transforms as ϱ_g'→ϱ_g = _gg'ϱ_g'_gg'^. Gauge relativity of the QFI in our setting can be attributed to two problems. Firstly, the most expedient gauges in the case of a dipole scatterer depend on its position _0. In particular, the multipolar PZW gauge, for which the scatterer-field interaction reduces to the (regularized) textbook form (<ref>) of a dipole Hamiltonian, is fixed by _gg' = exp[ - i/ħ1/L^3∑_q _e·_T/1+(a_0k)^2 e^-i·_0] . Since it is explicitly determined by the parameters _0 that we seek to estimate, we cannot expect the same QFI for ϱ_g and ϱ_g'. Secondly, we have no direct access to the state of the scatterer here, but only to the field through photodetection. Hence, the relevant QFI in our setting is that of the reduced state of the (transverse) field degrees of freedom. Given that gauge fixing transformations, whether they depend on _0 or not, may correlate and exchange information between the scatterer and the field, the QFI of the reduced field state may change, too. 
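A toy numerical illustration of this point (ours, detached from the actual scattering problem) is given below: taking only the leading displacement part of the QFI, J = 4 Σ_k |∂α_k/∂θ|^2, a parameter-independent unitary acting on the mode amplitudes leaves J unchanged, whereas a θ-dependent unitary (the analogue of the position-dependent gauge-fixing transformation) does not.

import numpy as np

rng = np.random.default_rng(0)
n = 6
amp = rng.normal(size=n) + 1j * rng.normal(size=n)
alpha = lambda th: amp * np.exp(1j * th * np.arange(1, n + 1))   # toy amplitude model

def qfi(alpha_func, th, eps=1e-6):
    """Leading-order displacement QFI, J = 4 * sum_k |d alpha_k / d theta|^2."""
    d = (alpha_func(th + eps) - alpha_func(th - eps)) / (2 * eps)
    return 4.0 * np.sum(np.abs(d) ** 2)

# theta-independent unitary: QFI unchanged
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
print(qfi(alpha, 0.2), qfi(lambda th: U @ alpha(th), 0.2))       # equal

# theta-dependent unitary (analogue of a parameter-dependent gauge fixing): QFI changes
V = lambda th: np.diag(np.exp(1j * th * np.arange(n) ** 2))
print(qfi(lambda th: V(th) @ alpha(th), 0.2))                    # differs in general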
PZW versus Coulomb gauge As an illustration of the gauge relativity, we compare the PZW gauge employed in this work with the Coulomb gauge. Once a gauge g is fixed, the transverse field excitations are quantized by expanding the (gauge-invariant) transverse vector potential _T () and its (gauge-variant) canonical conjugate _g() in a chosen mode basis and taking the expansion coefficients as the 'position' and 'momentum' quadratures. Here, we quantize the free-space field in the basis of plane waves, according to the rule (<ref>) in the PZW gauge, and the resulting photon degrees of freedom are represented by the ladder operators _,g = √(ϵ_0/2ħ)( √(ck)_T + i/√( ck)Π̂_,g) ·_, _T = ∫_L^3d^3 x/√(L^3)_T () e^-i·, _,g = ∫_L^3d^3 x/√(L^3)_g () e^-i·. However, the meaning of a photon differs from gauge to gauge. The photon annihilation operator _≡_,g we are using in the PZW gauge transforms under (<ref>) back into the Coulomb gauge as _' = _gg'^__gg' = _ - i q _e ·_/√(2_0 ħ c k L^3)e^-i·_0/1+(ka_0)^2 = _ - i d_0 _x ·_/√(2_0 ħ c k L^3)e^-i·_0/1+(ka_0)^2 ( + ^). This operator represents the same expectation value, [_' ϱ_g'] = [_ϱ_g], but it now acts on both the field and the scatterer subsystem. On the other hand, the average coherent amplitude for the respective Coulomb-gauge photon mode would be α_' ≡ [ _ϱ_g'] = α_ - i d_0 _x ·_/√(2_0 ħ c k L^3)e^-i·_0/1+(ka_0)^2 (β + β^*), where α_ and β are the coherent amplitudes of the field mode and the scatterer in PZW gauge. The Coulomb-gauge amplitudes are also linear combinations of the incident α_^' in = α_^ in. By expanding the operators in (<ref>) according to (<ref>), and with help of identity (<ref>), we find that their expansion coefficients can be given in terms of the PZW-gauge coefficients simply by u_,' = -(k/p)u_, and v_,' = -(k/p)v_,. Similarly, we can use the transformation rule (<ref>) to calculate the covariance matrix blocks Ξ_g',Υ_g' of the field degrees of freedom in the Coulomb gauge, as well as the derivatives with respect to the parameters as in Supplementary Section <ref>. This allows us to re-evaluate for our scattering problem the QFI in the reduced field state, as seen from the Coulomb gauge. We do not repeat the full calculation, since it proceeds along the same lines as (<ref>)-(<ref>). We simply state the relevant frequency integrals f_1,2,3', which differ from the f_1,2,3 in (<ref>) by a factor k/p inside the k-integral, and by an overall sign: f_1'(p,t) = p^-1/2/1+(a_0p)^2∫_0^∞dk/2πk^5/2/1+(a_0k)^2[ -χ( ck-i0^+) α^ in *(k,t)/k+p + χ( ck+i0^+)α^ in(k,t)/k-p+i0^+], f_2'(p,t) = p^1/2/1+(a_0p)^2∫_0^∞dk/2πk^3/2/1+(a_0k)^2[ -χ( ck-i0^+) α^ in *(k,t)/k+p - χ( ck+i0^+) α^ in(k,t)/k-p+i0^+], f_3'(p,t) = p^-1/2/1+(a_0p)^2∫_0^∞dk/2πk^3/2/1+(a_0k)^2[ -∂χ(ck-i0^+)/∂χ_0α^ in *(k,t)/k+p - ∂χ(ck+i0^+)/∂χ_0α^ in(k,t)/k-p+i0^+]. Figure <ref> plots an exemplary comparison of the QFI matrix elements in the PZW (purple line) and the Coulomb gauge (green) as a function of time, associated to the parameters (a) θ_1 = x_0 and (b) θ_0 = χ_0. The purple lines match those of Fig. fig:QFI in the main text, which uses the same settings. In both gauges, the QFI oscillates twice per optical cycle. While the overall buildup over time can be observed in both gauges, with the same asymptotic far-field value, the transient near-field values in the PZW gauge clearly exceed those of the Coulomb gauge. In Figure <ref>, we plot the corresponding peak values of the QFI when the probe pulse hits the scatterer at t=0, as a function of the scatterer size a_0. 
In the PZW gauge, the peak QFI for (a) x_0 and (b) χ_0 diverges like the fourth and the second power of /a_0, respectively. In the Coulomb gauge, on the other hand, we find a divergence with only the second power in (a), and a saturation in (b). Clearly, the transverse field degrees of freedom in the PZW gauge learn more about the scatterer than in the Coulomb gauge. In the following, we will argue why the quantum state of the transverse field in the PZW gauge carries the most information about the scatterer to a photodetector, as compared to the Coulomb or other intermediate gauges <cit.>. Scatterer-detector interaction in the PZW gauge Ultimately, physical observables must always be gauge-invariant, and so must be parameter estimation based on them. This calls for a physical model for photodetection, determining which exact POVM measurement it represents in a given gauge. (Working out a realistic detector model is beyond the scope of this work.) At the same time, the purpose of the QFI is to provide an upper bound on how much information about certain parameters can at best be extracted from a quantum state by any measurement. We now show that the PZW gauge should give the greatest upper bound on the information that can possibly be extracted from photodetectors—when they are also modeled in the usual manner as (regularized) dipoles. To this end, consider multiple dipoles described by bound quantum charges q_c≡ q with canonical coordinates (_c,_c), oscillating around the respective equilibrium positions _0,c. We shall label by c=0 our scatterer of interest, (_0,_0) ≡ (_e,_e) and _0,0≡_0, while c>0 represent the detector dipoles. The full Hamiltonian of all these dipoles and the field in the PZW-gauge is then the multiparticle equivalent of (<ref>): Ĥ_ tot = ∑_c {[_c - q(_c+_0,c)]^2/2m + U_c(_c) } + V_ self + q^2/4π_0∑_c<d[ _c·_d/ρ_cd^3 - 3(_c·_cd)(_d·_cd)/ρ_cd^5] + _0/2∫ d^3 x [ ( + 1/_0_T )^2 + c^2(∇×_T)^2 ]. Here, V_ self subsumes all (infinite) self-interaction terms. The U_c in the first line represent the individual trapping potentials of each bound charge while the second line corresponds to the static dipole-dipole interaction between pairs of them, denoting the distance as _cd = _0,c-_0,d. The third line is the contribution of the transverse field, which includes the dipole-field coupling through the transverse polarization. In the regularized dipole approximation analogous to (<ref>), using the same length scale a_0 for all dipoles, the Fourier components of the transverse polarization read as _T = -q/1+(a_0k)^2∑_c e^-i·_0,c∑__ (_·_c). Consistently, we can also approximate (_c+_0,c) ≈ 0, reducing the Hamiltonian to Ĥ_ tot = ∑_c Ĥ_c + V_ self + Ĥ_I + _0/2∫ d^3 x [^2 + c^2 (∇×_T)^2], where _c = _c^2/2m + U_c(_c). All dipole-dipole and dipole-field interaction terms are subsumed under Ĥ_I = q^2/4π_0∑_c<d[ _c·_d/ρ_cd^3 - 3(_c·_cd)(_d·_cd)/ρ_cd^5] + 1/2∫ d^3 x [ 2_T · + 1/_0_T^2 ]. In the second line, we have the coupling between the dipoles and the transverse field quadrature as well as another dipole-dipole coupling term. Using Parseval's identity to express the latter term in Fourier space in the continuum limit, 1/_0∫ d^3 x _T^2() = 1/_0 L^3∑_∑_i=1^3 |_i·_T|^2 1/_0∫d^3k/(2π)^3∑_i=1^3 |_i·_T|^2 we can plug in (<ref>) to get ∑_i=1^3 |_i·_T|^2 = q^2∑_i=1^3 ∑_c,d e^i·_cd ×∑_,μ=1,2(_i·_)(_i·_μ)(_c·_)(_d·_μ)/[1+(k a_0)^2]^2 = q^2 ∑_c,d e^i·_cd∑_=1,2(_c·_)(_d·_)/[1+(k a_0)^2]^2. Recall that the _ are two field polarization unit vectors orthogonal to . 
We now bring in the assumption that the extent a_0 of the dipoles is much smaller than the distances ρ_c≠ d between them. Therefore, the exponential e^i·_c≠ d oscillates very rapidly in the relevant k-values at which the regularizing term k a_0 becomes appreciable, and we can neglect the latter. Substituting back into (<ref>) and omitting the self-interaction terms c=d, we obtain 1/2_0∫ d^3 x P̂_T^2() = q^2/_0∑_c<d∫d^3k/(2π)^3 e^i·_cd∑_=1,2 (_c·_)(_d·_) = -q^2/4π_0∑_c<d[ _c·_d/ρ_cd^3 - 3(_c·_cd)(_d·_cd)/ρ_cd^5] . This is exactly the negative of the dipole-dipole potential between the bound charges c and d, and cancels with the first line in (<ref>). We are left with an interaction solely mediated by the transverse field, Ĥ_I = ∫ d^3x ()·_T(), which we can evaluate by expanding in plane-wave modes and modeling the charges as harmonic oscillators as we did in Supplementary Section <ref>. The result proves that there is (to a good approximation) no direct dipole-dipole interaction between the scatterer and the detector in the PZW gauge—a distinguishing feature compared to the Coulomb or other intermediate gauges. All the information that the scatterer broadcasts into its surroundings is transmitted to the detector dipoles via the transverse field, and thus captured by the QFI of the reduced field state in this gauge. (In the Coulomb gauge, the longitudinal field carries part of the near-field information, thus depleting the QFI of the transverse field state.) We remark that, if the assumption ρ_cd≫ a_0 does not hold, we cannot approximate the regularizing denominator in (<ref>) by unity, and (<ref>) is no longer valid. Hence, the QFI of the reduced field state only characterizes measurements made by detectors that do not overlap with the scatterer region of radius a_0. That is to say, detection schemes in such close vicinity are not subject to the quantum Cramér-Rao bound evaluated here.
http://arxiv.org/abs/2307.01513v1
20230704064925
Automated design of relocation rules for minimising energy consumption in the container relocation problem
[ "Marko Đurasević", "Mateja Đumić", "Rebeka Čorić", "Francisco Javier Gil-Gala" ]
cs.NE
[ "cs.NE" ]
University of Zagreb, Faculty of Electrical Engineering and Computing Zagreb Croatia [email protected] Department of Mathematics, Josip Juraj Strossmayer University of Osijek Osijek Croatia [email protected] Department of Mathematics, Josip Juraj Strossmayer University of Osijek Osijek Croatia [email protected] University of Oviedo. Department of Computing Gijón Spain [email protected] The container relocation problem is a combinatorial optimisation problem aimed at finding a sequence of container relocations to retrieve all containers in a predetermined order by minimising a given objective. Relocation rules (RRs), which consist of a priority function and relocation scheme, are heuristics commonly used for solving the mentioned problem due to their flexibility and efficiency. Recently, in many real-world problems it is becoming increasingly important to consider energy consumption. However, for this variant no RRs exist and would need to be designed manually. One possibility to circumvent this issue is by applying hyperheuristics to automatically design new RRs. In this study we use genetic programming to obtain priority functions used in RRs whose goal is to minimise energy consumption. We compare the proposed approach with a genetic algorithm from the literature used to design the priority function. The results obtained demonstrate that the RRs designed by genetic programming achieve the best performance. Automated design of relocation rules for minimising energy consumption in the container relocation problem Francisco J. Gil-Gala ========================================================================================================== § INTRODUCTION The container relocation problem (CRP), is a combinatorial optimisation problem with applications in warehouse and yard management <cit.>. Due to the limited space, containers are usually stacked one atop another and/or side by side. This way blocks are formed that have stacks (width), a number of tiers (height), and a number of bays (length). The objective is to retrieve and load all containers from the yard in a predetermined order. However, a container can be retrieved only if it is located on the top of its stack. If there are containers on top of the one that needs to be retrieved, they first need to be relocated to other stacks. During the years, many heuristics and metaheuristics were proposed to solve this problem <cit.>. These methods are computationally expensive and require a substantial amount of time to obtain solutions for larger problem sizes. Therefore, simple heuristic methods, called relocation rules (RRs), are proposed in the literature to solve CRP <cit.>. RRs construct the solution incrementally by determining which relocation should be performed based on the current system information. For that purpose, RRs use a priority function (PF) to rank all possible relocations and select the best one. Since manually designing such PFs is difficult, certain studies investigated the possibility of automatically designing them <cit.>. Due to the growing environmental concerns that arise today, optimising energy related criteria is becoming increasingly important in various optimisation problems, such as vehicle routing <cit.> or various scheduling problems <cit.>. However, in CRP the energy consumption criterion did not receive much attention, with only a few studies focusing on optimising it either directly <cit.>, or as a part of the total cost objective <cit.>. 
Thus, there is a lack of RRs that could be used to efficiently optimise this criterion. To close this gap, we examine the application of hyperheuristics to generate RRs appropriate for optimising the total energy consumption during the retrieval process of containers. Although this problem was tackled in <cit.>, the authors manually defined the mathematical expression of the priority function used to rank all the relocations, and used a GA to optimise certain parameters in that expression. As such, the approach is limited in a sense that the structure of the priority function still needs to be defined manually. Therefore, we revisit this problem and apply GP as a hyperheuristic to generate PFs of an arbitrary structure. We use the same information as the authors in <cit.> to make a fair comparison between the methods, and show that PFs designed by GP construct significantly better solutions than the previously proposed GA. The contributions of this paper can be outlined as follows: * develop a GP based hyperheuristic method to optimise the total energy consumption for CRP; * compare priority functions for relocation rules evolved by GA and GP. § CRP PROBLEM DESCRIPTION We consider the single bay CRP in which the bay consists of S stacks with H tiers. Every stack has a height denoted with h(S) that has to be less or equal to the maximum height H. There are C containers in the bay and a single crane that can move one container at a time. Each container has a different priority, which denotes the order of their retrieval from the yard. To solve the problem, two types of operations can be performed by the gantry crane, relocation and retrieval. Relocation moves a container from the top of one stack to another, which can be done only if the stack to which the relocation is being made has a height smaller than H. The second operation, retrieval, picks a container from the top of the stack and moves it to the truck used for loading, which is located at position 0 denoting the beginning of the bay. Each container in the bay has an ID that determines in which order they need to be retrieved. The container with the smallest ID in the yard is the one that needs to be retrieved next, and is called the target container. If the target container is not located at the top of its stack, it is required to relocate all the containers above it to different stacks. The stack from which the container is moved is called the origin stack, whereas the stack to which the container is moved is called the destination stack. All relocation and retrieval sequences that guarantee the crane can retrieve every container in a predetermined order denote feasible CRP solutions. The goal is to find a sequence that minimises a given objective. In this study we optimise the total energy consumed while retrieving all containers from the yard, which can be defined as <cit.>: TEC=∑_m=1^MW_m(h*h_m+l*l_m+x*x_m), where: * h (l) – energy consumed per ton for one tier hoisted (lowered) * x – energy consumed per ton when moving the crane stack * h_m (l_m) – tiers hoisted (lowered) during move m * x_m – stacks crossed during move m * W_m – moving weight of move m; W_m=W_s+W_c, where W_s denotes the weight of the crane, and W_c denotes the weight of the container moves (equals to 0 if crane was empty) * M – number of moves required to retrieve all containers. Based on <cit.>, values for h, l, x are set to 0.9, 0.02 and 0.08 respectively, while crane weight W_s is equal to 0.5 tons. 
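For concreteness, the short helper below (our own sketch, not code from the cited works) evaluates Eq. (1) for a list of crane moves; how a relocation is split into empty-crane and loaded legs is an assumption of the example, and the constants follow the values quoted above.

def total_energy_consumption(moves, h=0.9, l=0.02, x=0.08, w_crane=0.5):
    """TEC = sum_m W_m * (h*h_m + l*l_m + x*x_m), with W_m = W_s + W_c."""
    tec = 0.0
    for tiers_hoisted, tiers_lowered, stacks_crossed, w_container in moves:
        moving_weight = w_crane + w_container      # w_container = 0 for empty-crane moves
        tec += moving_weight * (h * tiers_hoisted + l * tiers_lowered + x * stacks_crossed)
    return tec

# Example: relocate a 12 t container (hoist 2 tiers, cross 3 stacks, lower 1 tier),
# then retrieve an 8 t container (hoist 3 tiers, cross 2 stacks towards position 0).
moves = [(2, 1, 3, 12.0), (3, 0, 2, 8.0)]
print(total_energy_consumption(moves))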
§ METHODOLOGY §.§ Relocation rules Relocation rules (RRs) represent simple constructive heuristics that iteratively build the solution to CRP. They consist of two parts - the relocation scheme (RS) and the priority function (PF) <cit.>. RS takes care of problem constraints and creates a plan for container retrieval and relocation. If the container that needs to be retrieved next is on top of its stack, it is retrieved, otherwise, the containers above it must be moved to another stack to allow retrieval. RS uses PF to decide which stacks the containers above the target container should be moved to in order to relieve the target containers. This is done iteratively, one container at a time. RS determines the container that needs to be moved next, and PF assigns a numeric value to each stack to which the given container can be moved. Depending on PF, the container is moved to the stack for which the best value was determined. RSs are simple algorithms that are defined manually. Based on the moves that are allowed we distinguish between the restricted and the unrestricted RS. In the restricted version, only containers located above the target container may be moved, while in the unrestricted version, there is no such restriction, i.e., all containers located on top of their stack may be moved. §.§ Using GA and GP for developing PFs Designing a good PF manually is a challenging task, because of which several attempts to automate this process were performed. Partial automation was done in <cit.>, in which the authors manually defined a general expression with a certain number of free parameters that were optimised with GA. In a more recent work the entire PF was developed using GP <cit.>, which achieved significantly better performance than several existing manually designed PFs. In this work, we consider both ways to design PFs to optimise the total energy consumption for CRP. Both GA and GP use the same evolutionary scheme, with the the main difference being the representation of individuals. The GA uses a list of floating point numbers denoting the parameters it optimises, while GP uses the standard expression tree representation. The evaluation is done using a fitness function that evaluates each individual on a set of problems and assigns a numerical value (fitness) to the individual. In each iteration, a 3-tournament selection is used, the two better of the selected individuals are used for crossover, and the worst is replaced by a newly created individual to which the mutation operator is applied with a certain probability. This is repeated until the maximum number of fitness function evaluations is reached. In <cit.>, the authors propose the global retrieval heuristic (GRH) for restricted CRP with container weights to optimise total energy consumption. In this study, we adapt GRH to also work with the unrestricted RS to test whether this can improve the results. When deciding where to move the container, GRH uses a penalty function and selects the stack that received the lowest value. The penalty function is given by expression (<ref>), and the description of the variables can be found in the Tables <ref> and <ref>. Table <ref> outlines the variables that the algorithm uses as inputs, while Table <ref> contains the free parameters that must be set before solving the problem and whose values are between 0 and 1. The idea presented in the paper <cit.> is to apply a GA to determine the free parameters from Table <ref>. 
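Ahead of the full expression displayed below, the following Python sketch (ours) shows how a GRH-style penalty evaluation can look: the relocated container is moved to the candidate stack with the smallest penalty. The precise meaning of the stack features (h_s, l_s, x_s, r_s, t_s, g_s, k_s, n_s) is given in the tables of the cited paper and is not reproduced here, so the `feat` dictionary is simply keyed by those symbols, our reading of (c - t_s)/C is an assumption, and the 12 free parameters form the GA chromosome.

def grh_penalty(feat, params, c_id, S, C, max_height, w_container, w_max):
    """Penalty of one candidate stack; lower is better.  `feat` holds the stack
    features named in the expression below; their exact definitions follow the
    cited paper's tables (not reproduced here)."""
    alpha, beta, gamma, delta, eps, eta, theta, mu, A1, A3, A4, P1 = params
    r = feat["r"]                                   # indicator-type feature (0 or 1)
    return (alpha * (feat["h"] / max_height) ** A1
            + beta * (feat["l"] / max_height) ** A1
            + gamma * (feat["x"] / S) ** A1
            + delta * r * (w_container / w_max) ** (10 * P1)
            + eps * r * ((c_id - feat["t"]) / C) ** A3   # assumes c_id >= feat["t"]
            + eta * (1 - r) * feat["g"]
            + theta * (feat["k"] / S) ** A4
            + mu * feat["n"] / max_height)

def best_destination(candidate_feats, params, **kwargs):
    """Index of the candidate stack with the smallest penalty."""
    penalties = [grh_penalty(f, params, **kwargs) for f in candidate_feats]
    return min(range(len(penalties)), key=penalties.__getitem__)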
The GA uses a simple floating-point encoding, where each individual consists of 12 real numbers, each denoting one of the parameters. GP as a hyperheuristic achieves good results in automatic development of scheduling rules <cit.> and has been successfully applied to the basic CRP problem <cit.>, in which the total number of relocations and crane operation time were optimised. Encouraged by this, in this paper we apply GP to generate PFs to minimise the total energy consumption in CRP. To analyse how GP compares to the GA approach of <cit.>, GP uses the same system information to construct the PF. This means that the terminal set of GP comprises of the variables given in Table <ref> (except A_1, A_3 and A_4). The set of functions used in the development of the penalty function consists of addition, subtraction, multiplication and protected division (returns 1 if the divisor is close to 0). Penalty_s = α(h_s/mxHeight)^A_1 + β(l_s/mxHeight)^A_1 +γ(x_s/S)^A_1 + δ r_s (W_c/W_max)^10P_1 + ϵ r_s (c-t_s/C)^A_3 + η(1-r_s)g_s + θ(k_s/S)^A_4 + μ(n_s/mxHeight) § EXPERIMENTAL SETUP To test the performance of GRH and GP evolved PFs for RRs, the Caserta <cit.> and Zhu <cit.> datasets are used. These original instances are used as the test set, whereas additional instances were generated to be used for training GP and GRH. In order to be able to use the original instances from these two sets with energy criteria, an additional weight with an uniform distribution from 1 to 30 was generated for each container, as was done in <cit.>. The adapted problem instances can be obtained from <http://www.zemris.fer.hr/ idurasevic/CRP/CRP.7z>. Both GP and GA use a population of 1 000 individuals, mutation probability of 0.3 for the restricted and 0.1 for the unrestricted RRs, and 50 000 function evaluations. The maximum tree depth was set to 5 in GP. GP used the subtree, uniform, context preserving, size fair, and one point crossover operators, as well as the subtree, hoist, node complement, node replacement, permutation, and shrink mutation operators <cit.>. The GA used several well known genetic operators, like arithmetic, SBX, BLX-α, and others. For mutation, the uniform mutation operator is used, which generates a random number from the interval [0,1]. In cases when several crossover or mutation operators are defined, a random one is selected and applied for each time the operator needs to be invoked. To obtain a notion on the performance of the algorithms, GP and GA were executed 30 times to evolve RRs using the training set. The best RR obtained in each execution is evaluated on the test set and the total consumed energy for each of these 30 rules is determined. To test whether the obtained results are statistically significant, the Kruskal-Wallis test with the Dunn post hoc test and Bonferroni correction method was used. The obtained differences are considered significant if a p-value below 0.05 was obtained. § RESULTS Figure <ref> outlines the results obtained for RRs generated by GP and GRH. By comparing the two methods used to automatically design RRs, we see that GP evolved PFs consistently achieve a better minimum and median values of the results on both datasets. The first thing to notice is that the restricted versions (marked in the figure with -R next to the name of the approach) of the RRs consistently perform better than their unrestricted variants (marked in the figure with -U next to the name of the approach). 
The reason why this happens is that the unrestricted version introduces additional moves that are performed, which ultimately increases the total consumed energy. As we see, this increase is quite substantial and therefore leads to significant deterioration of the results. Therefore, we can conclude that for this optimisation criteria the unrestricted version of RRs is not appropriate. If we compare the rules obtained with GP and the GRH, we see that the rules generated by GP perform better in almost all cases. This is most evident in the restricted variant, where even the worst solution obtained with GP outperforms the best solution obtained with GRH. The only discernible advantage of GRH is that the results are less dispersed than with GP. However, as these results are generally worse, this has no obvious advantage. On average, the results obtained with GP are about 5% better than those obtained with GRH. The statistical tests showed that restricted RRs evolved by GP perform significantly better than all other RR variants, thus confirming its superiority. Furthermore, the restricted RR variants always perform significantly better than their unrestricted counterparts, proving that the restricted variants are preferable for optimising this criterion. Finally, GP evolved RRs always perform better than GA evolved rules, except in one case in the Zhu dataset, where only GP-U and GRH-R perform equally well. Based on these results, we can conclude that GP is more appropriate for designing new RRs, compared to GA coupled with GRH. § CONCLUSIONS AND FUTURE WORK In this paper, the application of GP to automatically generate RRs for solving CRP with the aim of minimising total energy consumption was investigated. The method was tested on an extensive set of experiments and compared with GRH, where the parameters for RR are adjusted using a GA. The experimental results show that RRs designed using GP significantly outperform those designed using GRH. This shows the versatility of GP in obtaining high quality RRs for CRPs with non-standard criteria, especially considering that it uses the same system properties as GRH. Thus, we see that the additional flexibility of GP to freely design the expression of PF gives it the ability to obtain better solutions than rules where the structure is manually defined and only the corresponding coefficients are optimised. Based on these results, we conclude that GP can be used to generate effective RRs for new problem variants of CRP. In the future work we will propose several new terminal nodes for minimising total energy consumption. Furthermore, it is planned to optimise the energy consumption in a multi-objective scenario with other criteria like the total number of relocations. Finally, the model considering energy minimisation will be extended to other CRP variants including multiple bays and duplicate containers. This research has been supported by the Croatian Science Foundation under project IP-2019-04-4333 and the Spanish State Agency for Research (AEI) under research project PID2019-106263RB-I00. ACM-Reference-Format
http://arxiv.org/abs/2307.03375v1
20230707035558
Chiral odd Chern number lattice supersolidity with tunable unpaired Majorana fermions in a Rydberg-dressed Fermi gas
[ "Shuai Li", "Rui Tian", "Min Liu", "Maksims Arzamasovs", "Bo Liu" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas" ]
Ministry of Education Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi'an Jiaotong University, Xi'an 710049, China [email protected] There is growing interest in the search for chiral Majorana fermions that could arise as the quasi-particle edge state of a two-dimensional topological state of matter. Here we propose a new platform, i.e., a two-dimensional chiral odd Chern number lattice supersolid state, for supporting multiple number-tunable chiral Majorana fermions from a single-component Rydberg-dressed Fermi gas in an optical lattice. The attractiveness of our idea rests on the fact that introducing the competition between two distinct length scales, i.e., the lattice period and the distance of resonant Rydberg dressing, provides a new way to manipulate the spatial dependence of both the strength and sign of the effective Rydberg-dressed interaction. Such a designed effective interaction, it turns out, can induce a previously unreported odd Chern number lattice supersolid state, which is confirmed by both mean-field and Monte Carlo calculations. Furthermore, we also find that the spontaneously formed density modulation resulting from the discrete translational symmetry breaking provides a natural way of tuning the system's topology arising from the superfluidity induced by the U(1) symmetry breaking. It thus provides an alternative way of manipulating the chiral Majorana fermions, which would be useful in topological quantum computation.
The pursuit of chiral Majorana fermions (CMFs) has attracted intensive interest in recent years <cit.>. The non-Abelian braiding of CMFs is considered the basic building block for fault-tolerant topological quantum computation <cit.>. So far, several systems have been proposed to realize CMFs. One example of hosting the chiral Majorana fermion mode (CMFM) is the 2D topological superconductor, like the p_x+ip_y superconductivity in liquid ^3He <cit.> and strontium ruthenates <cit.>, which are in the same universality class as fractional quantum Hall states <cit.>. However, the fate of 2D topological superconductivity in electronic matter remains under debate. In the field of ultracold atoms, this phase was predicted to appear via manipulating p-wave interactions (or an equivalent one), such as utilizing the p-wave Feshbach resonance, artificial spin-orbit coupling or dipolar interactions <cit.>. However, the experimental challenges in the above proposals, such as three-body loss, heating, or ultracold chemical reactions, still await future breakthroughs. Another approach proposed to circumvent these difficulties is to hybridize materials with topological and superconducting properties, e.g., semiconductor-superconductor heterostructures, a helical magnetic structure on top of superconductors, and topological insulators coupled with superconductors <cit.>. This approach nevertheless requires advanced material engineering. Here we report the discovery of a new many-body phase, i.e., a chiral odd Chern number lattice supersolid (CLSS) state, which can support number-tunable CMFMs. Distinct from topological superconductors, in our proposed CLSS state, not only is the U(1) symmetry broken, but the discrete translational symmetry is also broken. More interestingly, it is shown that the density modulation induced by this discrete translational symmetry breaking provides a new tool, which is missing in topological superconductors, for manipulating the topological nature of the CLSS and thus supporting number-tunable CMFMs. We shall introduce this with a specific model of Rydberg-dressed Fermi atoms in an optical lattice, to be described below. Recently, the research on Rydberg atoms and Rydberg-dressed atoms has evolved rapidly <cit.>, where the effective Rydberg-dressed interaction (RDI) shows high controllability and has thus been recognized for its potential in quantum simulation and quantum information <cit.>. Many interesting many-body phases induced by the RDI, such as a supersolid droplet phase, a bright soliton, a topological superfluid and topological density waves, have been predicted <cit.>.
Distinct from previous studies, the new idea here is to utilize the competition between two different length scales, i.e., the period of an optical lattice and the distance of resonant Rydberg dressing <cit.>, as a new tool to manipulate the RDI, which is motivated by the recent experimental advances in Rydberg-dressed atoms in optical lattices <cit.>. Interestingly, it is shown that both the interaction strength and the sign of the RDI can be engineered to be spatially dependent, and such a designed RDI can induce a CLSS state. Effective model.— Let us consider a single-species Fermi gas held in a 2D square optical lattice, where atoms are coupled to their Rydberg states through the double Rydberg dressing scheme <cit.> to generate an effective RDI. Here the ground-state atom is simultaneously coupled to two Rydberg states by applying one blue-detuned and one red-detuned laser together <cit.>. Through tuning the Rabi frequency and detuning of the off-resonant light, the RDI between dressed-state atoms can be captured by the following form <cit.>: V(r)=U_1(r)+U_2(r), where U_j(r)=C̃_6^(j)/(r^6∓R̃_j^6) with j=1,2 describes the distinct RDI induced by the coupling to the different Rydberg states |R̃ _j⟩. C̃_6^(j)=R̃_j^6Ω _j^4/8|Δ _j|^3 is the interaction strength, where the averaged soft-core radius R̃_j=(C_6^(j)/2|Δ _j|)^1/6 and C_6^(j)>0 denotes the van der Waals (vdW) interaction strength of the Rydberg state |R̃ _j⟩, which is assumed to be positive in this work. Ω _j and Δ _j stand for the corresponding Rabi frequency and detuning, respectively. The plus and minus signs refer to the red- and blue-detuned lasers, respectively. When the lattice depth is large enough, the above system can be described by the following Fermi-Hubbard model in the tight-binding regime 𝐇 = -∑_<i,j>t(c_i^†c_j+h.c.)-μ∑_ic_i^†c_i +1/2∑_i≠ jV_i-jc_i^†c_j^†c_jc_i, where t is the hopping amplitude describing tunneling in the 2D plane. i≡ (i_x,i_y) is the site index denoting the lattice site 𝐑_i≡ (ai_x,ai_y), with a being the lattice constant. μ is the chemical potential. The RDI is given by V_i-j=V(𝐑_i-𝐑_j). The attractiveness of our idea rests on the fact that, through simultaneously tuning the lattice constant, Rabi frequencies and detunings, both the interaction strength and the sign of the RDI can be engineered to be spatially dependent in the 2D plane. In the double Rydberg dressing scheme, there is a critical distance R_res determined by the relation 2Δ_1+C_6^(1)/R^6_res=0, at which Rydberg atom pairs are resonantly excited <cit.>. At the same time, another length scale is determined by the lattice constant. Interestingly, the competition of the above two distinct length scales can result in unusual effects on the RDI. For instance, here we consider tuning the two length scales in the regime R_res/√(5)<a<R_res/2, where the RDI shows the following features. It is found that when varying the inter-particle distance in the optical lattice, the sign of the RDI becomes highly tunable, i.e., (i) when |𝐑_i-𝐑_j|<R_res, the RDI is attractive; (ii) when |𝐑_i-𝐑_j|>R_res, the RDI is repulsive (assuming Ω_1^4/|Δ_1|^3>Ω_2^4/|Δ_2|^3). Therefore, the nearest-neighbor V_N, next-nearest-neighbor V_NN and next-next-nearest-neighbor V_NNN interactions in Eq. (1) are attractive, while the other longer-range interactions are repulsive. More interestingly, it is also shown that the longer-range attractions V_NNN and V_NN are engineered to be stronger than the nearest-neighbor attraction V_N.
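To make this engineered sign structure concrete, the short sketch below evaluates V(r)=U_1(r)+U_2(r) on the first few neighbor shells of the square lattice. It is only an illustration of the formulas above, not the paper's numerics: the soft-core radii, coupling strengths and the lattice constant are assumed values chosen inside the quoted window R_res/√5 < a < R_res/2.

```python
import numpy as np

# Illustrative sketch (not the paper's numerics): double-dressed RDI
#   V(r) = C1/(r^6 - R1^6) + C2/(r^6 + R2^6)
# evaluated on the first neighbor shells of a square lattice.  R1 plays the role
# of the resonance distance R_res; all parameter values are assumptions.
R1, R2 = 1.0, 1.0        # averaged soft-core radii of the two dressed channels
C1, C2 = 1.0, 1.0        # effective strengths ~ R^6 Omega^4 / (8 |Delta|^3)
a = 0.45 * R1            # lattice constant inside R_res/sqrt(5) < a < R_res/2

def rdi(r):
    return C1 / (r**6 - R1**6) + C2 / (r**6 + R2**6)

shells = {"V_N   (r = a)        ": a,
          "V_NN  (r = sqrt(2) a)": np.sqrt(2) * a,
          "V_NNN (r = 2a)       ": 2 * a,
          "4th   (r = sqrt(5) a)": np.sqrt(5) * a}
for name, r in shells.items():
    print(f"{name}: r/R_res = {r / R1:.3f}, V = {rdi(r):+.4f}")
# The first three shells (r < R_res) come out attractive, with |V_NNN| > |V_NN| > |V_N|,
# while the fourth shell (r just beyond R_res) is strongly repulsive.
```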
Past studies have shown that when including both attractive V_N and V_NN interactions between lattice fermions, two kinds of many-body phases, i.e., the charge density wave (CDW) and superfluid (SF) phases, can appear. However, the CDW phase can only survive in the limit when V_NN≪ V_N< 0 <cit.>. Here, surprisingly, it is shown that our designed spatially dependent RDI not only lifts that limitation, but also results in the coexistence of CDW and SF and thus induces an interesting lattice-supersolid (SS) phase, which is confirmed by both the mean-field and Monte Carlo studies in the following. First, under the mean-field approximation, to describe the CDW, we rewrite the density distribution of the system as n_i=n_0+Ccos (𝐐·𝐑_i), where 𝐐 represents the periodicity of the density pattern and n_0=∑_i⟨ c_i^†c_i⟩/N_L is the average filling, with N_L being the total number of lattice sites. Therefore, the CDW order parameter can be defined as δ _±𝐐=V(±𝐐)C with V(𝐤)=∑_n≠ 0V_nexp (-i𝐤·𝐫_n). We also introduce the superfluid pairing order parameter as Δ (𝐤)=1/N_L∑_𝐤' V(𝐤-𝐤')⟨ c_-𝐤'c_𝐤'⟩, where ⟨...⟩ stands for the expectation value in the ground state. Through minimizing the ground-state mean-field energy, the order parameters defined above can be obtained (see details in the Supplementary Materials (SM)). We find that there is a threshold of the interaction strength J for supporting the coexistence of superfluid and CDW orders, for instance, as shown in Fig. <ref>(b). Regarding the CDW order, it is shown that the mean-field ground-state energy is minimized at 𝐐 = (π/a ,π/a) (Fig. <ref>(a)), indicating that there is a checkerboard density pattern, and the CDW order parameter can be written as δ≡δ_(π/a ,π/a). For the superfluidity, there is a complex superfluid order parameter with odd parity. As shown in Fig. <ref>(c), we apply a Fourier series expansion to the superfluid order parameter, i.e., Δ (𝐤)=∑_m,nΔ _m,nsin(mk_xa+nk_ya), and it is found that when J increases, the dominant component of Δ(𝐤) behaves as Δ [sin(2k_xa) + i sin(2k_ya)], since we find that Δ≡Δ_2,0=-iΔ_0,2. Because the checkerboard CDW order breaks the discrete translational symmetry and the superfluidity breaks the U(1) symmetry, the coexistence of these two orders leads to an SS phase. We thus obtain the zero-temperature phase diagram as shown in Fig. <ref>. When fixing a certain average filling, there is a threshold of the interaction strength J separating the SF and SS. Below that threshold, the ground state is a superfluid, where the CDW order vanishes. When further increasing the interaction strength, the superfluid and CDW coexist, indicating that the ground state is an SS phase. To further verify the existence of the CDW and superfluid orders, we have performed a variational Monte Carlo (VMC) calculation on a 12× 12 lattice system with periodic boundary conditions <cit.>. Regarding the superfluid order in the ground state, we study the pairing correlation through the VMC method. For instance, considering the dominant pairing component Δ [sin(2k_xa) + i sin(2k_ya)], the correlation can be defined as P(𝐑)=1/2N_L∑_𝐑_i⟨Δ ^†(𝐑_i)Δ (𝐑_i+𝐑)+Δ ( 𝐑_i)Δ ^†(𝐑_i+𝐑)⟩, with Δ (𝐑_i)≡ c_ic_i+2e_x-c_ic_i-2e_x+i(c_ic_i+2e_y-c_ic_i-2e_y). 𝐑 is a 2D vector in the xy-plane. As shown in Fig. <ref>(a), the long-ranged saturation behavior of the pairing correlation P(𝐑) indicates the existence of superfluid pairing order in the ground state. To verify the existence of the CDW, we calculate the density structure factor defined as S(𝐐)=1/N_L^2∑_i,j⟨ c_i^†c_ic_j^†c_j⟩ e^i𝐐· (𝐑_𝐢-𝐑_𝐣).
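As a minimal illustration of how checkerboard order shows up in S(𝐐), the sketch below evaluates the structure factor for a classical checkerboard density profile on a 12×12 lattice. The filling and modulation amplitude are arbitrary toy values, and the classical product n_i n_j stands in for the quantum expectation value that the VMC calculation would estimate.

```python
import numpy as np

# Toy illustration (not the VMC calculation): density structure factor S(Q) for a
# classical checkerboard profile n_i = n0 + C cos(Q0 . R_i) with Q0 = (pi/a, pi/a).
L, a, n0, C = 12, 1.0, 0.5, 0.25
ix, iy = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
n = n0 + C * (-1.0) ** (ix + iy)                       # checkerboard density pattern
R = np.stack([a * ix.ravel(), a * iy.ravel()], axis=1)

def S(Q):
    rho_Q = np.sum(n.ravel() * np.exp(1j * R @ Q))     # classical stand-in for <n_i n_j>
    return np.abs(rho_Q) ** 2 / L**4

print(S(np.array([np.pi / a, np.pi / a])))   # = C^2: the checkerboard peak at (pi/a, pi/a)
print(S(np.array([np.pi / a, 0.0])))         # ~ 0 away from the ordering vector
```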
The peak in the density structure factor provides information on the CDW order. As shown in Fig. <ref>(b), when J is beyond the threshold, the structure factor S(𝐐) is peaked at (π /a,π /a), indicating the existence of a checkerboard density pattern in the ground state, which is consistent with our mean-field calculations as shown in Fig. <ref>. Chiral odd Chern number lattice supersolids.— In the following, we will study the topological nature of the SS phase. As shown in Fig. <ref>, there are three topologically distinct SS phases. One topologically trivial region and two topologically non-trivial regions can be distinguished by the Chern number C =i/2π∑_E_n<0∫ dk_xdk_y(⟨∂ _k_yϕ _n(k)|∂ _k_xϕ _n(k)⟩ -⟨∂ _k_xϕ _n(k)|∂ _k_yϕ _n(k )⟩ ), where ϕ _n(k) is the eigenstate with energy E_n of Eq. (1) under the mean-field approximation. We find that the topologically trivial SS-I phase is characterized by a zero Chern number, while the two topological regions, SS-II and SS-III, are featured by non-zero Chern numbers. More interestingly, we find that both SS-III and SS-II are characterized by an odd Chern number, i.e., C=1 and C=3, respectively, which can support unpaired CMFMs, to be shown below. To gain more insight into the topological property of the system, we have applied a series of unitary transformations (see details in the SM) to recast the BdG Hamiltonian in a much clearer form as ℋ_BdG≡( [ H_SF^' 0_2× 2; 0_2× 2 H_SF^'' ] ), with H_SF^'=( [ √(ξ _𝐤^2+δ^2)-μ Δ (𝐤); Δ ^∗(𝐤) -√(ξ _𝐤^2+δ^2)+ μ ] ) and H_SF^''=( [ -√(ξ _𝐤^2+δ^2)-μ Δ (𝐤); Δ ^∗(𝐤) √(ξ _𝐤^2+δ^2)+ μ ] ). Here, to simplify the analysis, we take the dominant component of the superfluid order, and Δ(𝐤) in Eq. (<ref>) is approximated as Δ [sin(2k_xa) + i sin(2k_ya)]. Then, from Eq. (<ref>), we can understand the topology of the system. First, we find that H_SF^'' is always topologically trivial when considering μ>0, which is identified by its vanishing Chern number. Second, we also find that there are three distinct topological regions for H_SF^': (i) for μ >√(δ^2+16t^2) or 0<μ <δ, H_SF^' is in the topologically trivial region with Chern number C=0; (ii) for √(δ^2+4t^2)<μ <√(δ^2+16t^2), H_SF^' is tuned into a topological region with Chern number C=1; (iii) for δ <μ <√(δ^2+4t^2), a topological phase characterized by the Chern number C=3 is achieved. Therefore, distinct topological regions of ℋ_BdG can be engineered by tuning the CDW order. Remarkably, such a scheme can be naturally achieved in our proposed SS phase when varying the interaction strength and the average filling. As shown in Fig. <ref>, in the SS region, there are three distinct topological regions. When the system is close to half filling, the SS phase is in the topologically trivial region, i.e., the SS-I phase. When further increasing the average filling, there are two topological phase transitions and the SS phase enters two distinct topologically non-trivial regions with two different Chern numbers, C=3 and C=1, respectively. Therefore, a CLSS phase is achieved. Multiple number-tunable chiral Majorana fermions.— Since the Chern number counts the number of CMFMs at the edge of the system, an odd Chern number corresponds to an unpaired chiral Majorana edge mode <cit.>, which constitutes a non-Abelian phase of matter. Therefore, our proposed CLSS phases with distinct odd Chern numbers can support number-tunable unpaired chiral Majorana edge modes.
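This Chern-number counting for H_SF^' can be checked numerically with the standard Fukui-Hatsugai-Suzuki lattice method, as in the sketch below. The values of t, δ, Δ and the sampled chemical potentials are illustrative assumptions (one μ in each window quoted above); we read ξ_𝐤 in H_SF^' as the bare dispersion -2t(cos k_x a + cos k_y a), with μ entering separately, and the overall sign of C depends on conventions. With these assumptions the run should reproduce the sequence |C| = 0, 3, 1, 0.

```python
import numpy as np

# Illustrative sketch (not the paper's code): Chern number of the occupied band of
# H'_SF(k) = [[E1(k), Delta(k)], [Delta*(k), -E1(k)]],  E1(k) = sqrt(xi_k^2 + delta^2) - mu,
# Delta(k) = Delta0 * [sin(2 kx a) + 1j * sin(2 ky a)],
# computed with the Fukui-Hatsugai-Suzuki method on the reduced (CDW) Brillouin zone,
# i.e. the parallelogram spanned by b1 = (pi/a, pi/a) and b2 = (pi/a, -pi/a),
# over which H'_SF is exactly periodic.  Parameters are assumed values.
t, delta, Delta0, a = 1.0, 0.5, 0.4, 1.0
b1 = np.array([np.pi / a,  np.pi / a])
b2 = np.array([np.pi / a, -np.pi / a])

def occupied_state(k, mu):
    kx, ky = k
    xi = -2 * t * (np.cos(kx * a) + np.cos(ky * a))
    e1 = np.sqrt(xi**2 + delta**2) - mu
    off = Delta0 * (np.sin(2 * kx * a) + 1j * np.sin(2 * ky * a))
    _, vec = np.linalg.eigh(np.array([[e1, off], [np.conj(off), -e1]]))
    return vec[:, 0]                      # lower (occupied) quasiparticle band

def chern(mu, N=60):
    u = [[occupied_state(m / N * b1 + n / N * b2, mu) for n in range(N)]
         for m in range(N)]
    c = 0.0
    for m in range(N):
        for n in range(N):
            u1, u2 = u[m][n], u[(m + 1) % N][n]
            u3, u4 = u[(m + 1) % N][(n + 1) % N], u[m][(n + 1) % N]
            c += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3) *
                          np.vdot(u3, u4) * np.vdot(u4, u1))
    return c / (2 * np.pi)

# one chemical potential in each analytic window quoted in the text
for mu in [0.3, 1.2, 2.5, 4.5]:
    print(f"mu = {mu}: C = {round(chern(mu))}")
```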
To show these edge modes explicitly, a cylinder geometry is chosen in the xy-plane, i.e., open (periodic) boundary conditions are imposed along the x (y) direction, respectively. The edge excitations can then be obtained (see details in the SM). For instance, as shown in Fig. <ref>(a), for the SS-II phase, it is shown that all the bulk modes are gapped and there are three pairs of chiral edge states located at the two outer edges of the system, because the Chern number of the SS-II phase is C=3, satisfying the so-called bulk-edge correspondence. More interestingly, we also find that among these chiral edge modes there are six zero-energy edge states. Their wavefunctions can be expressed as (u_k_y,i_x^0,v_k_y,i_x^0,u_k_y^',i_x^0,v_k_y^',i_x^0)=(U_k_y,i_xe^iθ _k_y,i_x, V_k_y,i_xe^-iθ _k_y,i_x, U_k_y^',i_xe^iθ _k_y^',i_x, V_k_y^',i_xe^-iθ _k_y^',i_x), which satisfy u_k_y(k_y^'),i_x^0=v_k_y(k_y^'),i_x^0* on the left edge and u_k_y(k_y^'),i_x^0=-v_k_y(k_y^'),i_x^0* on the right edge, for instance, as shown in Fig. <ref>. Therefore, these six zero-energy eigenstates support three unpaired chiral Majorana fermions localized at each edge of the system. For the SS-III phase, as shown in Fig. <ref>(b), since the Chern number is C=1, it is found that there are two zero-energy eigenstates which support one unpaired chiral Majorana fermion at each edge of the system. Therefore, number-tunable unpaired chiral Majorana edge modes can be achieved in our proposed CLSS phase, which would offer an intriguing possibility pointing to braiding statistics and applications to topological quantum computing. Conclusion.— We find a new type of topological lattice supersolid state of a single-component Rydberg-dressed Fermi gas in an optical lattice, which arises from the competition between two distinct length scales, i.e., the lattice period and the distance of resonant Rydberg dressing. Such a scheme thus opens a new way of engineering the RDI, and new types of many-body phases can be achieved, which should be observable in future experiments. Acknowledgment.— This work is supported by the National Key R&D Program of China (2021YFA1401700), NSFC (Grants No. 12074305, 12147137, 11774282), the National Key Research and Development Program of China (2018YFA0307600), and the Xiaomi Young Scholar Program. We also thank the HPC platform of Xi'an Jiaotong University, where our numerical calculations were performed. Supplementary Material: Chiral odd Chern number lattice supersolidity with tunable unpaired Majorana fermions in a Rydberg-dressed Fermi gas § MEAN-FIELD METHOD In this section, we will provide more details about the mean-field method. Under the mean-field approximation, the Hamiltonian in Eq. (1) of the main text can be rewritten in momentum space as 𝐇_MF = ∑_𝐤ξ _𝐤c_𝐤^†c_𝐤+∑_𝐤(Δ (𝐤)/2c_𝐤^†c_-𝐤^†+h.c.) +∑_𝐐_𝐦=± 𝐐∑_𝐤(δ_𝐐_𝐦/4c_𝐤^†c_𝐤+𝐐_𝐦+h.c.)-E_I, where ξ _𝐤=ε _𝐤-μ and E_I=1/2∑_i≠ jV_i-j(-n_in_j+|⟨ c_jc_i⟩ |^2). Here, ε _𝐤=-2t(cosk_xa+cosk_ya) is the band dispersion and μ is the chemical potential. Through diagonalizing Eq. (S1) via the Bogoliubov method, we can obtain the mean-field ground-state energy of the system as E_MF=∑_nE_nΘ (-E_n)+1/2∑_𝐤ξ _𝐤-E_I, where E_n labels the n-th eigenenergy of Eq. (S1) and Θ is the Heaviside step function. The order parameters defined in Eq. (S1) can be obtained by minimizing E_MF for a certain average filling of the system determined by the relation n_0=-1/N_L∂E_MF/∂ μ.
§ VARIATIONAL MONTE CARLO METHOD In this section, we will provide a detailed description of the variational Monte Carlo (VMC) method used in this work. The VMC method is one of promising methods to study strongly correlated systems  <cit.> and there is no sign problem in studies of fermionic systems since the weight of Monte Carlo sampling is positive definite. The wave function employed in our many-variable variational Monte Carlo (mVMC) simulation can be expressed as |ϕ _ref ⟩ =𝒫_J|ϕ _pair⟩, where |ϕ _pair⟩ =[∑_i,j=1^N_Lf_ijc_i^†c_j^†]^N/2| 0⟩ is the Pfaffian pairing wave function and 𝒫_J=exp[ 1/2 ∑_i≠ jv_ij( n_i-1) ( n_j-1) ] is the Jastrow factor, which accounts for long-ranged density correlations. Here N refers to the number of fermions. Such a flexible variational wavefunction with a large number of variational parameters can be simultaneously optimized by using the stochastic reconfiguration (SR) method, which can be applied to efficiently compute the ground state of our proposed system. § THE TOPOLOGICAL NATURE OF THE SYSTEM The topological nature of the system can be understood through the Bogliubov-de Gennes (BdG) Hamiltonian H_BdG=( [ ξ _𝐤 Δ (𝐤) δ 0; Δ ^∗(𝐤) -ξ _-𝐤 0 -δ; δ 0 ξ _𝐤+𝐐 Δ (𝐤+𝐐 ); 0 -δ Δ ^∗(𝐤+𝐐) -ξ _-𝐤 -𝐐 ] ) where the Nambu spinors are chosen as (c_𝐤^†,c_-𝐤 ,c_𝐤+𝐐^†,c_-𝐤-𝐐). To simplify the analysis, here we take the dominant component of superfluid order and Δ(𝐤) is approximated as Δ [sin(2k_xa) + i sin(2k_ya)]. Then, we apply a series of unitary transformations to the BdG Hamiltonian in Eq. (S2) and we obtain ℋ_BdG = Λ ^†H_BdGΛ = ( [ E_1^π Δ (𝐤) 0 0; Δ ^∗(𝐤) -E_1^π 0 0; 0 0 E_2^π Δ (𝐤); 0 0 Δ ^∗(𝐤) -E_2^π ] ) where Λ =T^-1Γ T, with T=( [ 1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1 ] ) and Γ =( [ Γ _CDW 0_2× 2; 0_2× 2 Γ _CDW ] ). Here, Γ _CDW can be constructed through the relation Γ _CDW^†( [ ξ _𝐤 δ; δ ξ _𝐤+𝐐 ] ) Γ _CDW=( [ E_1(𝐤) 0; 0 E_2(𝐤) ] ) with E_1(𝐤)=√(ξ _𝐤^2+δ^2) -μ and E_2(𝐤)=-√(ξ _𝐤^2+δ^2) -μ.Then, ℋ_BdG can be rewritten, i.e., ℋ_BdG≡( [ H_SF^^' 0_2× 2; 0_2× 2 H_SF^'' ] ), as shown in the main text. § EDGE EXCITATIONS To show the edge excitations of our proposed CLSS phase, we consider a cylinder geometry in the xy-plane, i.e., choosing the open (periodic) boundary conditions along the x(y) directions, respectively. Then, the edge excitations can be obtained through solving the following eigen-problem ∑_j_x( [ H_i_x,j_x(k_y) Δ _i_x,j_x(k_y) δδ _i_x,j_x 0; Δ _i_x,j_x^∗(k_y) -H_i_x,j_x(k_y) 0 -δδ _i_x,j_x; δδ _i_x,j_x 0 H_i_x,j_x(k_y^') Δ _i_x,j_x(k_y^'); 0 -δδ _i_x,j_x Δ _i_x,j_x^∗(k_y^') -H_i_x,j_x(k_y^') ] ) ( [ u_k_y,j_x^n; v_k_y,j_x^n; u_k_y^' ,j_x^n; v_k_y^' ,j_x^n ] ) =E_n( [ u_k_y,i_x^n; v_k_y,i_x^n; u_k_y^',i_x^n; v_k_y^',i_x^n ] ) , where the momentum k_y^'=k_y+π/a. H_i_x,j_x(k_y)=-t( δ _i_x+1,j_x+δ _i_x,j_x+1)+(-2tcos k_ya-μ )δ _i_x,j_x, Δ _i_x,j_x(k_y)=∑_m,nΔ _m,n/2i(e^inak_yδ _i_x+m,j_x-e^-inak_yδ _i_x,j_x+m).
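As a quick numerical sanity check of this block-diagonalization (not part of the paper), the sketch below builds the 4×4 BdG matrix of Eq. (S2) at a random momentum and compares its spectrum with the eigenvalues ±√(E_1²+|Δ|²), ±√(E_2²+|Δ|²) implied by the two 2×2 blocks. The parameter values are arbitrary, and we interpret the ξ_𝐤 appearing inside E_1 and E_2 as the bare dispersion ε_𝐤, with the chemical potential entering separately; that reading of the notation is our assumption.

```python
import numpy as np

# Numerical check (illustrative): spectrum of the 4x4 BdG matrix of Eq. (S2)
# versus the eigenvalues implied by the claimed 2x2 block form.
t, mu, delta, Delta0, a = 1.0, 1.3, 0.6, 0.35, 1.0    # arbitrary test values
rng = np.random.default_rng(0)
kx, ky = rng.uniform(-np.pi / a, np.pi / a, size=2)

eps  = -2 * t * (np.cos(kx * a) + np.cos(ky * a))      # bare dispersion eps_k
epsQ = -eps                                            # eps_{k+Q}, Q = (pi/a, pi/a)
xi, xiQ = eps - mu, epsQ - mu                          # xi_k = eps_k - mu, as in Eq. (S1)
D = Delta0 * (np.sin(2 * kx * a) + 1j * np.sin(2 * ky * a))   # Delta(k) = Delta(k+Q)

# Nambu basis (c_k^dag, c_-k, c_{k+Q}^dag, c_{-k-Q}) as in Eq. (S2)
H = np.array([[xi,          D,           delta,       0     ],
              [np.conj(D), -xi,          0,          -delta ],
              [delta,       0,           xiQ,         D     ],
              [0,          -delta,       np.conj(D), -xiQ   ]])

E1 = np.sqrt(eps**2 + delta**2) - mu
E2 = -np.sqrt(eps**2 + delta**2) - mu
analytic = sorted(s * np.sqrt(E**2 + abs(D)**2) for E in (E1, E2) for s in (+1, -1))
numeric  = sorted(np.linalg.eigvalsh(H))
print(np.allclose(numeric, analytic))   # True: the two forms share the same spectrum
```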
http://arxiv.org/abs/2307.00397v2
20230701181227
Improving CNN-based Person Re-identification using score Normalization
[ "Ammar Chouchane", "Abdelmalik Ouamane", "Yassine Himeur", "Wathiq Mansoor", "Shadi Atalla", "Afaf Benzaibak", "Chahrazed Boudellal" ]
cs.CV
[ "cs.CV" ]
Improving CNN-based Person Re-identification using score Normalization Ammar Chouchane1, Abdelmalik Ouamane1, Yassine Himeur7, Wathiq Mansoor7, Shadi Atalla7, Afaf Benzaibak1 and Chahrazed Boudellal1 1 Laboratory of LI3C, University of Biskra, Biskra, Algeria 7 College of Engineering and Information Technology, University of Dubai, Dubai, UAE Person re-identification (PRe-ID) is a crucial task in security, surveillance, and retail analysis, which involves identifying an individual across multiple cameras and views. However, it is a challenging task due to changes in illumination, background, and viewpoint. Efficient feature extraction and metric learning algorithms are essential for a successful PRe-ID system. This paper proposes a novel approach for PRe-ID, which combines a Convolutional Neural Network (CNN) based feature extraction method with Cross-view Quadratic Discriminant Analysis (XQDA) for metric learning. Additionally, a matching algorithm that employs the Mahalanobis distance and a score normalization process to address inconsistencies between camera scores is implemented. The proposed approach is tested on four challenging datasets, including VIPeR, GRID, CUHK01, and PRID450S, and has demonstrated its effectiveness through promising results. PRe-ID, Score Normalization, XQDA, CNN, feature extraction. § INTRODUCTION Person re-identification, or PRe-ID, involves recognizing an individual across different images or videos captured in a surveillance system <cit.>. This is critical in various real-time applications such as person retrieval, video monitoring, public safety, long-term human behavior analysis, and cross-camera tracking <cit.>. The use of CNN models has become popular in recent deep learning architectures, either through the development of new models or through the use of pretrained models, known as transfer learning <cit.>. The process of person re-identification, which involves matching individuals detected by different cameras, usually consists of two main steps, as shown in Figure <ref>. These steps are: (1) feature extraction (FE), which involves obtaining more reliable, robust, and concise features than raw pixel data, and (2) training the system with ample data to allow it to perform re-identification automatically during online testing. The Gaussian of Gaussian (GOG) and Local Maximal Occurrence (LOMO) descriptors are the two most commonly used FE methods in the field of person re-identification <cit.>. The GOG descriptor, proposed by Matsukawa et al. <cit.>, involves dividing the image into k rectangular blocks and representing each block by 4 Gaussian distributions in different color spaces (RGB, Lab, HSV, and nRnG). On the other hand, the LOMO descriptor, introduced by Liao et al. <cit.>, extracts two types of features (scale-invariant local ternary patterns and HSV color histograms) from the image by dividing it into horizontal patches and calculating the occurrence of local geometric features. The aim of metric learning is to learn a metric that can effectively compare two pedestrian images. Popular examples of metric learning approaches include KISSME <cit.> and Cross-view Quadratic Discriminant Analysis (XQDA) <cit.>.
The PRe-ID system uses three main approaches to tackle the problem, as shown in Fig. 2. These approaches include Feature Descriptor Learning, Metric Learning, and Deep Learning. The Feature Descriptor Learning methods aim to learn distinctive and representative features from pedestrian images that distinguish the appearances of individuals in the datasets <cit.>. Many effective techniques have been proposed in this category, such as Gaussian of Gaussian (GOG) <cit.> and LOMO (Local Maximal Occurrence) descriptors <cit.> as well as various other techniques discussed in <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. Metric learning is a technique utilized to enhance the precision of machine learning models. It trains a model to compute the similarity between pedestrians images captured from different cameras to achieve greater matching accuracy <cit.>. On the other hand, deep learning is a sophisticated task that has gained popularity for enhancing PR systems and achieving high performance <cit.>.The deep learning approaches can further be classified into three categories such as CNN, RNN, and GAN-based methods. Figure <ref> summarizes a taxonomy of PRe-ID of approaches. Overall, the main contributions of this work can be summarized as follows: * Utilizing CNN features as a transfer learning process for effective feature representation. * Enhancing the discriminative power in person re-identification by implementing Cross-view Quadratic Discriminant Analysis (XQDA), a robust metric learning method that employs the Mahalanobis distance for matching. * Applying a score normalization technique which greatly improved results on four benchmark datasets: VIPeR <cit.>, GRID <cit.>, CUHK01 <cit.>, and PRID450s <cit.>. * Evaluating and comparing the proposed approach against the state-of-the-art methods. The paper is organized as follows: Section 2 outlines the methodology and details of the proposed approach, including the CNN feature model, the XQDA metric learning technique, and the score normalization process. Experimental results are reported in Section 3. Finally, the conclusions and future work are discussed in Section 4. § METHODOLOGY AND DESCRIPTION OF THE PROPOSED APPROACH §.§ FE based on a pretrained model of CNN Person Re-identification (Pre-ID) entails the process of recognizing an individual by correlating the identity of a probe image to a set of images. This task is fundamentally confronted by two main challenges: Feature Extraction (FE) and metric learning. In an effort to address these obstacles, this study leverages a pretrained CNN model derived from the ImageNet dataset in conjunction with the XQDA metric learning method, which exhibited efficacy in our experimental trials. Specifically, the study obtained 4,096-dimensional feature vectors by extracting the features from the fully connected layer 7 (FC7) of the learned CNN, as delineated in Figure <ref>. The XQDA metric learning method was deployed to bolster the discriminative capability of the target dataset for the task of person re-identification. The research utilized the AlexNet architecture, comprising eight layers in total; the initial five are convolutional, while the last three are fully connected. In particular, the fully connected seventh layer serves as the feature vector, representing the unique biometric signature of each individual within the dataset. §.§ Features projection and metric learning using XQDA algorithm The XQDA metric learning algorithm is a modification of the KISS metric algorithm <cit.>. 
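The paper states only that 4,096-dimensional FC7 features of a pretrained AlexNet are used; the snippet below is one plausible way to obtain such features with torchvision. The image path, preprocessing choices, and the use of the ImageNet-pretrained weights shipped with torchvision are assumptions, not details given by the authors.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Minimal sketch (assumed setup, not the authors' code): extract 4096-d FC7
# activations from an ImageNet-pretrained AlexNet for one pedestrian image.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
# keep the classifier only up to the second fully connected layer (FC7) and its ReLU
alexnet.classifier = nn.Sequential(*list(alexnet.classifier.children())[:6])
alexnet.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("pedestrian.jpg").convert("RGB")).unsqueeze(0)  # placeholder path
with torch.no_grad():
    fc7 = alexnet(img)        # shape (1, 4096): the item's visual signature
print(fc7.shape)
```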
XQDA learns a discriminative feature subspace in a low-dimensional space through the use of the Fisher criterion <cit.>. It is preferred to have a lower-dimensional space (ℝ^r, where r < d), as the initial dimension d of the feature vector is quite large, leading to better classification results. The intra-person difference set (X_s) of n similar pairs is represented as a matrix in ℝ^d× n, while the extra-person difference set (X_D) of n dissimilar pairs is represented as a matrix in ℝ^d× n, where d is the dimension of the feature vectors. Each column in X_s indicates the difference between a similar pair, and each column in X_D represents the difference between a dissimilar pair. The covariance matrices are calculated using the following equations: Σ_s =1/n X_s X^T_s Σ_D =1/n X_D X^T_D XQDA calculates the distance between the training samples (μ and ν) from two camera views as follows: d(μ ,ν) = (μ - ν)^T W ( Σ^-1_s' - Σ^-1_D') W^T (μ - ν) The new subspace projection matrix (W) to be learned by XQDA is represented in this equation. The covariance matrices, Σ_s' and Σ_D', are obtained by transforming Σ_s and Σ_D, respectively, through W^T. These subspace variances are used to differentiate between intra-person and extra-person differences. As a result, the projection direction W is estimated through the following equation. J(W) = W^T Σ_s W/W^T Σ_D W The equation can be converted into a generalized eigenvalue decomposition problem. The r eigenvectors corresponding to the largest eigenvalues of Σ_D^-1 Σ_s form the projection matrix W = [w_1, w_2, ..., w_r]. The output metric matrix is represented as M = Σ^-1_s' - Σ^-1_D', which can be used to calculate the distance between two given feature vectors. §.§ Matching based on the Mahalanobis distance The Mahalanobis distance is a popular and effective metric for comparing the similarity between two data points. It is often utilized to improve the classification process <cit.>. Given m data points x_i ∈ℝ^m, the objective is to find a matrix M such that the distance metric is expressed as follows. d_M(x_i,x_j)=(x_i - x_j)^T M (x_i - x_j) where x_i and x_j are two vectors (samples), and M is a positive semidefinite matrix. §.§ Score normalization The Mahalanobis distance is calculated using global samples from different cameras with varying image resolutions, resulting in heterogeneous scores. To make these scores comparable, score normalization is performed. A min-max normalization function was applied in this work, which addresses bias and variation that could affect the comparison scores <cit.>. Score normalization aims to mitigate the variations in similarity scores that arise from different camera viewpoints. By normalizing the scores, we can effectively reduce the influence of these factors and enhance the discriminative power of the system. This normalization step improved the results and is defined as follows. N=(x-x_min)/(x_max - x_min) where x_min is the minimum score vector and x_max is the maximum score vector. § IMPLEMENTATIONS AND RESULTS §.§ Datasets and protocols The proposed approach utilizing CNN features has been tested on four demanding datasets: PRID450s, VIPeR, GRID and CUHK01. The traits of these datasets are detailed in Table <ref>. Examples from the utilized datasets are depicted in Figure <ref>. §.§ Evaluation metrics To assess the effectiveness of our proposed approach in PRe-ID systems, we employed a 10-fold cross-validation methodology.
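The following sketch is a simplified, self-contained reading of the XQDA equations above (covariances of the difference sets, the generalized eigenvalue problem for W, the metric M, the Mahalanobis-style score, and min-max score normalization). It uses synthetic difference vectors and toy dimensions, and is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

# Simplified illustration of the equations above (toy data, not the authors' code).
rng = np.random.default_rng(0)
d, n, r = 64, 500, 8                      # feature dim, #pairs, subspace dim (toy values)
Xs = rng.normal(scale=0.5, size=(d, n))   # differences of similar pairs (small spread)
Xd = rng.normal(scale=2.0, size=(d, n))   # differences of dissimilar pairs (large spread)

Sig_s = Xs @ Xs.T / n
Sig_d = Xd @ Xd.T / n

# J(W) leads to the generalized eigenproblem Sig_s w = lambda Sig_d w;
# following the text, keep the r eigenvectors with the largest eigenvalues.
vals, vecs = eigh(Sig_s, Sig_d)
W = vecs[:, np.argsort(vals)[::-1][:r]]   # (d, r) projection matrix

Sig_s_p = W.T @ Sig_s @ W                 # Sigma_s' and Sigma_D' in the learned subspace
Sig_d_p = W.T @ Sig_d @ W
M = np.linalg.inv(Sig_s_p) - np.linalg.inv(Sig_d_p)

def score(u, v):
    """(u - v)^T W M W^T (u - v): the Mahalanobis-style matching score."""
    z = W.T @ (u - v)
    return float(z @ M @ z)

def minmax(s):
    """Min-max normalization of a vector of raw scores, as in the score-normalization step."""
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

probe, gallery = rng.normal(size=d), rng.normal(size=(10, d))
print(minmax([score(probe, g) for g in gallery]))
```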
This cross-validation involved randomly dividing the image set into ten subsets, with nine sets used for training and one set for testing in each fold. To evaluate the performance, we used the Cumulative Matching Characteristic (CMC) metric, which is a popular method for re-identification performance evaluation. The CMC curve displays the rank matching rates, where a rank-r matching rate refers to the percentage of probe images that have correct matches within the top r ranks among the gallery images. To conduct our experiments, we split the datasets randomly into training and test sets, with an equal number of individuals in each set. It is worth noting that for the GRID dataset, the gallery collection includes an additional 775 pictures. The number of probe images in all datasets is equivalent to the number of gallery images. To evaluate the effectiveness of the proposed strategy, we employed 10-fold cross-validation, with CMC curve averages being reported. The CMC score represents the success rate of accurately identifying the matched image at each rank. Specifically, we focused on the accuracy at rank-1 since it indicates the probability of the correct image being displayed as the top result. In addition, we also considered the accuracy at ranks 5, 10, and 20 during the evaluation process. §.§ Discussions To address the challenges of person re-identification, we investigated the effectiveness of the XQDA metric learning technique along with the Mahalanobis distance metric, which helps in enhancing the classification outcomes. We also employed score normalization to eliminate any potential biases or discrepancies in the scores, which could lead to unfair comparisons. Our results showed that the proposed method was effective in achieving improved performance on the PRID450s, VIPeR, GRID, and CUHK01 datasets, as demonstrated by the CMC curves. In particular, the CMC curves with score normalization were significantly higher than those without it, which highlights the importance of this step in achieving fair comparisons. These findings suggest that the proposed XQDA metric learning approach, along with the Mahalanobis distance metric and score normalization, can significantly improve the accuracy of person re-identification. The application of this technique may have significant implications for the development of robust and effective surveillance systems in the future. However, it is important to note that further studies are needed to assess the generalizability of these findings to other datasets and scenarios. Furthermore, to provide a comprehensive overview of the re-identification accuracy for each dataset, we included a table (Table <ref>) that reports the performance metrics at various ranks, including rank-1, rank-5, rank-10, and rank-20. To better understand the impact of score normalization on the re-identification accuracy, Figure <ref> specifically highlights the rank-1 performance in terms of the CMC curves of each dataset without score normalization. Additionally, Figure <ref> portrays the accuracy performance of the proposed scheme on the four datasets. The results of our experiments suggest that the proposed XQDA metric learning approach, in combination with the Mahalanobis distance metric and score normalization, can effectively improve the performance of person re-identification on the four challenging datasets. As indicated in Table <ref>, the score normalization improved the rank-1 rate on all datasets.
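For reference, the rank-r matching rate described earlier in this section can be computed from a probe-by-gallery distance matrix as in the short sketch below; the identities and distances are toy values, and the code only illustrates the CMC definition rather than reproducing the reported numbers.

```python
import numpy as np

# Illustrative CMC computation (not the authors' evaluation code): rank-r is the
# fraction of probes whose correct gallery identity appears within the r smallest distances.
def cmc(dist, probe_ids, gallery_ids, ranks=(1, 5, 10, 20)):
    order = np.argsort(dist, axis=1)                  # gallery indices sorted by distance
    ranked_ids = np.asarray(gallery_ids)[order]
    hit_rank = np.argmax(ranked_ids == np.asarray(probe_ids)[:, None], axis=1)
    return {r: float(np.mean(hit_rank < r)) for r in ranks}

# toy example: 4 probes, 6 gallery items, identities 0..5
rng = np.random.default_rng(1)
dist = rng.random((4, 6))
print(cmc(dist, probe_ids=[0, 1, 2, 3], gallery_ids=[0, 1, 2, 3, 4, 5]))
```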
Without normalization, the rank-1 rates were 42.16%, 25.20%, 41.51% and 64.22% for VIPeR, GRID, CUHK01, and PRID450s, respectively. With score normalization, the rates improved to 43.16%, 35.68%, 45.24% and 64.32%, respectively. The GRID dataset showed a particularly significant improvement of 10.48% in rank-1 after score normalization. We attribute this improvement in the re-identification system to the use of score normalization on our datasets. §.§ Comparison with the state-of-the-art In this section, we assess the effectiveness of the proposed approach by contrasting it with several established re-identification approaches based on the rank-1 rate. The summarized findings of this comparison can be observed in Table <ref>. From this table, we can observe the strength of our approach, as we achieved almost the best results on all four databases. This confirms the robustness of our proposed approach despite the variations in the data. The impressive performance exhibited by our method across multiple datasets highlights its capability to handle different scenarios and reinforces its potential for real-world applications. § CONCLUSION This research work addresses the challenging task of PRe-ID in security, surveillance, and retail analysis, which involves identifying an individual across multiple cameras and views. The proposed approach combines a CNN-based FE method with XQDA for metric learning. In addition, a matching algorithm that employs the Mahalanobis distance and a score normalization process to address inconsistencies between camera scores are also implemented. The evaluation results on four challenging datasets, including VIPeR, GRID, CUHK01, and PRID450S, demonstrate the effectiveness of the proposed approach. The implementation of score normalization shows significant improvement in the different rank rate accuracies of the datasets. himeur2023video Y. Himeur, S. Al-Maadeed, H. Kheddar, N. Al-Maadeed, K. Abualsaud, A. Mohamed, and T. Khattab, “Video surveillance using deep transfer learning and deep domain adaptation: Towards better generalization,” Engineering Applications of Artificial Intelligence, vol. 119, p. 105698, 2023. liu2021prgcn H. Liu, Z. Xiao, B. Fan, H. Zeng, Y. Zhang, and G. Jiang, “Prgcn: Probability prediction with graph convolutional network for person re-identification,” Neurocomputing, vol. 423, pp. 57–70, 2021. himeur2022deep Y. Himeur, S. Al-Maadeed, N. Almadeed, K. Abualsaud, A. Mohamed, T. Khattab, and O. Elharrouss, “Deep visual social distancing monitoring to combat covid-19: A comprehensive survey,” Sustainable cities and society, p. 104064, 2022. chouchane2023new A. Chouchane, M. Bessaoudi, E. Boutellaa, and A. Ouamane, "A new multidimensional discriminant representation for robust person re-identification," Pattern Analysis and Applications, pp. 1-14, 2023. elharrouss2021panoptic O. Elharrouss, S. Al-Maadeed, N. Subramanian, N. Ottakath, N. Almaadeed, and Y. Himeur, “Panoptic segmentation: A review,” arXiv preprint arXiv:2111.10250, 2021. himeur2023face Y. Himeur, S. Al-Maadeed, I. Varlamis, N. Al-Maadeed, K. Abualsaud, and A. Mohamed, “Face mask detection in smart cities using deep and transfer learning: lessons learned from the covid-19 pandemic,” Systems, vol. 11, no. 2, p. 107, 2023. prates2019kernel R. Prates and W. R. Schwartz, “Kernel cross-view collaborative representation based classification for person re-identification,” Journal of Visual Communication and Image Representation, vol. 58, pp.
304–315, 2019. gou2018systematic M. Gou, Z. Wu, A. Rates-Borras, O. Camps, R. J. Radke et al., “A systematic evaluation and benchmark for person re-identification: Features, metrics, and datasets,” IEEE transactions on pattern analysis and machine intelligence, vol. 41, no. 3, pp. 523–536, 2018. matsukawa2016hierarchical T. Matsukawa, T. Okabe, E. Suzuki, and Y. Sato, “Hierarchical gaussian descriptor for person re-identification,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 1363–1372. liao2015person S. Liao, Y. Hu, X. Zhu, and S. Z. Li, “Person re-identification by local maximal occurrence representation and metric learning,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 2197–2206. gray2008viewpoint D. Gray and H. Tao, “Viewpoint invariant pedestrian recognition with an ensemble of localized features,” in Computer Vision–ECCV 2008: 10th European Conference on Computer Vision, Marseille, France, October 12-18, 2008, Proceedings, Part I 10.1em plus 0.5em minus 0.4emSpringer, 2008, pp. 262–275. chouchane2022face A. Chouchane, M. Bessaoudi, A. Ouamane, and O. Laouadi, “Face Kinship Verification Based VGG16 and new Gabor Wavelet Features,” in 2022 5th International Symposium on Informatics and its Applications (ISIA), IEEE, 2022, pp. 1–6. himeur2018robust Y. Himeur and K. A. Sadi, “Robust video copy detection based on ring decomposition based binarized statistical image features and invariant color descriptor (rbsif-icd),” Multimedia Tools and Applications, vol. 77, pp. 17 309–17 331, 2018. prosser2008multi B. J. Prosser, S. Gong, and T. Xiang, “Multi-camera matching using bi-directional cumulative brightness transfer functions.” in BMVC, vol. 8.1em plus 0.5em minus 0.4emCiteseer, 2008, pp. 164–1. su2017attributes C. Su, S. Zhang, F. Yang, G. Zhang, Q. Tian, W. Gao, and L. S. Davis, “Attributes driven tracklet-to-tracklet person re-identification using latent prototypes space mapping,” Pattern Recognition, vol. 66, pp. 4–15, 2017. su2017multi C. Su, F. Yang, S. Zhang, Q. Tian, L. S. Davis, and W. Gao, “Multi-task learning with low rank attribute embedding for multi-camera person re-identification,” IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 5, pp. 1167–1181, 2017. wei2018person L. Wei, S. Zhang, W. Gao, and Q. Tian, “Person transfer gan to bridge domain gap for person re-identification,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 79–88. chen2021person G. Chen, T. Gu, J. Lu, J.-A. Bao, and J. Zhou, “Person re-identification via attention pyramid,” IEEE Transactions on Image Processing, vol. 30, pp. 7663–7676, 2021. yang2018incremental Z. Yang, Y. Wu, J. Cheng, S. Peng, L. Wang, and D. Tao, “Incremental xqda metric learning for person reidentification,” in 2018 IEEE International Conference on Information and Automation (ICIA).1em plus 0.5em minus 0.4emIEEE, 2018, pp. 433–438. javed2008modeling O. Javed, K. Shafique, Z. Rasheed, and M. Shah, “Modeling inter-camera space–time and appearance relationships for tracking across non-overlapping views,” Computer Vision and Image Understanding, vol. 109, no. 2, pp. 146–162, 2008. zhang2016learning L. Zhang, T. Xiang, and S. Gong, “Learning a discriminative null space for person re-identification,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 1239–1248. gavini2023thermal Y. Gavini, A. Agarwal, and B. 
Mehtre, “Thermal to visual person re-identification using collaborative metric learning based on maximum margin matrix factorization,” Pattern Recognition, vol. 134, p. 109069, 2023. chang2018multi X. Chang, T. M. Hospedales, and T. Xiang, “Multi-level factorisation net for person re-identification,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 2109–2118. song2019generalizable J. Song, Y. Yang, Y.-Z. Song, T. Xiang, and T. M. Hospedales, “Generalizable person re-identification by domain-invariant mapping network,” in Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, 2019, pp. 719–728. liu2021end Y. Liu, W. Zhou, J. Liu, G.-J. Qi, Q. Tian, and H. Li, “An end-to-end foreground-aware network for person re-identification,” IEEE Transactions on Image Processing, vol. 30, pp. 2060–2071, 2021. ming2022deep Z. Ming, M. Zhu, X. Wang, J. Zhu, J. Cheng, C. Gao, Y. Yang, and X. Wei, “Deep learning-based person re-identification methods: A survey and outlook of recent works,” Image and Vision Computing, vol. 119, p. 104394, 2022. loy2010time C. C. Loy, T. Xiang, and S. Gong, “Time-delayed correlation analysis for multi-camera activity understanding,” International Journal of Computer Vision, vol. 90, pp. 106–129, 2010. li2013human W. Li, R. Zhao, and X. Wang, “Human reidentification with transferred metric learning,” in Computer Vision–ACCV 2012: 11th Asian Conference on Computer Vision, Daejeon, Korea, November 5-9, 2012, Revised Selected Papers, Part I 11.1em plus 0.5em minus 0.4emSpringer, 2013, pp. 31–44. roth2014mahalanobis P. M. Roth, M. Hirzer, M. Köstinger, C. Beleznai, and H. Bischof, “Mahalanobis distance learning for person re-identification,” Person re-identification, pp. 247–267, 2014. koestinger2012large M. Koestinger, M. Hirzer, P. Wohlhart, P. M. Roth, and H. Bischof, “Large scale metric learning from equivalence constraints,” in 2012 IEEE conference on computer vision and pattern recognition.1em plus 0.5em minus 0.4emIEEE, 2012, pp. 2288–2295. nautsch2019privacy A. Nautsch, J. Patino, A. Treiber, T. Stafylakis, P. Mizera, M. Todisco, T. Schneider, and N. Evans, “Privacy-preserving speaker recognition with cohort score normalisation,” arXiv preprint arXiv:1907.03454, 2019. martinel2015kernelized Martinel, N., Micheloni, C., & Foresti, G. L. (2015). Kernelized saliency-based person re-identification through multiple metric learning. IEEE Transactions on Image Processing, 24(12), 5645-5658. sun2016person Sun, C., Wang, D., & Lu, H. (2016). Person re-identification via distance metric learning with latent variables. IEEE Transactions on Image Processing, 26(1), 23-34. bak2017one-shot Bak, S., & Carr, P. (2017). One-shot metric learning for person re-identification. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2990-2999). wang2018transferable Wang, J., Zhu, X., Gong, S., & Li, W. (2018). Transferable joint attribute-identity deep learning for unsupervised person re-identification. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2275-2284). ma2020new Ma, W., Han, H., Kong, Y., & Zhang, Y. (2020). A new date-balanced method based on adaptive asymmetric and diversity regularization in person re-identification. International Journal of Pattern Recognition and Artificial Intelligence, 34(09). prasad2022spatio-temporal Prasad, M. V., & Balakrishnan, R. (2022). 
Spatio-Temporal association rule based deep annotation-free clustering (STAR-DAC) for unsupervised person re-identification. Pattern Recognition, 122. ouamane2017efficient A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751-2762, 2017. bessaoudi2021multilinear M. Bessaoudi, A. Chouchane, A. Ouamane, and E. Boutellaa, "Multilinear subspace learning using handcrafted and deep features for face kinship verification in the wild," Applied Intelligence, vol. 51, pp. 3534-3547, 2021.
http://arxiv.org/abs/2307.02761v1
20230706035419
Cross-Modal Content Inference and Feature Enrichment for Cold-Start Recommendation
[ "Haokai Ma", "Zhuang Qi", "Xinxin Dong", "Xiangxian Li", "Yuze Zheng", "Xiangxu Mengand Lei Meng" ]
cs.IR
[ "cs.IR" ]
Cross-Modal Content Inference and Feature Enrichment for Cold-Start Recommendation ^* indicates corresponding author. Haokai Ma1, Zhuang Qi1, Xinxin Dong1, Xiangxian Li1, Yuze Zheng1, Xiangxu Meng1 and Lei Meng^*1,2 1 School of Software, Shandong University, Jinan, China 2 Shandong Research Institute of Industrial Technology, Jinan, China Email: {mahaokai, z_qi, dongxinxin, xiangxian_lee, zhengyuze}@mail.sdu.edu.cn, {mxx, lmeng}@sdu.edu.cn August 1, 2023 Multimedia recommendation aims to fuse the multi-modal information of items for feature enrichment to improve the recommendation performance. However, existing methods typically introduce multi-modal information on top of collaborative information to improve the overall recommendation precision, while failing to explore its cold-start recommendation performance. Meanwhile, these methods are only applicable when such multi-modal data is available. To address this problem, this paper proposes a recommendation framework, named Cross-modal Content Inference and Feature Enrichment Recommendation (CIERec), which exploits multi-modal information to improve cold-start recommendation performance. Specifically, CIERec first introduces image annotation as the privileged information to help guide the mapping of unified features from the visual space to the semantic space in the training phase. CIERec then enriches the content representation with the fusion of collaborative, visual, and cross-modal inferred representations, so as to improve its cold-start recommendation performance. Experimental results on two real-world datasets show that the content representations learned by CIERec are able to achieve superior cold-start recommendation performance over existing visually-aware recommendation algorithms. More importantly, CIERec can consistently achieve significant improvements with different conventional visually-aware backbones, which verifies its universality and effectiveness. Cross-Modal, Content Inference, Feature Enrichment, Cold-Start Recommendation § INTRODUCTION Personalized recommendation aims to capture the users' preferences and provide them with appropriate items <cit.>. However, the cold-start problem is a ubiquitous challenge in recommendation, which leads to bias in the training of traditional collaborative filtering recommenders and visually-aware recommenders (e.g., Matrix Factorization (MF) <cit.> and Visual Bayesian Personalized Ranking (VBPR) <cit.>), namely the randomization problem and the popularity bias problem. Content-based recommendation, which leverages multi-modal information to improve cold-start performance, has attracted considerable attention <cit.>. However, the data in web and mobile applications is diverse and unstable <cit.>, and the performance of existing algorithms is generally limited by the learning of heterogeneous multi-modal representations, which are not always available. Therefore, robust cross-modal inference methods for heterogeneous-modality cold-start recommendation are urgently needed.
Existing cold-start recommendation methods can be categorized into two classes based on the targets they aim at. One class is the information-based methods, which generally incorporate heterogeneous content features from the user's or item's auxiliary information <cit.> into the cold-start recommendation. Another class is the representation-based methods, which are able to learn fine-grained representations by dynamically optimizing the recommendation models <cit.>, thus improving the cold-start recommendation performance. These features are mainly obtained by encoding the items' multi-modal content information, and the accuracy and stability of these algorithms decrease when such heterogeneous information is not available. To address these issues, we present CIERec, a novel Cross-modal Content Inference and Feature Enrichment Cold-start Recommendation framework. It introduces image annotation as the privileged information to guide the mapping of the content representation from the visual space to the semantic space, and enriches it by fusing the collaborative, visual, and inferred semantic features to improve its cold-start performance. Figure <ref> shows the main framework of CIERec. Specifically, we first model the collaborative interactions in the collaborative representation learning (CRL) module. We then extract the item's uniform embedding with a traditional visual encoder (e.g., ResNet18) in the source-modal representation learning (SMRL) module. Next, to deal with the difficulty of mapping heterogeneous features, we propose a novel cross-modal inference strategy to map content representations from the visual space to the semantic space with the guidance of prior knowledge in the cross-modal representation learning (CMRL) module, whose design follows a learning paradigm called learning using privileged information (LUPI) <cit.>. Finally, a multi-modal fusion method is used to integrate the user embeddings, the item embeddings, the visual features, and the inferred semantic features for the final recommendation in the multi-modal representation fusion (MRF) module. As observed, CIERec is able to alleviate the absence of heterogeneous modalities in cold-start recommendation through cross-modal inference, thereby improving the stability and accuracy of existing visually-aware cold-start recommendation models. To validate the effectiveness of the proposed CIERec, we conduct a performance comparison on the pre-processed Allrecipes <cit.> and Amazon_CDs <cit.> datasets against previous advanced visually-aware recommendation algorithms. We also conduct extensive experiments, including an ablation study to verify the effectiveness of the different components in the proposed CIERec, and a case study to visualize the distributions of heterogeneous modal representations. The experimental results demonstrate that the advantages of CIERec are: (1) the privileged information can help to model the mapping relationship of content information between the visual and semantic spaces. (2) The dual-gating module in the CMRL module can optimize the cross-modal inference process of heterogeneous representations. (3) CIERec can further improve its cold-start performance with the fusion of multi-modal representations, allowing the downstream task to focus on the user's preference information from multiple perspectives.
Furthermore, CIERec achieves consistent and significant improvements over the existing visually-aware recommendation methods with different benchmarks (e.g., MF<cit.> and VBPR<cit.>), proving its effectiveness and universality. Overall, the main contributions are summarized as follows: * This paper proposes a novel Cross-modal Content Inference and Feature Enrichment Recommendation framework, CIERec, which can improve the cold-start performance of existing visually-aware recommendation methods through cross-modal semantic inference and multi-modal representation fusion. * We design a content-enriched cross-modal inference strategy to model the heterogeneous representation inference process and extract the fine-grained multi-modal representations based on the leverage of privilege information. It can be applied as a cross-modal inference module for the general task of multi-modal representation learning. * We conduct extensive experiments to verify the effectiveness of the proposed CIERec on multiple datasets with different visually-aware recommendation models. The experimental results demonstrate CIERec's effectiveness, universality, and stability. § RELATED WORK §.§ Multi-modal Learning in Recommendation As a common multimedia analysis method, multi-modal learning has been widely used in the fields of computer vision <cit.>, data mining <cit.>, information retrieval <cit.>, and recommendation <cit.>. In recommendation, multimedia recommendation aims to incorporate the items' content information to model the fine-grained representation and thereby improve its performance. These methods can be classified as multi-source information embedding methods <cit.> and heterogeneous information inference methods <cit.>, according to the available information. The multi-source information embedding methods refer to the introduction of multiple sources of heterogeneous information from the heterogeneous information networks <cit.> and the knowledge graph <cit.> as the content information to complement the collaborative information, which can decrease the dependence of the recommendation model on the interaction information. Heterogeneous information inference methods aim to learn the mapping function from one modality space to another modality space, thus making the content representation extracted from the same item has similar distributions. These above methods typically perform cross-modal inference based on the user's interactive information <cit.> or multi-modal information <cit.> of items, thus reducing the gap between the user's interest manifold and the visual semantic manifold. However, existing cross-modal content inference methods are rarely demonstrated against their cold-start recommendation performance, and lack the in-depth analysis of the effectiveness of the source-modal information and the inferred information for recommendation. §.§ Visually-aware Recommendation With the development of image analysis techniques in the field of CV <cit.> and NLP <cit.>, a series of works have validated that the incorporation of visual features of items can improve recommendation performance <cit.>. Inspired by the significant performance of Convolution Neural Network (CNN) in image classification, most visually-aware recommendation methods incorporate pre-extracted features as the item's embedding into the recommendation <cit.>. 
However, previous visually-aware recommendation methods with pre-extracted features typically utilize the items' visual features related to their category information while ignoring the users' personalized interests. Therefore, recent visually-aware recommendation methods extract visual features with an end-to-end approach to jointly optimize image encoders and recommenders. Deepstyle <cit.> replaces collaborative representations with the real-time extraction of the items' visual features to capture the user's multi-dimensional preferences. PiNet <cit.> learns the visual features that contain both visual, semantic, and collaborative information by jointly optimizing the image encoder. However, these algorithms only take into account the visual information of the items, leading to the suboptimal performance. §.§ Cold-Start Recommendation The cold start problem is common in recommendation due to the imbalance of interactions, where a few users or items dominate the interactions in the dataset. It may lead to the 'popular bias' <cit.>, whereby active users perform better than cold users and popular items are more likely to be recommended than cold items. Existing cold-start recommendation methods can be categorized into two categories by their targets, namely information-based methods that introduce auxiliary data (e.g., heterogeneous information <cit.> and social information <cit.>) into recommendation, and the representation-based methods that modify recommendation models dynamically through meta-learning <cit.>, transfer learning <cit.>, contrastive learning <cit.> and other techniques. However, these algorithms have difficulty in achieving idealized recommendations performance when heterogeneous information is not available for certain modalities. To address this problem, existing studies usually deal with interaction information specifically. DropoutNet <cit.> adopts the dropout mechanism for the observed interactions to improve the model generalization capability. A series of studies <cit.> attempt to leverage the generalization capabilities of adversarial learning, targeting at augmenting the unobserved interactions. § TECHNIQUE §.§ Framework Overview CIERec introduces a cross-modal inference and feature enrichment framework to enable the enrichment of content representations through feature-level cross-modal inference. As shown in Figure <ref>, CIERec can be divided into four main modules, including the Collaborative Representation Learning (CRL) module, the Source-Modal Representation Learning (SMRL) module, the Cross-Modal Representation Learning (CMRL) module, and the Multi-modal Representation Fusion (MRF) module, as illustrated in the following sections. §.§ Collaborative Representation Learning (CRL) As shown in Figure <ref>, CRL learns the collaborative representation p_u and the item collaborative representation q_i from the embedding matrix based on the randomly sampled user u and item i, which is similar to the traditional collaborative filtering method. The learning process can be expressed as: p_u=emb_u( u ) q_i=emb_i( i ) where emb_u(.) denotes the embedding matrix of users, and emb_i(.) denotes the embedding matrix of items. §.§ Source-Modal Representation Learning (SMRL) CIERec generates visual feature v_i in the SMRL module to complement the collaborative representation q_i. Specifically, as illustrated in Figure <ref>, the SMRL module extracts the uniform embedding ℰ(f) ↦e_i from the image f through the visual encoder ℰ(.). 
Inspired by the dual-gating mechanism <cit.>, we develop a novel multimodal-gating function to generate the visual feature 𝒯_v(e_i) ↦v_i by mapping with the visual-aware gate 𝒯_v, and constrains its optimization with the gradient-regularization gate ℛ. The overall computational equation can be expressed as: v_i=𝒯_v(ℰ( f )) There is significant heterogeneity between the image regions focused by visual feature v_i and semantic feature s_i, which is difficult to be directly mapped from the uniform embedding e_i. Therefore, CIERec proposes a visual-aware gate to control the delivery of visual information. The visual-aware gate 𝒯_v(.) of CIERec introduces a self-learning gate vector g_v and the user representation p_u in addition to the uniform embedding e_i, and maps them to the visual feature space, which is defined as follows: v_i=𝒯_v(e_i)=MLP^v(e_i ⊙δ(p_ue_ig_v)/max(e_i ⊙δ(p_ue_ig_v)_2, ϵ)) where denotes the concatenate operator, ⊙ denotes the dot product operator, ._2 denotes the ℓ_2 regularization function, ϵ = 1e^-12 denotes a small value to avoid division by zero, δ(.) denotes a fully connected layer and MLP^v(.) denotes a fully connected layer followed by the LeakyReLU activation function. We further describe the gradient-regularization gate ℛ in Sec. <ref>. §.§ Cross-modal Representation Learning (CMRL) In addition to generating the visual feature v_i, we firstly performs cross-modal semantic inference based on the inference gate 𝒯_i to learn the inference feature c_i from the unified embedding e_i; And then we conducts semantic fusion based on the prior knowledge, that is, fuses the semantic knowledge s of image annotation t with the cross-modal inference feature c_i to generate semantic feature s_i via the semantic-aware gate 𝒯_s, which in turn complements the multimodal representation. §.§.§ Semantic Inference In preliminary experiments, we found that the incorporation of semantic information can significantly improve the recommendation performance of traditional visually-aware recommendation methods. Existing multimedia recommendation methods generally rely on the modal richness of the recommendation datasets, which performs poorly when semantic information is unavailable. To address this issue, the CMRL module generates the cross-modal inference feature 𝒯_i(e_i)↦c_i by mapping from the unified embedding e_i with the inference gate 𝒯_i(.) and uniformly optimizes it with the gradient-regularization gate ℛ. It is defined as follows: c_i=e_i ⊙δ(e_i g_s)/max(e_i ⊙δ(e_i g_s)_2, ϵ) where g_s denotes the learnable inference gate vector, denotes the concatenate operator, ._2 denotes the ℓ_2 regularization function, ϵ = 1e^-12 denotes a small value to avoid division by zero and c_i denotes the inferred feature. §.§.§ Semantic fusion for privileged information The inferred feature c_i derived from the uniform embedding e_i contains a certain amount of redundant information, leading to bias in the subsequent recommendation process. To address this issue, the CMRL module also introduces the prior semantic information to improve the model's ability to represent semantic modalities. The CMRL module uses LSTM <cit.> as the building block for the semantic element encoder to capture the relationship between the prior semantic representation and the inferred representation. The process of extracting the semantic embedding 𝒯_s(c_i)↦s_i can be represented as: s_i = LSTM(c_i ŝ_i) where ŝ_i denotes the privileged knowledge encoded from the image annotation, LSTM(.) 
denotes the the LSTM operator, s_i denotes the calculated semantic embedding. §.§.§ Gradient-regularization Gate CIERec fuses the gradients of the two heterogeneous representations with the gradient-regularization gate ℛ, aiming to allow the visual encoder to trade off the visual feature v_i and the inferred semantic feature s_i from the unified embedding e_i during the backpropagation. The visual gradient-regularization gate is implemented based on the deep Q-network (DQN) <cit.> approach. It selects the action s^( t) via a classifier MLP_DQN^v to map the 5D state vector of s^( t-1) at batch t, where MLP_DQN^v denotes a fully-connected layer followed by a Softmax activation function. And then it penalizes this action by the J_v^(t)=exp(-ℒ_v^(t+1)) obtained by the feedback from the recommendation model, with the loss function as following: ℒ_dqn=-logσ(s_max^(t)· J_v^(t)) where s_max^(t) is the probablity of selecting s^(t), and σ(.) is the Sigmoid Function. Similarly, we have J_s^(t)=exp(-ℒ_s^(t+1)) for the semantic gradient-regularization gate. §.§ Multimodal Representation Fusion (MRF) Module To demonstrate the universality of CIERec, we take the traditional collaborative filtering algorithms MF and VBPR as the backbone models for this module, which represent user and product as an embedding vector, with the core idea of estimating the user's preference as the inner product of their embedding vectors<cit.>. In addition to the user representation p_u and the collaborative representation q_i, MRF module also receives the visual embedding v_i and the semantic embedding s_i for recommendation. The fusion operation of multi-modal representations can be represented as: f_i=𝒯_f(q_i, v_i, s_i)=MLP(q_i v_i s_i) where 𝒯_f denotes the fusion gate of the multi-modal representations, MLP(.) denotes a fully connected layer with a LeakyReLU activation function, and f_idenotes the multi-modal fused representation of item i. The process of calculating preference scores for the BPR-MF and VBPR algorithms is defined as following: ŷ_MF=α+β_u+β_i+p_u^⊤f_i ŷ_VBPR=α+β_u+β_i+β_c+p_u^⊤f_i+a_u^⊤c_i where α denotes the self-learning global bias, β_u, β_i and β_c denote the self-learning bias of user u, item i and the content c respectively, a_u denotes the implicit representation, ŷ_MF and ŷ_VBPR denote the preference score of MF and VBPR. § EXPERIMENTS §.§ Experimental Setup §.§.§ Datasets We conduct experiments on two real-world recommendation datasets, where Allrecipes was constructed by Gao <cit.> and Amazon_CDs was extracted from the original Amazon dataset <cit.> to meet the needs for this task. To verify the effectiveness of CIERec in alleviating the cold-start problems in recommendation, we divided Allrecipes and Amazon_CDs into two parts with the interaction boundary of 3, i.e., the cold-start set with few interactions and the warm-start set with a relatively large number of interactions. Table <ref> summarizes the statistics of the datasets. Both datasets follow the data partitioning method used in Allrecipes, where the train set includes the earliest 60% of the interaction data for each user, the test set includes the latest 30% of the interaction data, and the remaining 10% is used as the valid set. §.§.§ Evaluation measures Following the classical cold-start recommendation works <cit.>, two widely-used metrics are adopted to evaluate the performance of the cold start recommendation, including Recall (R) and Normalized discounted cumulative gain (NDCG)<cit.>. 
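For reference, the two ranking metrics can be computed as in the following sketch; the ranked candidate list and the set of positive items per user are assumed inputs, and this is generic evaluation code rather than the CIERec implementation.

import numpy as np

def recall_at_k(ranked, positives, k):
    # Fraction of the user's positive items that appear in the top-k of the ranking.
    hits = len(set(ranked[:k]) & set(positives))
    return hits / max(len(positives), 1)

def ndcg_at_k(ranked, positives, k):
    dcg = sum(1.0 / np.log2(rank + 2) for rank, item in enumerate(ranked[:k]) if item in positives)
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(min(len(positives), k)))
    return dcg / idcg if idcg > 0 else 0.0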
Following <cit.>, we randomly select one negative sample for each positive sample in training, while in testing five hundred items are randomly selected as the negative samples (have no interaction with the user) from the dataset along with all positive items (have interactions with the user) to form the ranking candidate for each user. R@K and NDCG@K calculate the performance of positive samples in the Top-k ranking items for all sampled items. To alleviate the problem of randomness, we repeat the evaluation process five times and report the average value. §.§.§ Implementation details Based on the efficiency and performance of ResNet18 in recommendation <cit.> and prediction <cit.>, CIERec used it as the visual encoder to extract the uniform representations with the dimension of 512. The cross-modal recommendation model was optimized by Adagrad with the learning rate from 0.0001 to 0.5, and the DQN model was optimized by Adam with the learning rate from 0.00001 to 0.005. Both the batch size and the dimension of the CIERec were selected in 32, 64, 28, 256, and these optimizers are decayed proportionally for every four epochs, with the decay rate chosen from 0.1, 0.5. §.§.§ Baselines We compare CIERec with both multi-modal learning models and cold-start learning models: * MF <cit.> is a classical recommendation model to utilize the implicit feedback information with Bayesian Personalized Ranking. * VBPR <cit.> is a factorization model to incorporate pre-extracted visual features into recommendation. * MF(Image/Semantic) <cit.> is a variant of MF where we replace the collaborative embedding with visual or semantic feature. For fair comparisons, we use the same pre-extracted visual features and semantic feature for training, noted as MF(Image) and MF(Semantic). * VECF <cit.> capture the visual and textual features by a multi-modal attention network seamlessly. * HAFR-non-i <cit.> learns the user’s preference via the visual images, ingredients and the collaborative information of the interacted recipes. * PiNet <cit.> is a heterogeneous multi-task learning framework that learns visual features containing with semantic and collaborative information, which is also the baseline method for the proposed CIERec. * DropoutNet <cit.> learn a DNN-based latent model via the dropout mechanism based on the idea that cold start is equivalent to the missing data problem. * AMF <cit.> learns the effective collaborative feature via adding gradient-based perturbations to item embedding. * AMR <cit.> adds random-based perturbations and gradient-based perturbations to the pre-extracted visual features to model the effective visual information. * CLCRec <cit.> is a SOTA model in cold-start recommendation that optimizes the dependence between items' embedding and content information via contrastive learning. §.§ Performance Comparison In this section, we compare CIERec with the algorithm mentioned in section <ref> for performance comparison. For each algorithm, we have fine-tuned its hyper-parameters to obtain its best performance in the experiments. 
It can be observed from Table <ref> that: * MF <cit.> with content features achieves consistent improvements over MF on most datasets and metrics, whereas replacing the collaborative information only with content information from a single modality can lead to a decrease in cold-start performance (MF(Image) achieved a 4.9% decrease on Amazon_CDs and MF(Semantics) obtained a 1.9% decrease on Allrecipes), demonstrating the necessity of fine-grained processing of multi-modal information in cold-start recommendations. * PiNet <cit.> outperforms the traditional visually-aware recommendation methods (e.g., VECF <cit.> and HAFR-non-i <cit.>). This is mainly owing to the fact that it introduces semantic information in addition to visual information to regularize the learning process of the content representation. * CIERec outperforms existing multi-modal learning methods in all performance metrics, which validates that CIERec can bring significant and consistent improvement over existing recommendation methods by fusing the user representation, the collaborative representation, and the multi-modal content information obtained from cross-modal inference. * CIERec achieves higher cold-start improvements on the Amazon_CDs, which is due to its fewer classes of semantic elements. That is, the same semantic element category corresponds to more items, thus making the cross-modal inferred semantic information more representative. It also proves the importance of semantic information in cold-start recommendation. §.§ Ablation Study In addition to the overall performance comparison, we further explore the effectiveness of the combinations of components in CIERec. Specifically, we use Base, CI, TA, GR and PI to represent using the components of plain recommendation model, cross-modal inference, task aware gate, gradient regularization and privileged information respectively. The ablation results are shown in Table <ref> and we have the following findings: * For 'CI', that is, cross-modal inference without any constraints, the application of naive inference in recommendations leads to a significant degradation of its performance. This may explained by the fact that a large number of visual information is lost in the cross-modal transformation. * For 'TA' and 'GR', 'TA' facilitates the modeling of mapping relationships between heterogeneous modalities through task-aware gates in each modality; while 'GR' can help to learn the optimization direction of heterogeneous representations through gradient-regularization gates within the constraints of reinforcement learning. CIERec can further improve its performance by combining these components. * The 'PI' component is able to introduce the prior knowledge (e.g., the textual annotations of the image of the item) as the privileged information based on the existing components, which helps to enhance the model's ability to mine visual information. As such, it allows the original visual features to be fully utilized to achieve the best cold-start recommendation results. §.§ Case Study In this section, we attempt to investigate how the CIERec facilitates the learning process of the multi-modal representation in the embedding space. To this end, we randomly selected five users and items they interacted with to explore how these embeddings varied in different methods. As shown in Figure <ref>, the relevance of users and items is well reflected in the t-SNE space, namely, the more relevant representations are embedded in the more similar positions. 
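As an aside, a projection like the one in Figure <ref> can be produced with an off-the-shelf t-SNE implementation; the embedding matrices below are placeholders for the learned user and item representations, and the perplexity value is an arbitrary assumption.

from sklearn.manifold import TSNE
import numpy as np

emb = np.vstack([user_embeddings, item_embeddings])  # sampled users and their interacted items
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(emb)
# coords can then be scattered, coloring each item by the user it interacted with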
We found that naive cross-modal inference leads to a collapse of its training procedure, whereas the embedding representations learned by CIERec show a clear clustering effect, that is, points with the same color tend to form clusters. These observations demonstrate that CIERec effectively facilitates the learning of cross-modal representations by enriching content information, so that the representations of users and the items they interact with tend to lie close to each other, which may be one of the reasons for CIERec's superior performance. § CONCLUSION This paper proposes a novel cross-modal content inference and feature enrichment recommendation framework, CIERec, which performs cross-modal inference from the visual space to the semantic space based on the items' prior knowledge, and combines it with a multi-modal representation fusion method to balance the modeling of the heterogeneous representations derived from multi-modal information. Experimental results demonstrate that introducing cross-modal inferred information improves the items' representations from multiple perspectives, making CIERec superior to existing methods in cold-start recommendation. Future work will focus on two main directions. First, heterogeneous alignment techniques may help to model the mapping relationships between visual features and cross-modal inferred features at a multi-granularity level, leading to an increased information gain. Second, we will further improve the representational capability of CIERec by filtering noisy information in cross-modal inference with graph convolutional networks. § ACKNOWLEDGMENT This work is supported in part by the National Natural Science Foundation of China (Grant no. 62006141), the Excellent Youth Scholars Program of Shandong Province (Grant no. 2022HWYQ-048), and the TaiShan Scholars Program (Grant no. tsqn202211289).
http://arxiv.org/abs/2307.02157v1
20230705095808
Generative Job Recommendations with Large Language Model
[ "Zhi Zheng", "Zhaopeng Qiu", "Xiao Hu", "Likang Wu", "Hengshu Zhu", "Hui Xiong" ]
cs.IR
[ "cs.IR", "cs.CL" ]
Generative Job Recommendations with Large Language Model Zhi Zheng^1,2,†, Zhaopeng Qiu^1,†, Xiao Hu^1, Likang Wu^1,2, Hengshu Zhu^1*, Hui Xiong^3,4* ^†Equal Contribution. *Corresponding authors. ^1Career Science Lab, BOSS Zhipin. ^2University of Science and Technology of China, ^3The Thrust of Artificial Intelligence, The Hong Kong University of Science and Technology. ^4The Department of Computer Science and Engineering, The Hong Kong University of Science and Technology. [email protected], [email protected], [email protected], [email protected] August 1, 2023 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== The rapid development of online recruitment services has encouraged the utilization of recommender systems to streamline the job seeking process. Predominantly, current job recommendations deploy either collaborative filtering or person-job matching strategies. However, these models tend to operate as “black-box" systems and lack the capacity to offer explainable guidance to job seekers. Moreover, conventional matching-based recommendation methods are limited to retrieving and ranking existing jobs in the database, restricting their potential as comprehensive career AI advisors. To this end, here we present GIRL (GeneratIve job Recommendation based on Large language models), a novel approach inspired by recent advancements in the field of Large Language Models (LLMs). We initially employ a Supervised Fine-Tuning (SFT) strategy to instruct the LLM-based generator in crafting suitable Job Descriptions (JDs) based on the Curriculum Vitae (CV) of a job seeker. Moreover, we propose to train a model which can evaluate the matching degree between CVs and JDs as a reward model, and we use Proximal Policy Optimization (PPO)-based Reinforcement Learning (RL) method to further fine-tine the generator. This aligns the generator with recruiter feedback, tailoring the output to better meet employer preferences. In particular, GIRL serves as a job seeker-centric generative model, providing job suggestions without the need of a candidate set. This capability also enhances the performance of existing job recommendation models by supplementing job seeking features with generated content. With extensive experiments on a large-scale real-world dataset, we demonstrate the substantial effectiveness of our approach. We believe that GIRL introduces a paradigm-shifting approach to job recommendation systems, fostering a more personalized and comprehensive job-seeking experience. § INTRODUCTION Recent years have witnessed the rapid development of online recruitment. According to the report from The Insight Partners[https://www.theinsightpartners.com/reports/online-recruitment-market], the global online recruitment market size is expected to grow from $29.29 billion in 2021 to $47.31 billion by 2028. For these platforms, the Recommendation Systems (RS) which can provide valuable assistance to job seekers by recommending suitable jobs, serve as their core components. 
Alone this line, considerable research efforts have been made in building RS for online recruitment <cit.>. Indeed, existing studies mainly follow the collaborative filtering <cit.> or person-job matching paradigms <cit.>. However, in practical applications, these methods will encounter the following challenges. First, these methods primarily rely on end-to-end neural network models, which usually output the matching score directly given the information of a specific job seeker and a job. Nevertheless, these models suffer from poor explainability in the black-box neural network computations, which reduce the user trust, especially in scenarios like job seeking that have a significant impact on individuals. Second, most of the existing models are discriminative model, which are limited to retrieving and ranking existing jobs in the database, restricting their potential as comprehensive career AI advisors, i.e., generating a novel Job Description (JD) for a job seeker personally based on the Curriculum Vitae (CV). Last but not least, the existence of the considerable semantic gap between CVs and JDs has resulted in the underwhelming performance of traditional methods. To address the aforementioned challenges, in this paper, inspired by the recent progress in the field of Large Language Models (LLM), we propose a novel user-centered GeneratIve job Recommendation paradigm based on LLM called GIRL. As shown in Figure <ref>, different from traditional discriminative job recommendation methods which aim to predict a matching score given a specific job seeker and job, GIRL aims to directly generate a personalized JD for a specific job seeker based on the remarkable generation ability of LLMs. We propose two ways to leverage the generated job descriptions. Firstly, the job descriptions generated by the LLMs represent the job that the model deems most suitable for the job seeker. Therefore, these descriptions can provide job seekers with references for their job seeking and career development planning. Meanwhile, it can improve the explainability of the whole recommender system. Secondly, the generated results can be used to bridge the semantic gap between CVs and JDs, and further enhance the performance of traditional discriminative models. However, it is non-trival to train an LLM for job recommendation. On one hand, given the significant differences between recommendation tasks and NLP tasks, the LLM needs to incorporate more domain-specific knowledge <cit.>. On the other hand, to better assist with downstream recommendation tasks, the LLM needs to further learn from historical interaction records. Therefore, inspired by the InstructGPT <cit.>, in this paper, we propose a three-step training methodology as: * Supervised Fine-Tuning (SFT): This step aims to teach the LLM how to generate an appropriate JD based on a given CV. Specifically, we build a dataset consisting of previously matched CV-JD pairs, and use the intruction-tuning method to train the LLM generator. * Reward Model Training (RMT): In this step, we build a dataset consists of matched and mismatched CV-JD pairs, which contains the recruiter feedback for the job seekers. Then, we train a reward model to distinguish the matched CV-JD pairs from mismatched ones to mimic the real-world recruiter. 
* Reinforcement Learning from Recruiter Feedback (RLRF): In step three, we leverage Proximal Policy Optimization (PPO) based reinforcement learning method to further align the LLM to the recruiter preference captured by the reward model, making the LLM generation consider not only the preference of the job seeker but also the practical market demands. Finally, the major contribution of this article can be summarized as follows: * To the best of our knowledge, this is the first piece of work which proposes an LLM-based generative job recommendation paradigm. * We propose a novel three-step training methodology with reinforcement learning from recruiter feedback to train a job description generator. * We evaluated the quality of the generated results with the help of GhatGPT[https://chat.openai.com/], and we further conducted extensive experiments on real-world dataset. § RELATED WORK In this section, we will summarize the related works in the following three categories, respectively job recommendation, large language models, and LLMs for recommendation. §.§ Job Recommendation In the era of burgeoning online job platforms, a variety of novel job recommendation techniques have been introduced. These approaches can be primarily divided into two categories, respectively text-based methods and behavior-based methods. For text-based methods, PJFNN <cit.> formulated this task as a joint representation learning problem and utilized CNN-based models to get the representation of job seekers and recruiters, while APJFNN <cit.> enhanced the above model by taking the abilities of job seekers into consideration and used attention mechanisms for hierarchical ability-aware representation. IPJF <cit.> conceived an interpretable model to match job seekers and recruiters in a multi-task learning framework. For behavior-based methods, DPGNN <cit.> proposed to build an interaction graph between job seekers and recruiters to model the directed interactions. DPJF-MBS <cit.> proposed to utilize memory networks to get the representation of the multi-behavior sequences of different job seekers and recruiters. §.§ Large Language Models Large Language Models (LLMs) are language models consisting of a neural network with many parameters (tens of millions to even trillions), and trained on large quantities of unlabeled text using self-supervised learning or semi-supervised learning methods <cit.>. Large language models primarily rely on the Transformer <cit.> architecture, which has become the standard deep learning technique for Natural Language Processing (NLP). Existing LLMs can primarily be divided into two categories, respectively discriminative LLMs and generative LLMs. For discriminative LLMs, BERT <cit.> proposed a deep bidirectional transformer architecture, and further proposed a Masked Language Model (MLM) objective for model pre-training. Roberta <cit.> further refined the training process of BERT and achiever better performance. XLNet <cit.> leveraged the permutation of the sequence order, enabling it to learn the context of a word based on all the words before and after it in a sentence. For generative LLMs, GPT <cit.> proposed to improve language understanding by generative pre-training. During pre-training, the model learns to predict the next word in a sentence, without any specific task in mind. GPT-2 <cit.> and GPT-3 <cit.> further increased the model scale and achieved better performance. InstructGPT <cit.> further proposed to fine-tune the GPT model using reinforcement learning from human feedback. 
Inspired by the above studies, in this paper, we also use reinforcement learning to fine-tune the JD generator and we use BERT for text embedding. §.§ LLMs for Recommendation LLMs have recently gained significant attention in the domain of recommendation systems <cit.>. Generally, existing studies can be divided into two categories, respectively recommendation based on discriminative LLMs and generative LLMs. For the former category, U-BERT <cit.> proposed a novel pre-training and fine-tuning method to leverage BERT for recommendation tasks. BERT4Rec <cit.> proposed to utilize BERT-based deep bidirectional self-attention architecture to model user behavior sequences. For the latter category, some studies focus on utilizing the zero/few shot abilities of LLMs, and use the LLMs for recommendation by prompting without fine-tuning <cit.>. Moreover, some studies further fine-tune the LLMs, endeavoring to achieve better performance. For example, TALLRec <cit.> proposed to fine-tuned the LLMs by recommendation tuning, where the input is the historical sequence of users and the output is the "yes or no" feedback. InstructRec <cit.> designed 39 instruction templates and automatically generated a large amount of instruction data for instruction tuning. § PROBLEM DEFINITION Here we introduce the problem formulation of generative job recommendation and generation-enhanced job recommendation. Let 𝒮 and 𝒥 denote the entire job seeker set and job set. The feedback matrix between the job seekers and recruiters is denoted as 𝒵∈ℝ^N_s× N_j, where z_s,j=1 means both job seeker s and the recruiter of job j are satisfied with each other and this pair is matched, and z_s,j=0 means this pair is mismatched. N_s and N_j denote the numbers of the job seekers and jobs, respectively. Furthermore, each job seeker s has a corresponding CV which can be formatted as C=[w_1,…,w_l_s], where w_i is the i-th word in C and l_s is the length of C. Similarly, each job j has a corresponding JD which can be formatted as J=[v_1,…,v_l_j], where v_i is the i-th word in J and l_j is the length of J. Note that we omit some subscripts to facilitate the reading. Traditionally, the objective of discriminative job recommendation is to train a scoring model that can compute the matching score between a given job seeker s and a job j. However, this traditional paradigm can only recommend existing jobs for job seekers, which may not fulfill the needs of some job seekers. Therefore, in this paper, we propose a novel generative job recommendation paradigm which can be formulated as: [Generative Job Recommendation] Given a job seeker s with the corresponding C, the goal of generative job recommendation is to train a generator 𝒢, which can generate a suitable JD for this user, i.e., 𝒢 :C → J^'. In the aforementioned definitions, the generated J^' should has high quality and encompassing the most suitable job information for job seeker s, thereby providing meaningful guidance for s. Furthermore, in this paper, we propose that J^' can also serve as a synopsis of job seeker s, contributing auxiliary support to traditional recommendation tasks. Along this line, the generation-enhanced job recommendation can be formulated as: [Generation-Enhanced Job Recommendation] Given a job seeker s with the corresponding C, a job j with the corresponding J, and the generated J^', the goal of generation-enhanced job recommendation is to train a model ℳ, which can calculate the matching score between s and j, i.e., ℳ:C,J,J^'→ℝ. 
§ GENERATIVE RECOMMENDATION FRAMEWORK As shown in Figure <ref>, the generative recommendation framework is based on a large language model and consists of three training steps. Specifically, we first convert the JD recommendation task to the NLG format with the manually designed prompt template, and utilize supervised fine-tuning to make the LLM generator understand the recommendation task. Second, we train a reward model to learn the recruiter feedback and capture the interaction information. Third, we utilize reinforcement learning to further align the generator with the recruiting market. We will address the details of all steps in the following sub-sections. §.§ Supervised Fine-tuning In this training step, we propose to train the generator in the supervised fine-tuning way based on the matched CV-JD pairs. First, given a specific job seeker s with the CV C and a job j with the JD J, we first build a prompt T to describe the generation task as shown in Figure <ref>. To maintain consistency with the training data, the original prompt is in Chinese. However, for better illustration, we have translated it to English in Figure <ref>. The prompt template consists of the following four parts: * Role: the green words, which aims to keep consistence with the instruction-tuning data of the our used backbone. * Instruction: the black words, which describes the generation task via the human natural language. * Input: the blue words, which contains the information of the job seeker. * Output: the black words, which is the generation target, i.e., the JD text. Note that this part will be blank in the inference phase. Then, we propose to train the generator with the casual language model pre-training task. Specifically, given the generator 𝒢, the CV C, and the prompt template T, we optimize the negative log-likelihood for generating the JD J as: ℒ_sft = - log(C|J,T,𝒢) = -∑_i=1^|l_j|log(v_i|v_<i,C,T,𝒢), where l_j is the length of J, v_i is the i-th word in J. (C|J,T,𝒢) denotes the generation probability for J of the generator model 𝒢 given the job seeker feature C and the prompt template T. §.§ Reward Model Training In this training step, our aim is to train a reward model 𝒰 that can predict the matching score between a CV-JD pair, i.e., 𝒰: (C, J) →ℝ. The architecture of 𝒰 is similar to that of the generator model 𝒢, but it has a linear prediction head that outputs scalar values. Additionally, the parameter scale of 𝒰 is smaller than that of 𝒢. To train the reward model 𝒰, we collect pairwise training data and construct a ranking task. Typically, a job seeker applies for multiple jobs simultaneously and receives different feedback (matched or rejected) from recruiters. Therefore, we select a matched job J^+ and a mismatched job J^- for each CV C to construct comparable pairs. We then optimize the pairwise ranking loss to train 𝒰 as follows: ℒ_rmt = logσ (𝒰(C, J^+) - 𝒰(C, J^-)), where σ denotes the Sigmoid activation function. This approach enables the reward model to capture the market preferences for job seekers based on the feedback from recruiters. Moreover, we can use the reward model to predict the matching score between a job seeker and a generated job description, thereby verifying the suitability of the recommendation in advance. §.§ Reinforcement Learning In this stage, we aim to improve the alignment between the generator 𝒢 and the recruiter feedback acquired by the reward model 𝒰 through reinforcement learning. 
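Before detailing the RL stage, the two training objectives introduced above can be written compactly as in the following PyTorch-style sketch. Tensor shapes and the sign convention are assumptions: both quantities are written as losses to be minimized, so the pairwise ranking objective appears as the negative log-sigmoid of the reward difference.

import torch.nn.functional as F

def sft_loss(jd_logits, jd_token_ids):
    # Negative log-likelihood of the ground-truth JD tokens given prompt T and CV C (L_sft).
    # jd_logits: (len_J, vocab_size), jd_token_ids: (len_J,)
    return F.cross_entropy(jd_logits, jd_token_ids)

def reward_model_loss(r_pos, r_neg):
    # r_pos = U(C, J+), r_neg = U(C, J-); pairwise ranking loss (L_rmt).
    return -F.logsigmoid(r_pos - r_neg).mean()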
Drawing inspiration from InstructGPT <cit.>, we employ the Proximal Policy Optimization (PPO) <cit.> algorithm to facilitate this alignment process. Specifically, we first utilize the generator 𝒢 and the reward model 𝒰 obtained from the first two training steps to initialize the actor-critic model, comprising the actor model 𝒢^a and critic model 𝒰^c. Next, we collect a RL training dataset, which only consists of the CVs of job seekers which do not appear in the first two stages. Then, we use the PPO algorithm to train the actor-critic model based on these CVs while freezing the generator and the reward model. Finally, we use the actor as the new generator model. The entire optimization algorithm is an iterative process and the ensuing sub-sections expound on the details of an iteration. §.§.§ Job Description Generation We first samples some CVs 𝒞^r from the training data and then leverage the actor model 𝒢^a to generate JDs 𝒥^r={𝒢^a(C)| C ∈𝒞^r} for these samples. For simplicity, we take the i-th sample 𝒞^r_i with its corresponding generated JD 𝒥^r_i as the example to illustrate the following calculation steps. §.§.§ KL Divergence Computation To ensure the convergence and stability of the RL algorithm, the PPO algorithm uses KL divergence to limit the range of changes in the policy during each update. The KL divergence is a metric for measuring the difference between the current policy, i.e., the actor model 𝒢^a, and the old policy, i.e., 𝒢. Specifically, given the pair of CV 𝒞^r_i and generated JD 𝒥^r_i, we can estimate the KL divergence as follows: KL(𝒞^r_i, 𝒥^r_i) = 1/|𝒥^r_i|∑_v_j∈𝒥^r_i ( CE(v_i,j) - 1 -logCE(v_i,j) ), CE(v_j) = (v_j|v_i,<j,C,𝒢^a)/(v_j|v_i,<j,C,𝒢), where v_i,j and v_i,<j denote the j-th token and first (j-1) tokens of the JD 𝒥^r_i, respectively. §.§.§ Reward and Advantages Computation The final reward consists of two different parts, respectively the matching score predicted by the reward model and the KL divergence, and can be fomulated as follows: r_i = 𝒰(𝒞^r_i, 𝒥^r_i) - λKL(𝒞^r_i, 𝒥^r_i), where λ is the coefficient of the KL divergence. Furthermore, the advantage value is the difference between the reward and the value of the input CV estimated by the critic model as: a_i = r_i - 𝒰^c(𝒞^r_i, _). §.§.§ Actor Model Optimization After obtaining the above values, we can finally calculate the policy loss, i.e., the loss of actor model. Here, we use the importance sampling and clip tricks to estimate the loss as: ℒ_am = 1/|𝒥^r_i|∑_v_j∈𝒥^r_imin( CE(v_i,j)a_i, clip(CE(v_i,j))a_i ), clip(CE(v_i,j)) = 1 + ϵ, CE(v_i,j) > 1 + ϵ CE(v_i,j), 1 - ϵ < CE(v_i,j) < 1 + ϵ 1 - ϵ. CE(v_i,j) < 1 - ϵ §.§.§ Critic Model Optimization The critic model loss is the MSE loss between the reward value and the estimated state value as: ℒ_cm = (r_i - 𝒰^c(𝒞^r_i, _))^2 The above five steps constitute one iteration of the optimization process. Through minimizing the actor loss and critic loss, we can optimize two models. In the RL process, the reward model and the generator model are froze. Moreover, the whole RL process are shown in Algorithm <ref>. § GENERATION-ENHANCED RECOMMENDATION FRAMEWORK In Section <ref> we introduced the paradigm of generative job recommendation. As we mentioned before, given a CV corresponding to a job seeker, we can utilize LLMs to generate the most suitable JD, thereby providing career development guidance for this job seeker. 
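Before turning to this enhanced paradigm, the RL quantities of the previous section can be summarized in a short sketch. Shapes, sign conventions, and the per-token KL estimator are simplifications of the equations above rather than the exact implementation; logp_new and logp_old denote per-token log-probabilities of the generated JD under the actor and the frozen generator, respectively.

import torch

def ppo_step_losses(logp_new, logp_old, matching_reward, value, eps=0.2, lam=0.1):
    ratio = (logp_new - logp_old).exp()                # CE(v_j) in the actor loss
    kl = (ratio - 1.0 - (logp_new - logp_old)).mean()  # estimated KL divergence
    r = matching_reward - lam * kl                     # reward r_i with KL penalty
    adv = (r - value).detach()                         # advantage a_i = r_i - critic value
    actor_loss = -torch.min(ratio * adv,
                            ratio.clamp(1 - eps, 1 + eps) * adv).mean()
    critic_loss = (r.detach() - value) ** 2            # MSE between reward and estimated value
    return actor_loss, critic_loss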
Furthermore, in this paper, we propose that we can actually regard the above paradigm as a feature extraction process, which can further enhance the performance of traditional discriminative recommendation methods. In this section, we delve into the details of how to leverage the generated results provided by LLMs for enhanced job recommendation. §.§ Basic Recommendation Model As shown in Figure <ref> (a), in the paradigm of discriminative recommendation based on text matching, given a job seeker s with the corresponding CV C, and a job j with the corresponding JD J, we first need to get the text embedding based on a text encoder as: 𝐜 = Encoder(C),  𝐣 = Encoder(J). Then, we can get the matching score by feeding the above embedding vectors to a predictor. In this paper, we studied two different predictors, respectively MLP predictor as: score = MLP([𝐜;𝐣]), where [;] is the concatenation of two vectors, and dot predictor as follows: score = 𝐜·𝐣, where · calculates the dot product of two vectors. §.§ Enhanced Recommendation Model As shown in Figure <ref> (c), in the paradigm of generation-enhanced job recommendation, we can get the generated JD J^' based on the CV C and the LLM-based generator 𝒢. Then, we can also get the text embedding of J^' as: 𝐣^' = Encoder(J^'). After that, we propose two different ways to utilize 𝐣^' for enhancing the recommendation task corresponding to different predictor. Specifically, for the MLP predictor, we propose to calculate the matching score as: score = MLP([𝐜;𝐣;𝐣^']). For the dot predictor, we first get the enhanced job seeker embedding as: 𝐜^' = MLP([𝐜;𝐣^']). Then, we can calculate the dot product as: score = 𝐜^'·𝐣. § EXPERIMENTS In this section, we first describe the dataset used in this paper. Then, we propose to evaluate our approch from two different perspectives. We further present some discussions and case studies on generative job recommendation. The experiments are mainly designed to answer the research questions as follows: * RQ1: Can our LLM-based generator generate high-quality JDs? * RQ2: Can the generated results enhance the performance of discriminative job recommendation? * RQ3: Whether the specially designed training methods for the LLM effective? * RQ4: How do different settings influence the effectiveness of our model? §.§ Data Description and Preprocessing The real-world datasets used in this paper comes from one of the largest online recruitment platform in China. In our datasets, each job seeker and recruiter is de-linked from the production system by securely hashing with one-time salt mapping. In this platform, each job seeker has a Curriculum Vitae (CV), encompassing their basic demographic information, educational background, and work experience among other details. Meanwhile, each job is associated with a Job Description (JD), detailing the responsibilities of the role, the compensation package, and so on. A variety of interaction types may occur between job seekers and jobs, such as browsing, applying, and matched. In this paper, we categorize these interactions into two major types, respectively matched and mismatched. To train a large language model for generative job recommendation, we built the following three dataset: * Supervised Fine-tuning Dataset: This dataset contains multiple matched CV-JD pairs, ranging from Apr. 1, 2023, to Apr. 30, 2023. * Reward Model Training Dataset: This dataset contains multiple matched and mismatched CV-JD pairs, ranging from May. 1, 2023 to May. 7, 2023. 
* Reinforcement Learning Dataset: This dataset contains CVs only, ranging from May. 8, 2023, to May. 10, 2023. Furthermore, to evaluate whether the generated results can enhance the performance of traditional discriminative models, we built the following dataset: * Enhanced Recommendation Dataset: This dataset contains multiple matched and mismatched CV-JD pairs, ranging from May. 8, 2023 to May. 31, 2023. Detailed statistics of the above datasets are shown in Table <ref>. §.§ Evaluation and Baselines In this paper, we propose to evaluate the effectiveness of our GIRL approach from the following two perspectives. Firstly, with the assistance of ChatGPT, we evaluated the quality of the generated results from semantic perspective. Secondly, we evaluated whether the generated results can enhance the performance of discriminative recommendation. For generation quality evaluation, we first selected several baseline methods to compare with our method as: * GIRL: This is the method proposed in this paper which utilized both SFT and RL for fine-tuning. * GIRL-SFT: This method is a simplified variant GIRL which only utilized SFT for fine-tuning. * Other LLMs: BELLE-7b <cit.>, BLOOMZ-7b <cit.>, LLAMA-7b <cit.>. Furthermore, we propose to utilize ChatGPT as the evaluator to compare the generation quality of these methods. Specifically, we first input the CV and two different JDs generated by two different methods into the prompt. We then request ChatGPT to evaluate the results from the following three different perspectives: * Level of details: Whether the generated JD contains enough necessary information about the job. * Relevance: Whether the generated JD is suitable for the job seeker. * Conciseness: Whether the generated JD is fluid and has high readability. The detailed prompt template <cit.> for generation quality is shown in Figure <ref>, from which we can find that the output results of ChatGPT can be divided into three categories, respectively “Win", “Tie", and “Lose". Based on the output results, given the dataset for generation quality evaluation, we selected “Win Rate (Win)", “Tie Rate (Tie)", and “Lose Rate (Lose)", which is obtained by calculating the proportion of the above three results, as three different evaluation metrics. Note that we use boot strapping <cit.> strategy to avoid the position bias when using ChatGPT as the ranker. Furthermore, we define “Advantage (Adv.)", which is the difference between “Win Rate" and “Lose Rate", as another evaluation metrics to reflect the relative improvement. For evaluating the effectiveness of the generated results for recommendation enhancement, we selected several baseline methods to compare with our method as: * Base: This method is a traditional two-tower text matching model as shown in Figure <ref> (a). We chose BERT <cit.> as the text encoder for getting the CV and JD embedding. * GIRL-SFT: As shown in Figure <ref> (c), this method uses the generated JDs for recommendation enhancement. Only SFT is used for fine-tuning the LLM. * GIRL: This method uses the generated JDs for recommendation enhancement. Both SFT and RL are used for fine-tuning the LLM. Note that as we mentioned is Section <ref>, we proposed two different methods for the predictor, respectively MLP and Dot. We will test the performance of different models with these two different predictors. We selected AUC and LogLoss as the evaluation metric for the enhanced recommendation task. 
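For clarity, the generation-quality metrics reduce to simple proportions over the evaluator's verdicts; the sketch below assumes the ChatGPT judgements for one comparison have already been collected as a list of strings.

def summarize_judgements(judgements):
    # judgements: list of "Win" / "Tie" / "Lose" verdicts for our method against one baseline
    n = len(judgements)
    win, tie, lose = (judgements.count(v) / n for v in ("Win", "Tie", "Lose"))
    return {"Win": win, "Tie": tie, "Lose": lose, "Adv.": win - lose}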
§.§ Performance of Generation Quality (RQ1,RQ3) To validate the quality of the JDs generated by our model, we first built a evaluation set with 200 different CVs which do not appear in other dataset. Then, we compared GIRL with all the baseline models on this dataset, and the results are shown in Table <ref>. From the results, we can get the following observations: * The performance of the BELLE model significantly surpasses that of LLaMA and BLOOMZ. This underlines that instruction-tuning with instructions on Chinese datasets can substantially enhance the quality of the outputs in Chinese. * Both GIRL and GIRL-SFT outperform all the baseline methods, emphasizing the necessity of instruction tuning on domain-specific data. * GIRL exceeds GIRL-SFT in performance, demonstrating that reinforcement learning can better align the results generated by the LLMs with human preferences, thereby improving the quality of generated results. §.§ Performance of Enhanced Recommendaion (RQ2,RQ3) To demonstrate the effectiveness of the generation results for enhancing the discriminative recommendation task, we compare GIRL with all the baseline methods, and the results are shown in Table <ref>. From the results, we can get the following observations: * Both GIRL and GIRL-SFT outperform the Base model, demonstrating that the JDs generated by fine-tuned LLMs can effectively enhance the performance of discriminative job recommendation. * GIRL surpasses GIRL-SFT on all the evaluation metrics. The rationale behind this is that through the reward model training stage, our reward model encapsulates extensive real-world experiences. By incorporating this knowledge into the LLMs through reinforcement learning, the generated JDs are enabled to capture job-seeker traits precisely and align with the preferences of recruiters better. §.§ Discussion on Generation Number (RQ4) In Section <ref> we studied how to utilize the generated JD for enhancing discriminative recommendation, where we focus on utilizing a single JD. Indeed, owing to the inherent randomness in the text generation process, given a specific CV, the LLM is capable of generating multiple distinct JDs. In this section, we will explore how to utilize multiple generated JDs and discuss the influence of the number of JDs on the model performance. Specifically, given multiple JDs, we first get the text embedding of each JD by Equation <ref>. Then, we use mean pooling to fuse these JD embedding, and calculate the matching score following Equation <ref>. The results are shown in Figure <ref>. Note that we employed only 75% of the data in Section <ref> to accelerate the computation process. From the results, we can find that as the number of generated JDs increases, the model performance initially improves before subsequently declining. This suggests that moderately increasing the number of generated JDs can further enhance model performance. However, a larger number of JDs also implies a substantial increase in computational cost. Moreover, the performance of GIRL surpasses that of GIRL-SFT in most cases, which once again affirms the superiority of the RL-based fine-tuning method proposed in this paper. §.§ Discussion on Cold Start (RQ4) In this section, we will explore the performance of different models under cold-start condition on the discriminative job recommendation task. Specifically, cold start condition refers to recommending jobs for job seekers who have not appeared in the training set. The results are shown in Table <ref>. 
Compared with Table <ref>, we can find that the performance improvement of our models is more pronounced under cold-start conditions. This indicates that the JDs generated by LLMs are particularly effective in helping discriminative recommendation models under cold-start conditions. §.§ Case Study In this section, we conduct a case study of the results generated by different models for the same CV, shown in Figure <ref>. From the results, we can find that the vanilla BELLE model without fine-tuning fails to generate JDs in a standard format, and the generated JDs present vague descriptions of job-related skills and requirements, providing inadequate guidance for job seekers. Moreover, after being trained through reinforcement learning, the GIRL model generates results that are more standardized in format, more detailed and comprehensive in content, and better aligned with the individual circumstances of job seekers. These results demonstrate the effectiveness of the three-stage training method proposed in this paper. § CONCLUSION Reflecting on the recent advancements in the field of Large Language Models, this study presented a novel generative job recommendation paradigm named GeneratIve job Recommendation based on Large language models (GIRL). Specifically, we first utilized supervised fine-tuning to guide the LLM-based generator in creating an appropriate job description given a specific curriculum vitae. Subsequently, we developed a reward model based on feedback from recruiters, and then applied a proximal policy optimization based reinforcement learning method to align the generator with recruiter preferences. Furthermore, we proposed to enhance the job seeker features with the generated results, aiming to improve the performance of the discriminative job recommendation model. A series of experiments conducted on a real-world dataset from a large-scale online recruitment platform provided substantial evidence of the effectiveness of our proposed approach.
http://arxiv.org/abs/2307.00286v1
20230701094759
CMA-ES for Post Hoc Ensembling in AutoML: A Great Success and Salvageable Failure
[ "Lennart Purucker", "Joeran Beel" ]
cs.LG
[ "cs.LG", "cs.NE", "I.2.6; I.5.1" ]
CMA-ES for Post Hoc Ensembling in AutoML: A Great Success and Salvageable Failure
Lennart Purucker, Joeran Beel
================================================================================
Many state-of-the-art automated machine learning (AutoML) systems use greedy ensemble selection (GES) by Caruana et al. (2004) to ensemble models found during model selection post hoc. Thereby, they boost predictive performance, likely following Auto-Sklearn 1's insight that alternatives, like stacking or gradient-free numerical optimization, overfit. Overfitting in Auto-Sklearn 1 is much more likely than in other AutoML systems because it uses only low-quality validation data for post hoc ensembling. Therefore, we were motivated to analyze whether Auto-Sklearn 1's insight holds true for systems with higher-quality validation data. Consequently, we compared the performance of covariance matrix adaptation evolution strategy (CMA-ES), state-of-the-art gradient-free numerical optimization, to GES on the 71 classification datasets from the AutoML benchmark for AutoGluon. We found that Auto-Sklearn's insight depends on the chosen metric. For the metric ROC AUC, CMA-ES overfits drastically and is outperformed by GES – statistically significantly for multi-class classification. For the metric balanced accuracy, CMA-ES does not overfit and outperforms GES significantly. Motivated by the successful application of CMA-ES for balanced accuracy, we explored methods to stop CMA-ES from overfitting for ROC AUC. We propose a method to normalize the weights produced by CMA-ES, inspired by GES, that avoids overfitting for CMA-ES and makes CMA-ES perform better than or similar to GES for ROC AUC. § INTRODUCTION Auto-Sklearn <cit.> was the first automated machine learning (AutoML) system to discover that building an ensemble of models found during model selection is possible in an efficient manner and superior in predictive performance to the single best model. Afterwards, several other AutoML systems also implemented post hoc ensembling: AutoGluon <cit.>, Auto-Pytorch <cit.>, MLJAR <cit.>, and H2O AutoML <cit.>. Besides H2O AutoML, all of these systems implemented greedy ensemble selection (GES) <cit.>, a greedy search for a weight vector to aggregate the predictions of base models. In AutoML systems, GES is trained using the base models' predictions on the validation data, which are computed while evaluating a base model during model selection. The frequent usage of GES likely follows Auto-Sklearn's reported insight that alternatives like stacking <cit.> or gradient-free numerical optimization overfit and are more costly than GES. Auto-Sklearn 1, by default, only has limited validation data for post hoc ensembling, that is, a 33% hold-out split of the training data. We deem this to be low-quality validation data because, depending on the dataset, 33% are not enough instances to avoid overfitting while training GES.
Hence, we were motivated to analyze if Auto-Sklearn's insight also holds true for an AutoML system with higher-quality validation data, e.g., AutoGluon with n-repeated k-fold cross-validation. Moreover, we were motivated to focus on gradient-free numerical optimization instead of stacking. Stacking is generally well-known in ensembling for machine learning and is used by H2O AutoML for post hoc ensembling. In contrast, gradient-free numerical optimization has not been used so far. Thus, we compare the performance of GES to covariance matrix adaptation evolution strategy (CMA-ES) <cit.>, state-of-the-art gradient-free numerical optimization <cit.>. We chose CMA-ES due to its widespread usage in numerical optimization <cit.>. Moreover, CMA-ES's update is efficient and therefore enables fast training in post hoc ensembling; similar to GES's training. Furthermore, the function evaluation in post hoc ensembling, i.e., calculating the score of aggregated predictions, takes seconds <cit.>. Thus, we disregarded Bayesian optimization, which is appropriate for tasks with expensive function evaluation such as hyperparameter optimization <cit.>. In this study, we aim to boost the predictive performance as much as possible with post hoc ensembling. Note that GES selects a small ensemble, while methods like gradient-free numerical optimization or stacking produce an ensemble that includes all base models. Thus, the inference time and size of the final model are larger for the latter two than for GES. Our first contribution is an application of CMA-ES for AutoGluon on the 71 classification datasets from the AutoML Benchmark <cit.>. Thereby, we show that Auto-Sklearn's insight w.r.t. overfitting of gradient-free numerical optimization depends on the chosen metric. We contradict the insight for the metric balanced accuracy by showing that CMA-ES statistically significantly outperforms GES. And we confirm the insight for the metric ROC AUC by showing that GES outperforms CMA-ES due to overfitting. As a follow-up, our second contribution is a method to avoid overfitting for CMA-ES. Motivated by the successful application of CMA-ES for balanced accuracy, we explored methods to stop CMA-ES from overfitting to salvage CMA-ES for ROC AUC. We identified the chosen method to normalize the ensemble's prediction probabilities as the key to avoiding overfitting. With this knowledge, we propose a novel normalization method, inspired by GES's implicit constraints during optimization, that makes CMA-ES perform as well as GES and avoids overfitting for ROC AUC. Interestingly, our normalization method also enables us to keep the size of the ensemble small. Our code and data are publicly available: see Appendix <ref> for details. § RELATED WORK Besides Auto-Sklearn 1's <cit.> statement related to post hoc ensembling, only H2O AutoML names theoretical guarantees <cit.> as the reason for using stacking, but does not comment on GES. In general, details about post hoc ensembling in publications about AutoML systems were only a short comment without experiments or a reference to Auto-Sklearn 1 <cit.>. We are only aware of the work by <cit.>, which proposed a first benchmark and framework for post hoc ensembling. The results in their Appendix also showed that GES can outperform stacking. To the best of our knowledge, no other work on post hoc ensembling for AutoML exists. 
CMA-ES was previously applied to machine learning problems like hyperparameter optimization <cit.> or feature weighting <cit.>[To the best of our knowledge, this work is not available in English. We read a machine-translated version.] . However, we found no work that used CMA-ES to directly optimizes the weights of an ensemble. Likewise, we have found no work that applies normalization to the solutions produced by CMA-ES nor comparable machine learning methods that apply normalization in this way to combat overfitting. § APPLICATION OF CMA-ES FOR POST HOC ENSEMBLING In our application of CMA-ES for post hoc ensembling, we search for an optimal weight vector W = (w_1, ..., w_m) to aggregate pool P of m base models that minimizes a user-defined loss L(P, W). Thereby, L aggregates the predictions of models in P by taking the W-weighted arithmetic mean. Hence, we employ CMA-ES, as implemented in pycma <cit.>, with default values to find W by minimizing L. Following GES's first iteration, we set the initial solution x_0 to be the weight vector representing the single best model, that is, the weight for the single best model is one while all other models are weighted zero. The initial standard deviation is 0.2 following the intuition that a good weight vector might be close to the initial solution and that the granularity of weights can be small, e.g., between 0 and 1, like in GES. §.§ Experiments: CMA-ES vs. GES We compared CMA-ES to GES w.r.t. ROC AUC following the AutoML Benchmark <cit.>. ROC AUC requires prediction probabilities and is independent of a decision threshold that would transform prediction probabilities into labels. We use macro average one-vs-rest ROC AUC for multiclass. We complemented the comparison by also evaluating w.r.t. balanced accuracy, which requires predicted labels and, thus, depends on a decision threshold. For a threshold-dependent metric, the prediction of CMA-ES is, in our application, the class with the highest value after aggregating the prediction probabilities with the W-weighted mean. For a threshold-independent metric, we transform the aggregated probabilities for each instance using the softmax function, i.e., we treat the aggregated probabilities of each class as decision functions and take their softmax. Otherwise, the aggregated probabilities would not represent prediction probabilities, as W can have negative or positive values of any granularity. To compare the ensembling methods, we obtained base models and their validation data with AutoGluon <cit.> for each fold of the 71 classification datasets from the AutoML benchmark (AMLB) <cit.> – for both metrics. Then, per fold, we trained the ensemble methods on the validation data, i.e., search for W, and scored them on validation and test. The final validation/test score of a method for a dataset is the average over the 10 folds. Following the AMLB, we ran AutoGluon for 4 hours with 8 cores (AMD EPYC 7452 CPU) and 32 GB of memory. We increased the memory for several datasets to 64 or 128 GB to avoid that insufficient memory made it impossible to produce multiple base models. In the end, AutoGluon produced between 2 and 24 base models, see Appendix <ref> for details per dataset and metric. We used the same resources and hardware to train and evaluate the ensemble methods. However, instead of training ensemble methods for 4 hours, we follow Auto-Sklearn's default and stop training GES after 50 iterations. This results in m*50 total evaluations of L by GES. 
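The setup above can be sketched with pycma roughly as follows. The arrays base_preds (validation prediction probabilities of the m base models), y_val, and best_idx (index of the single best model) are assumed to be given, and the loss shown covers only the binary ROC AUC case.

import numpy as np
import cma
from sklearn.metrics import roc_auc_score

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def loss(w, base_preds, y_val):
    agg = np.tensordot(w, base_preds, axes=1)         # W-weighted combination of base model probabilities
    return -roc_auc_score(y_val, softmax(agg)[:, 1])  # negate: CMA-ES minimizes

m = base_preds.shape[0]
x0 = np.zeros(m); x0[best_idx] = 1.0                  # initial solution: the single best model
es = cma.CMAEvolutionStrategy(x0, 0.2)                # initial standard deviation of 0.2
used = 0
while not es.stop() and used < 50 * m:                # evaluation budget for L (see text)
    candidates = es.ask()
    es.tell(candidates, [loss(np.asarray(w), base_preds, y_val) for w in candidates])
    used += len(candidates)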
Therefore, we terminated CMA-ES after m*50 evaluations of L. We included the single best base model (SingleBest) in the comparison as a baseline. To evaluate the statistical difference between the methods, we perform a Friedman test with a Nemenyi post hoc test (α = 0.05), following the AMLB. See Appendix <ref> for more details on the statistical tests. §.§ Results: CMA-ES vs. GES We split the results for binary and multi-class classification in all our evaluations following the AutoML Benchmark <cit.>. Figure <ref> shows the mean rank and the results of the statistical test with critical difference (CD) plots. The Friedman tests were significant in all our experiments. We observe that CMA-ES is statistically significantly better than GES for balanced accuracy but fails to perform similarly well for ROC AUC. To analyze the impact of overfitting on this outcome, we inspect the change of the mean rank of CMA-ES when switching from validation to test data for both metrics; see Table <ref>. A detailed overview for all methods can be found in Appendix <ref>. While the single best is always ranked last, GES overtakes CMA-ES when switching from validation to test data for ROC AUC. Notably, CMA-ES has a mean rank of almost 1 for validation data in 3 out of 4 cases. On validation data, GES is only competitive for multi-class ROC AUC, where it has a mean rank of 1.6. Nevertheless, GES has a larger distance to the single best on validation data for balanced accuracy than it has for test data, with a mean rank of ∼2 against the single best's ∼3. In summary, we conclude that Auto-Sklearn's insight w.r.t. overfitting does not generalize to an AutoML system with higher-quality validation data, i.e., AutoGluon, for balanced accuracy. In contrast, the insight holds for ROC AUC. Furthermore, we observe that CMA-ES is able to achieve peak performance for ROC AUC on validation data. § NORMALIZATION TO COMBAT OVERFITTING The results we just presented motivated us to salvage CMA-ES for ROC AUC. Due to its good performance for ROC AUC and its wide adoption by AutoML systems, we decided to analyze GES to determine how to avoid overfitting. As a result, we found two properties that inspired our approach to salvage CMA-ES for ROC AUC. This section describes why and how we use normalization to combat overfitting for a threshold-independent metric like ROC AUC. Since our approach is inspired by GES, we start with preliminaries regarding GES and its properties. §.§ Preliminaries Greedy ensemble selection with replacement <cit.> performs an iterative greedy search to build a list of (repeated) base models, the ensemble E, that minimizes a user-defined loss function. In each iteration, the base model that minimizes the loss when added to E is selected to be part of E. To produce predictions and evaluate any E, the (repeated) predictions of all base models in E are aggregated with the arithmetic mean. Taking the arithmetic mean over E assigns a higher weight to base models that exist multiple times in E. Hence, given E, we can compute a weight vector. Assuming we run GES for N iterations[ We always denote N as the number of the iteration the final E was found in. Depending on the implementation of GES, the final E does not need to be from the final iteration. ], then |E| = N and we compute the weight vector using: W^pDisc = [countIn(p_i, E)/N | p_i ∈ P ]. While analyzing GES, we found two properties of the weight vector W^pDisc that we believe to be essential for its performance. That is, W^pDisc is pseudo-discrete and sparse. 
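For illustration, the pseudo-discrete weight vector W^pDisc of the equation above can be computed from the selected ensemble E as follows (a hypothetical helper, not taken from the original implementation):

```python
import numpy as np

def ges_weight_vector(ensemble, n_base_models):
    """W^pDisc from GES's list E of selected base-model indices (with repetitions)."""
    n_iterations = len(ensemble)                             # |E| = N
    counts = np.bincount(ensemble, minlength=n_base_models)  # countIn(p_i, E)
    return counts / n_iterations                             # w_i = countIn(p_i, E) / N

# Example with m = 4 base models and N = 5 iterations:
print(ges_weight_vector([0, 2, 0, 3, 0], n_base_models=4))
# -> [0.6 0.  0.2 0.2]; sums to 1 and is pseudo-discrete with denominator 5
```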
Both properties are only implicitly respected by GES and were, to the best of our knowledge, never formally defined. Pseudo-Discrete We call W^pDisc pseudo-discrete because one can transform every weight vector produced by GES into a discrete count of how often a base model has been selected. This can be done by multiplying W^pDisc by N, reversing Equation <ref>. In fact, every weight vector produced by GES is in the set 𝒢 = {W' | W' ∈ H(N) and ∑_i=1^m w'_i = 1} with H(N) the m-fold Cartesian product of {0, 1/N, 2/N, …, 1}: H(N) = {0, 1/N, 2/N, … , 1}×⋯×{0, 1/N, 2/N, …, 1}. In other words, every weight w_i ∈ W^pDisc can be expressed as a non-negative fraction with denominator N, and the weight vector sums to 1. This follows from GES iteratively building a list of base models E and calculating the final weight vector with Equation <ref>. We would like to remark that this formulation of GES is very similar to Mallows' model average (MMA) <cit.> and that GES might share MMA's asymptotic guarantees for regression if L is the squared error <cit.>. Sparse W^pDisc is sparse, that is, a weight vector where many models are assigned zero weight – as intended for an ensemble selection approach <cit.>. To the best of our knowledge, a guarantee for sparseness was never formally introduced or proven for (greedy) ensemble selection, cf. <cit.>. Here, we briefly provide an argument for why it is likely that GES produces a sparse weight vector: GES only adds new base models to E if they reduce the loss. Hence, for W^pDisc to become dense, at least m iterations would be required in which adding a new base model reduces the loss more than adding an existing base model again (increasing its weight). As a result, for appropriate values of m and N, it is unlikely that enough iterations happened such that each model was able to reduce the loss once. Auto-Sklearn, for example, uses m = 50 and N = 50 by default. Moreover, once E becomes large, the changes to the aggregated prediction that are induced by adding a new base model are minimal. Thus, it also becomes less likely that the changes result in a different loss. Additionally, the larger E is, the more likely GES has reached a (local) optimum, which cannot be improved upon by adding new models. In short, the iterative greedy approach of adding models to E likely makes W^pDisc sparse. §.§ Motivation Since all solutions produced by GES are pseudo-discrete and (likely) sparse, and since GES does not seem to overfit, we hypothesized that both properties might help to avoid overfitting. Note that the properties can be seen as constraints. They constrain the weight vector to be sparse, sum to 1, and contain only values such that 0 ≤ w_i ≤ 1. In contrast, our application of CMA-ES uses no such constraints. By default, CMA-ES produces a continuous and dense vector which does not need to sum to 1 and may contain negative or positive values of any granularity. Thus, our first idea was to constrain the optimization process of CMA-ES such that it would produce results that match the constraints of GES. However, we found that once the same constraints are introduced, CMA-ES often violates the constraints, making CMA-ES inefficient and often leading to an endless loop due to rejection sampling. In other words, we were not able to make CMA-ES produce solution vectors that fulfill all constraints of GES. 
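As a small illustration of the two properties (constraints) discussed above, a hypothetical check of whether a weight vector lies in 𝒢 (pseudo-discrete with denominator N, entries in [0, 1], summing to 1) and how sparse it is could look as follows:

```python
import numpy as np

def is_in_G(weights, n_iterations, tol=1e-9):
    """True if every weight is a multiple of 1/N within [0, 1] and the vector sums to 1."""
    w = np.asarray(weights, dtype=float)
    counts = w * n_iterations
    pseudo_discrete = np.all(np.abs(counts - np.round(counts)) < tol)
    in_unit_interval = np.all((w >= -tol) & (w <= 1 + tol))
    sums_to_one = abs(w.sum() - 1.0) < tol
    return bool(pseudo_discrete and in_unit_interval and sums_to_one)

def sparsity(weights):
    """Fraction of base models that receive zero weight."""
    return float(np.mean(np.asarray(weights) == 0))

print(is_in_G([0.6, 0.0, 0.2, 0.2], n_iterations=5))    # True
print(is_in_G([0.55, 0.05, 0.2, 0.2], n_iterations=5))  # False (not multiples of 1/5)
print(sparsity([0.6, 0.0, 0.2, 0.2]))                    # 0.25
```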
In general, constraining CMA-ES is also not trivial <cit.>, and we leave more sophisticated approaches to constrain CMA-ES for post hoc ensembling, like methods based on repair-and-inject or penalization <cit.> or with relaxed constraints, to future work. Instead of constraining the optimization process of CMA-ES, we moved to adding the constraints directly to the weight vector when it is evaluated, following a concept observed in GES. That is, we observed that while the constraints of GES are an implicit result of the algorithm as defined by <cit.>, they manifest explicitly only when one computes the weight vector with Equation <ref>. The optimization loop of GES, i.e., iteratively building E, does not explicitly consider these constraints, but only greedily minimizes a user-defined loss. In other words, the optimizer is only implicitly constrained by applying constraints during the computation of the weight vector, before evaluating the vector's performance. In detail, every time GES computes the loss for an ensemble E, it first transforms E into W^pDisc using Equation <ref>. Thereby, it applies the constraints that the resulting weight vector must sum to 1, be sparse, and satisfy 0 ≤ w_i ≤ 1. Then, L(P, W^pDisc) is returned as the loss of E. At this point, it becomes clear that changing Equation <ref> leads to different constraints; the loss of E could change without touching the optimization loop of GES. As a result, we were motivated to apply the same concept to CMA-ES by normalizing the weight vector before we aggregate the predictions of the base models. Thus, we change the loss associated with a weight vector proposed by CMA-ES outside of its optimization process. In contrast, our application in Section <ref> normalized the aggregated predictions for ROC AUC using softmax – we normalized after aggregation. Now, however, we propose to normalize before aggregation as in GES. In turn, this also changes the optimization process of CMA-ES, e.g., the parameter update, because a weight vector might have a different loss depending on whether we normalize before or after aggregation. §.§ Normalization Methods We propose three distinct normalization methods. Two of the proposed methods are based on the concept of GES, and the last of these tries to simulate Equation <ref> fully. 1) Softmax (CMA-ES-Softmax) Initially, we propose a simple alternative to our previous usage of CMA-ES by moving the (non-linear) softmax before the aggregation. That is, we normalize the weight vector W by taking its softmax: for a weight w_i ∈ W, we calculate w_i^s = exp(w_i)/∑_j=1^m exp(w_j), resulting in W^s = (w_1^s,..., w_m^s) with ∑_j=1^m w^s_j = 1 and 0 ≤ w_j^s ≤ 1 for w_j^s ∈ W^s. 2) Softmax & Implicit GES Normalization (CMA-ES-ImplicitGES) Next, we propose to re-normalize W^s with the aim of producing an equivalent to a pseudo-discrete weight vector W^pDisc, simulating GES's 𝒢 (see Equation <ref>). Therefore, we round each value of W^s to the nearest fraction with denominator N_hyp, producing a rounding-discrete weight vector W^rDisc. Here, N_hyp represents the number of hypothetical iterations for a simulated 𝒢. We set N_hyp = 50, similar to GES. We produce W^rDisc = (w_1^rDisc, ..., w_m^rDisc) by multiplying each w_i^s by N_hyp and rounding each element to the nearest integer afterwards, rounding up when the fractional part is larger than 0.5. Therefore, we first compute the integer vector R = (r_1, ..., r_m) using r_i = ⌊ w_i^s * N_hyp⌉. 
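The first two normalization methods can be sketched as follows (our own illustrative code; the fallback for an all-zero rounding result is an assumption, and NumPy's round-half-to-even differs from strict round-half-up only at exact ties):

```python
import numpy as np
from scipy.special import softmax

def normalize_softmax(weights):
    """CMA-ES-Softmax: softmax over the raw weight vector proposed by CMA-ES."""
    return softmax(np.asarray(weights, dtype=float))

def normalize_implicit_ges(weights, n_hyp=50):
    """CMA-ES-ImplicitGES: round the softmaxed weights to the nearest fraction
    with denominator n_hyp and re-normalize (the W^rDisc computation)."""
    w_s = normalize_softmax(weights)
    repetitions = np.round(w_s * n_hyp)      # r_i = round(w_i^s * n_hyp)
    if repetitions.sum() == 0:               # assumed fallback for the degenerate case
        return w_s
    return repetitions / repetitions.sum()   # sums to 1; zero entries trim base models
```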
Note that R can be thought of as a vector of repetitions where r_i denotes how often a model has been repeated in a hypothetical list of repeated base models E_hyp. That is, E_hyp is connected to W^rDisc as an E is to its W^pDisc. Hence, we can compute W^rDisc using R, paralleling Equation <ref>: W^rDisc = [r_i/∑_j=1^m r_j | r_i ∈ R ]. W^rDisc sums to 1, and each element is between 0 and 1. Interestingly, we found that this approach also implicitly trims base models, as the nearest fraction can be 0/N_hyp, such that the method assigns zero weight to base models in these cases. 3) Softmax & Explicit GES Normalization (CMA-ES-ExplicitGES) Finally, we propose to explicitly trim base models and perfect the simulation of Equation <ref>. We can explicitly trim base models based on N_hyp. We found that a weight w_j^s is set to zero by rounding if w_j^s*N_hyp ≤ 0.5. If we reformulate the inequality to w_j^s ≤ 0.5*1/N_hyp, we see that this parallels GES, where the number of iterations determines the minimal weight a model can be assigned, i.e., 1/N. Furthermore, we found that CMA-ES-ImplicitGES does not simulate GES sufficiently. We observed that rounding may result in ∑_j=1^m r_j ≠ N_hyp. That is, the total number of repetitions in R does not necessarily match the number of simulated iterations, i.e., the (hypothetical) length of E_hyp. However, R and E_hyp were supposed to relate to W^rDisc in the same way that an E relates to its W^pDisc. Yet for GES, it holds that |E| = N, while |E_hyp| ≠ N_hyp can happen in CMA-ES-ImplicitGES. Considering both observations, we implemented the third method, shown in Algorithm <ref>. First, we compute W^s and trim any base model with a weight smaller than 0.5/N_hyp (Line <ref>). If this sets all weights to zero, we fall back to an unweighted average (Line <ref>). Second, we round to the nearest integer, producing R' (Line <ref>). Next, we set R” = R' and modify R” to achieve ∑_j=1^m r”_j = N_hyp. We want to keep the distribution of R” as close as possible to the distribution of R'. Hence, we keep the relative distances between the individual elements in R' and R” similar. If ∑_j=1^m r'_j > N_hyp, we decrement elements in R” by 1 until ∑_j=1^m r”_j = N_hyp (Line <ref>). We decrement in order from the lowest to the highest valued element in R', that is, from the lowest to the highest weighted base model in the resulting weight vector, thus first trimming base models with only one repetition. Only if ∑_j=1^m r'_j - N_hyp is large enough do we also decrement the most repeated elements. Note that, due to rounding, in the worst case every element must be decremented once. If ∑_j=1^m r'_j < N_hyp, we have to increase the value of elements in R”. To keep the relative distances similar, we distribute the N_hyp - ∑_j=1^m r'_j increments equally among all non-zero elements in R” (Line <ref>). Finally, R” is transformed into a weight vector with Equation <ref>. §.§ Comparing Normalization Methods We use CMA-ES-ExplicitGES for the final evaluation below because it is the only approach that is in line with GES's concepts. Nevertheless, we provide here an additional comparison of the three normalization methods on the same data as used in Section <ref>. We run CMA-ES, as described above, with the three different normalization methods on the data from AutoGluon for ROC AUC. We ignore the threshold-dependent balanced accuracy because CMA-ES is not affected by overfitting for balanced accuracy. Besides normalization, the main difference to the application from Section <ref> is that we no longer apply softmax after aggregation when we apply normalization. 
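A minimal sketch of this explicit normalization, following our reading of Algorithm <ref> (helper names and the handling of corner cases such as exact ties are our assumptions):

```python
import numpy as np
from scipy.special import softmax

def normalize_explicit_ges(weights, n_hyp=50):
    """CMA-ES-ExplicitGES: trim, round, and adjust repetitions so they sum to n_hyp."""
    w_s = softmax(np.asarray(weights, dtype=float))
    w_s[w_s < 0.5 / n_hyp] = 0.0                   # explicitly trim low-weighted base models
    if np.all(w_s == 0.0):                          # degenerate case: unweighted average
        return np.full(len(w_s), 1.0 / len(w_s))
    reps = np.round(w_s * n_hyp).astype(int)        # R'
    diff = int(reps.sum()) - n_hyp
    if diff > 0:                                    # too many repetitions: decrement,
        order = [i for i in np.argsort(reps) if reps[i] > 0]
        for i in order[:diff]:                      # starting with the lowest-weighted models
            reps[i] -= 1
    elif diff < 0:                                  # too few repetitions: distribute the
        nonzero = np.flatnonzero(reps)              # missing increments over non-zero models
        add, rest = divmod(-diff, len(nonzero))
        reps[nonzero] += add
        reps[nonzero[:rest]] += 1
    return reps / reps.sum()                        # pseudo-discrete weight vector (sums to 1)
```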
First, a note regarding sparseness. On average, across all datasets for ROC AUC, ∼13.2 base models exist; see Appendix <ref> for each dataset's number. For comparison, we computed the average number of non-zero weighted base models for the ensemble methods; see Appendix <ref>. This shows that CMA-ES without normalization has an average ensemble size, that is, the number of non-zero weighted base models, of ∼12.9. In contrast, CMA-ES-ExplicitGES has an average ensemble size of ∼6.3 and CMA-ES-ImplicitGES of ∼5.4. For context, GES has an average ensemble size of ∼5.8. Hence, we conclude that CMA-ES produces dense weight vectors, while our normalization approaches are able to produce sparse vectors like GES. Next, we repeat the statistical test performed in Section <ref> for all normalization methods, CMA-ES, and the SingleBest; see Figure <ref> in Appendix <ref>. We observe that all normalization methods outperform CMA-ES and that CMA-ES-ExplicitGES ranks highest. Furthermore, the different normalization methods are not statistically significantly different from each other. Only CMA-ES-ExplicitGES is significantly different from CMA-ES for multi-class. § OVERALL EXPERIMENTS In our final evaluation, we mirror the experiments from Section <ref> and compare the SingleBest, GES, CMA-ES, and CMA-ES with normalization (CMA-ES-ExplicitGES). We additionally include stacking in our comparison because it is part of Auto-Sklearn’s insight and used by H2O AutoML. For our implementation of stacking <cit.>, we use a default Logistic Regression classifier from scikit-learn <cit.> as the stacking model. We adjusted the code such that we terminate after m*50 evaluations to make the method comparable to GES and CMA-ES. For CMA-ES, we stick to the implementation and default hyperparameters as described in Section <ref>. Besides the statistical tests, we also inspect the difference in the distributions of relative performance. Therefore, we follow the AutoML benchmark <cit.> and use normalized improvement to make the scores of methods comparable across different datasets. We scale the scores for a dataset such that -1 is equal to the score of a baseline, here the SingleBest, and 0 is equal to the score of the best method on the dataset. We employ a variant of normalized improvement because we ran into an edge case where the normalized improvement is undefined, namely if the difference between the single best model and the best method is 0. In our variant, for this edge case, we set everything as good as the SingleBest to -1 and penalize all methods worse than the baseline with -10, following a penalization approach like PAR10 from Algorithm Selection <cit.>. We provide a formalized definition of normalized improvement in Appendix <ref>. § OVERALL RESULTS Figure <ref> shows the results of the statistical tests and the mean rankings for the compared methods. The distribution of the relative performance is shown in Figure <ref>. Additionally, the performance per dataset is provided in Appendix <ref>. Overall Predictive Performance All post hoc ensembling methods always outperform the SingleBest on average, although not always statistically significantly – see Figure <ref>. Yet, post hoc ensembling can overfit and become worse for specific datasets, as indicated by the black dots left of the red bar and the number of outliers in square brackets in Figure <ref>. For balanced accuracy, we observe that CMA-ES significantly beats all other methods. 
Likewise, we observe that stacking and CMA-ES-ExplicitGES outperform GES by a small non-significant margin. For ROC AUC, we see that GES and CMA-ES-ExplicitGES outperform all other methods and differ only by a small non-significant margin. Both are also significantly different from the SingleBest, unlike stacking. Moreover, Figure <ref> shows us that CMA-ES-ExplicitGES has similar or better relative performance distributions than GES (see the medians and whiskers). Normalization to Combat Overfitting See Table <ref> to inspect overfitting for CMA-ES-ExplicitGES. See Appendix <ref> for an overview of the rank change for all compared methods. In general, CMA-ES-ExplicitGES's mean rank, compared to GES and the SingleBest, changes only minimally between validation and test data. This shows that it overfits less than CMA-ES (compare to Table <ref>, Section <ref>). As before, the SingleBest is always the worst-ranked method. GES is worse than CMA-ES-ExplicitGES on test data for all but ROC AUC Binary. On validation data, however, GES is better than CMA-ES-ExplicitGES in all cases except for ROC AUC multi-class, where it is tied. Now, GES is more affected by overfitting than CMA-ES with normalization. No Free Lunch CMA-ES-ExplicitGES for balanced accuracy ranks worse than CMA-ES but better than GES. In contrast, CMA-ES-ExplicitGES ranks better than CMA-ES for ROC AUC. A decrease in performance for balanced accuracy was to be expected, as the normalization method constrained the solutions of CMA-ES to be sparse and pseudo-discrete to combat overfitting, but CMA-ES did not overfit for balanced accuracy. Moreover, it indicates that satisfying these properties of GES for balanced accuracy is suboptimal. Hence, our results also indicate the need to select the best method per task and metric instead of always using the same method, in line with the no free lunch theorem. Likewise, the drastic differences in the performance of the methods between metrics suggest that the optimization landscapes, and the impact of overfitting on them, differ drastically. § CONCLUSION Greedy ensemble selection (GES) <cit.> is often used for post hoc ensembling in AutoML, likely as a result of Auto-Sklearn 1's <cit.> reported insight that GES is superior to potential alternatives, like gradient-free numerical optimization, for post hoc ensembling. In this paper, we have shown that Auto-Sklearn's insight w.r.t. overfitting depends on the metric when tested for an AutoML system with higher-quality validation data than Auto-Sklearn, e.g., AutoGluon <cit.>. Indeed, for the metric ROC AUC, GES does not overfit meaningfully, while gradient-free numerical optimization, e.g., CMA-ES <cit.>, overfits drastically. However, for balanced accuracy, CMA-ES does not overfit and outperforms GES. As a direct consequence, we were motivated to find a method that combats the overfitting of CMA-ES for ROC AUC. Therefore, we proposed a novel normalization method, inspired by GES, which successfully salvages CMA-ES for ROC AUC by making CMA-ES perform better than or similar to GES. The CPU nodes of the OMNI cluster of the University of Siegen (North Rhine-Westphalia, Germany) were used for all experiments presented in this work. § SUBMISSION CHECKLIST * For all authors… * Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? We state in the abstract and introduction that we compare GES to CMA-ES w.r.t. overfitting. Moreover, we claim to look at normalization to avoid overfitting. 
This is exactly what we do in the paper. * Did you describe the limitations of your work? In the Appendix, see <ref>. * Did you discuss any potential negative societal impacts of your work? In the Appendix, see <ref>. * Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? <https://2023.automl.cc/ethics/> We believe our paper conforms to them. * If you are including theoretical results… * Did you state the full set of assumptions of all theoretical results? We included no theoretical results; only theoretical arguments for our proposed normalization method. * Did you include complete proofs of all theoretical results? We included no theoretical results; only theoretical arguments for our proposed normalization method. * If you ran experiments… * Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a url)? See our code repository (Appendix <ref>) for all details. * Did you include the raw results of running the given instructions on the given code and data? See our code repository. * Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? See our code repository. * Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? We believe that our code quality and documentation are sufficient. * Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? See Sections <ref> and <ref>. Additionally, see our code repository. * Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? We ran all methods on the same data. * Did you run ablation studies to assess the impact of different components of your approach? We compared different normalization approaches; see Section <ref>. * Did you use the same evaluation protocol for the methods being compared? We ran all methods on the same data with the same evaluation protocol and code. * Did you compare performance over time? We compared performance for a specific point in time (after 50 iterations of GES, i.e., after m*50 function evaluations of L). Performance over time was out of scope for our experiments. * Did you perform multiple runs of your experiments and report random seeds? Yes, we used 10-fold cross-validation for all our runs. The used random seeds can be found in our code. * Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? We took the average over the 10 folds as a score, following previous work, and have not reported variance across folds. * Did you use tabular or surrogate benchmarks for in-depth evaluations? Such benchmarks were not available for our use case. * Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? See Section <ref>. * Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a nas approach; and also hyperparameters of your own method)? 
We did not tune hyperparameters. We used a default application of CMA-ES and introduced no meaningful new hyperparameters with our approaches that would require tuning. * If you are using existing assets (e.g., code, data, models) or curating/releasing new assets… * If your work uses existing assets, did you cite the creators? See Section <ref> and Appendix <ref>. * Did you mention the license of the assets? See Appendix <ref>. * Did you include any new assets either in the supplemental material or as a url? See our code repository. * Did you discuss whether and how consent was obtained from people whose data you're using/curating? We are only using publicly available data that was used before in benchmarks. * Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? We believe that the data we are using does not contain personally identifiable information or offensive content. * If you used crowdsourcing or conducted research with human subjects… * Did you include the full text of instructions given to participants and screenshots, if applicable? We did not use crowdsourcing or conducted research with human subjects. * Did you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable? We did not use crowdsourcing or conducted research with human subjects. * Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? We did not use crowdsourcing or conducted research with human subjects. § LIMITATIONS We note that our work is limited with respect to the following points: 1) we did not explore variations (w.r.t. hyperparameters or implementation) of CMA-ES in our work; 2) we considered overfitting with respect to mean rank change between validation and test data, but did not consider other concepts of overfitting; 3) we only looked at normalization to combat overfitting for CMA-ES and were not able to compare normalization to using constraints during optimization; 4) we only provided a high-level theoretical analysis of GES and were not able to provide more fundamental work or proofs ; and 5) we only evaluated our approach for AutoGluon, one AutoML system with its specific approach to AutoML. § BROADER IMPACT STATEMENT After careful reflection, we determine that this work presents almost no notable or new negative impacts to society or the environment that are not already present for existing state-of-the-art AutoML systems. This follows from our work being mostly domain-independent, abstract, and methodical. We only proposed to replace one component of an AutoML system such that the predictive performance improves. Nevertheless, we would like to remark that our work might prompt others to use a default application of CMA-ES instead of GES for a metric like balanced accuracy. This might have a negative impact on the environment because this would likely increase the inference time and size of the final ensemble proposed by AutoML systems. In contrast – as a trade-off – we see the positive impact that higher predictive performance with CMA-ES could better support decisions made with AutoML systems. Moreover, we believe that our work might help to understand GES, the currently most used method, better; such that its performance and behaviour becomes more explainable. 
§ USED ASSETS: ESSENTIAL PYTHON FRAMEWORKS FOR THE IMPLEMENTATION AND EXPERIMENTS The following frameworks were essential for our implementation and experiments: * AutoGluon <cit.>, Version: 0.6.2, Apache-2.0 License; We used AutoGluon to generate base models for post hoc ensembling. * pycma <cit.>, Version 3.2.2, BSD 3-Clause License; We used pycma for CMA-ES. * Assembled <cit.>, Version 0.0.4, MIT License; We used Assembled to store the base models generated with AutoGluon and to run our ensemble-related experiments. § DOIS FOR DATA AND CODE The following assets were newly created as part of our experiments: * The code for our experiments: <https://doi.org/10.6084/m9.figshare.23609226>. * The prediction data of base models collected by running AutoGluon on the classification datasets from the AutoML benchmark: <https://doi.org/10.6084/m9.figshare.23609361>. § DATA OVERVIEW See Table <ref> for an overview of the used datasets and their characteristics. Additionally, the table shows the mean number of base models and the mean number of distinct algorithms generated by AutoGluon for the dataset for each metric (mean over the 10 folds of a dataset). § OVERVIEW OF RANK CHANGE FROM VALIDATION TO TEST DATA This section provides an overview of the rank change from validation to test data for the compared methods to inspect overfitting. Table <ref> gives the overview for the comparison made in Section <ref>. Table <ref> gives the overview for the comparison made in Section <ref>. §.§ Supplements for Section <ref> §.§ Supplements for Section <ref> § COMPARISON OF NORMALIZATION METHODS See Figure <ref> for a comparison of the three proposed normalization methods following the experiments described in Section <ref>. The differences between the presented methods constitute a small ablation study of our approaches w.r.t. satisfying the properties of GES, being pseudo-discrete and sparse (specified in Section <ref>). CMA-ES and CMA-ES-Softmax are versions without either property; CMA-ES-ImplicitGES satisfies only sparseness; and CMA-ES-ExplicitGES satisfies both properties. Only the method that satisfies both properties, CMA-ES-ExplicitGES, is significantly different from CMA-ES for multi-class and always has the best mean rank. To analyze the effect of trimming base models on the size of the ensemble, we show the average ensemble size in Table <ref>. § SUPPLEMENTS FOR EXPERIMENTS FOLLOWING THE AUTOML BENCHMARK <CIT.> §.§ Statistical Test with Critical Difference Plots Following the AutoML benchmark <cit.>, we perform a statistical test using a Friedman test with a Nemenyi post hoc test (α = 0.05). We implemented the tests re-using code from Autorank <cit.>. We first calculate the mean rank of each method for each collection of datasets, i.e., the subset of datasets for binary or multi-class classification for both metrics. Then, we use the Friedman test as an omnibus test to try to reject the null hypothesis that there is no difference between the methods. Only if the Friedman test is significant and rejects the null hypothesis do we perform a Nemenyi post hoc test. The test calculates a critical difference (CD). Finally, we determine whether the difference between two methods is significant by verifying that their difference in mean rank is greater than the CD. Otherwise, the difference is not significant. We show the results of the Nemenyi post hoc test using CD plots, whereby a horizontal bar connects methods that are not significantly different. 
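For reference, a minimal sketch of this testing procedure (our own illustration; the hardcoded critical values follow the commonly used two-tailed Nemenyi table, e.g., from Demšar (2006), and are not taken from the paper):

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Critical values q_alpha of the Nemenyi test at alpha = 0.05, indexed by the
# number of compared methods k (standard two-tailed table, e.g., Demsar, 2006).
NEMENYI_Q_005 = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728, 6: 2.850, 7: 2.949, 8: 3.031}

def friedman_nemenyi(scores):
    """scores: array of shape (n_datasets, k_methods); higher scores are better."""
    scores = np.asarray(scores, dtype=float)
    n_datasets, k = scores.shape
    # Rank methods per dataset (rank 1 = best); ties receive average ranks.
    ranks = np.apply_along_axis(lambda row: rankdata(-row), 1, scores)
    mean_ranks = ranks.mean(axis=0)
    _, p_value = friedmanchisquare(*[scores[:, j] for j in range(k)])
    cd = NEMENYI_Q_005[k] * np.sqrt(k * (k + 1) / (6.0 * n_datasets))
    return mean_ranks, p_value, cd

# Two methods differ significantly if the Friedman test rejects (p_value < alpha)
# and the absolute difference of their mean ranks exceeds cd.
```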
§.§ Normalized Improvement Our implementation of normalized improvement follows the AutoML benchmark <cit.>. That is, we scale the scores for a dataset such that -1 is equal to the score of the single best model, and 0 is equal to the score of the best method on the dataset. Formally, we normalize the score s_D of a method for a dataset D using: (s_D - s_D^b)/(s_D^* - s_D^b) - 1, with the score of the baseline s_D^b and the best-observed score for the dataset s_D^*. We assume that higher scores are always better. We extend this definition for the edge cases where no method is better than the baseline, i.e., s_D^* - s_D^b = 0. We suppose that this edge case never happened in the AutoML benchmark. Otherwise, their definition and implementation would have crashed. In our setting, such an edge case can happen due to overfitting, such that an ensemble method becomes worse than the single best model. If the edge case happens, we set the score of all methods worse than the baseline to -10, following a penalization-like approach (e.g., PAR10 from Algorithm Selection <cit.>). Methods for which s_D - s_D^b = 0 holds are assigned a score of -1. § OVERVIEW OF PERFORMANCE PER DATASET Here we provide the mean and standard deviation over all folds per dataset. The different combinations of metric and classification task are split into separate tables; see Tables <ref>, <ref>, <ref>, and <ref>.
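A minimal sketch of the normalized-improvement variant defined above (an illustrative reimplementation; higher raw scores are assumed to be better):

```python
import numpy as np

def normalized_improvement(scores, baseline_score, penalty=-10.0):
    """Scale the scores of all methods on one dataset so that the baseline (SingleBest)
    maps to -1 and the best method to 0; handle the edge case where no method beats
    the baseline with a PAR10-like penalty."""
    scores = np.asarray(scores, dtype=float)
    best = scores.max()
    if best - baseline_score == 0:                # edge case: nobody beats the SingleBest
        out = np.full_like(scores, penalty)       # strictly worse than the baseline -> -10
        out[scores == baseline_score] = -1.0      # as good as the baseline -> -1
        return out
    return (scores - baseline_score) / (best - baseline_score) - 1.0
```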
http://arxiv.org/abs/2307.00618v1
20230702171817
Bounce: a Reliable Bayesian Optimization Algorithm for Combinatorial and Mixed Spaces
[ "Leonard Papenmeier", "Luigi Nardi", "Matthias Poloczek" ]
cs.LG
[ "cs.LG" ]
Impactful applications such as materials discovery, hardware design, neural architecture search, or portfolio optimization require optimizing high-dimensional black-box functions with mixed and combinatorial input spaces. While Bayesian optimization has recently made significant progress in solving such problems, an in-depth analysis reveals that the current state-of-the-art methods are not reliable. Their performances degrade substantially when the unknown optima of the function do not have a certain structure. To fill the need for a reliable algorithm for combinatorial and mixed spaces, this paper proposes Bounce, which relies on a novel map of various variable types into nested embeddings of increasing dimensionality. Comprehensive experiments show that Bounce reliably achieves and often even improves upon state-of-the-art performance on a variety of high-dimensional problems. § INTRODUCTION BO has become a `go-to' method for optimizing expensive-to-evaluate black-box functions <cit.> that have numerous important applications, including hyperparameter optimization for machine learning models <cit.>, portfolio optimization in finance <cit.>, chemical engineering and materials discovery <cit.>, hardware design <cit.>, or scheduling problems <cit.>. These problems are challenging for a variety of reasons. Most importantly, they may expose hundreds of tunable parameters that allow for granular optimization of the underlying design but also lead to high-dimensional optimization tasks and the `curses of dimensionality' <cit.>. Typical examples are drug design <cit.> and combinatorial testing <cit.>. Moreover, real-world applications often have categorical or ordinal tunable parameters, in addition to the bounded real-valued parameters that BO has traditionally focused on <cit.>. Recent efforts have thus extended BO to combinatorial and mixed spaces. The method of <cit.> uses TR to accommodate high dimensionality, building upon prior work of <cit.> for continuous spaces. The method of <cit.> constructs a surrogate model based on a combinatorial graph representation of the function. Recently, <cit.> presented a method that employs a novel type of dictionary-based embedding and showed that it outperforms the prior work. However, the causes for its excellent performance are not yet well understood and require a closer examination. Moreover, the ability of methods for mixed spaces to scale to higher dimensionalities trails behind BO for continuous domains. Recently, nested embeddings <cit.> have been shown to handle a thousand input dimensions, thus outperforming vanilla TR-based approaches and raising the question of whether similar performance gains are feasible for combinatorial domains. In this work, we assess and improve upon the state-of-the-art in combinatorial BO. In particular, we make the following contributions: * We conduct an in-depth analysis of two state-of-the-art algorithms for combinatorial BO, <cit.> and <cit.>. The analysis reveals that their performances often degrade considerably when the optima of the optimization problem do not exhibit a particular structure that is common for synthetic test problems. * We propose Bounce (Bayesian Optimization Using iNcreasingly high-dimensional Combinatorial and continuous Embeddings), a novel HDBO method that effectively optimizes over combinatorial, continuous, and mixed spaces. Bounce
leverages parallel function evaluations efficiently and uses nested random embeddings to scale to high-dimensional problems. * We provide a comprehensive evaluation on a representative collection of combinatorial, continuous, and mixed-space benchmarks, which demonstrates that is on par with, or outperforms state-of-the-art methods. § BACKGROUND AND RELATED WORK Bayesian optimization. BO aims to find the global optimum x^*∈𝒳 of a black-box function f:𝒳→ℝ, where 𝒳 is the D-dimensional search space or input space. Throughout this paper, we consider minimization problems, i.e., we aim to find x^*∈𝒳 such that f(x^*)≤ f(x) for all x∈𝒳. The search space 𝒳 may contain variables of different types: continuous, categorical, and ordinal. We denote the number of continuous variables in 𝒳 by n_cont and the number of combinatorial variables by n_comb=n_cat+n_ord=D-n_ cont, where we denote the number of categorical variables by n_cat and the number of ordinal variables by n_ord. Combinatorial domains. Extending BO to combinatorial spaces is challenging, for example, because the acquisition function is only defined at discrete locations or the dimensionality of the space grows drastically when using one-hot encoding for categorical variables. Due to its numerous applications, combinatorial BO has received increased attention in recent years.  <cit.> handles the exponential explosion of combinations by only modeling lower-order interactions of combinatorial variables and imposing a sparse prior on the interaction terms.  <cit.> models each variable as a graph and uses the graph-Cartesian product to represent the search space. <cit.> build up on  <cit.> by establishing a closed-form expression for the diffusion kernel and proposing kernels tailored for mixed spaces. We revisit 's performance on categorical problems in Appendix <ref>.  <cit.> combines multi-armed bandits and BO to allow for optimization in mixed spaces. It uses two separate kernels for continuous and combinatorial variables and proposes a weighted average of a product and a sum kernel to model mixed spaces. As a general method to optimize the acquisition function with gradient-based methods, <cit.> recently proposed probabilistic reparametrization. High-dimensional continuous spaces. Subspace-based methods have primarily been used for continuous spaces. <cit.> proposed for HDBO in continuous spaces using Gaussian random projection matrices. suffers from distortions and projections outside the search domains that the corrections of <cit.> address. The algorithm of <cit.> avoids the need for corrections by using the embedding <cit.>. of <cit.> builds upon , learning suitable corrections of distortions.  <cit.> is a method that operates in the full-dimensional input space 𝒳, relying on TR to focus the search on promising regions of the search space. of <cit.> combines the trust region approach of with the random subspace idea of . uses a novel family of random nested subspaces that is shown to exhibit better theoretical guarantees than the embedding. While handled a 1000D problem, it only considers continuous problems and cannot leverage parallel function evaluations. Another line of recent approaches employs Monte-Carlo Tree Search (MCTS) to reduce the complexity of the problem. <cit.> use MCTS to learn a partitioning of the continuous search space to focus the search on promising regions in the search space. 
<cit.> use a similar approach but instead of learning promising regions in the search space, they assume an axis-aligned active subspace and use MCTS to select important variables. Linear embeddings and random linear embeddings <cit.> require little or no training data to construct the embedding but assume a linear subspace. Non-linear embeddings allow to learn more complex embeddings but often require more training data. <cit.> and <cit.> use variational autoencoders (VAEs) to learn a non-linear embedding of highly-structured input spaces. <cit.> also use a VAE to learn a non-linear subspace of a combinatorial search space. By using a re-weighting scheme that puts more emphasis on promising points in the search space, they tailor the embedding toward optimization problems. Combinatorial high-dimensional domains. Combining combinatorial and high-dimensional BO allows targeting many practical applications where the black-box problem is defined over combinatorial or mixed space of high-dimensionality.  <cit.> follows in using TR to focus the search on promising regions of the search space. For combinatorial variables, TR are modeled in terms of the Hamming distance. For mixed spaces, uses interleaved search and models continuous and categorical variables with two separate TR. <cit.> use a random projection matrix to optimize combinatorial problems in a continuous embedded subspace. When evaluating a point, their approach first projects the continuous candidate point to the high-dimensional search space and then rounds to the next feasible combinatorial solution. <cit.> propose two algorithms for permutation spaces which occur in problems such as compiler optimization <cit.> and pose a special challenge due to the superexponential explosion of solutions.  <cit.> proposes a novel type of embedding based on a dictionary of reference points in the search space. The representation of a new point z is obtained by computing the Hamming distance between z and each reference point a_i in the dictionary. The reference points in the dictionary change at each iteration of the algorithm. Reference points are sampled from the search space to cover a wide range of `sequencies', i.e., the number of changes from 0 to 1 (and vice versa) in the binary vector. The authors assume that the diverse random sampling procedure leads to 's remarkable performance in combinatorial spaces with up to 60 dimensions. In Section <ref>, we show that 's performance relies on an artificial structure of the optimizer x^* and that its performance degrades considerably when this structure is violated. § THE ALGORITHM To overcome the aforementioned challenges in HDBO for real-world applications, we propose , a new algorithm for continuous, combinatorial, and mixed spaces. uses a GP <cit.> surrogate in a lower-dimensional subspace, the target space, that is realized by partitioning input variables into `bins', the so-called target dimensions. only bins variables of the same type (categorical, binary, ordinal, and continuous). When selecting new points to evaluate, sets all input variables within the same bin to a single value. It thus operates in a subspace of lower dimensionality than the input space. In particular, it maximizes the acquisition function in a low-dimensional subspace. During the optimization, refines its subspace embedding by splitting up bins into smaller bins, allowing for a more granular optimization at the expense of higher dimensionality. 
Note that by splitting up bins, asserts that observations taken in earlier subspaces are contained in the current subspace; see <cit.> for details. Thus, operates in a series of nested subspaces. It further uses a novel TR management that allows it to leverage batch parallelism efficiently, improving over the single point acquisition of  <cit.>. To model the GP <cit.> in low-dimensional subspaces, leverages ' family of nested random embeddings <cit.>. In particular, employs the sparse count-sketch embedding <cit.> in which each input dimension is assigned to exactly one target dimension. When increasing the target dimensionality, creates b new bins for every existing bin and re-distributes the input dimensions that had previously been assigned to that bin across the now b+1 bins. allocates an individual evaluation budget m_i to the current target space 𝒳_i that is proportional to the dimensionality of 𝒳_i. When the budget for the current target space is depleted, will increase the dimension of the target space until it reaches the input space of dimensionality D. Let d_0 denote the dimensionality of the first target space, i.e., the random embedding that starts with. Then has to increase the target dimension ⌈log_b+1D/d_0 ⌉ =:k-times to reach the input dimensionality D. After calculating k, re-sets the split factor b such that the distance between the predicted final target dimensionality d_k=d_0· (b+1)^k and the input dimensionality D is minimized: b=⌊log_k (D/d_0)-1⌉ where ⌊ x ⌉ denotes the closest integer to x. This ensures that the predetermined evaluation budget for each subspace will be approximately proportional to its dimensionality. This is in contrast to  <cit.> that uses a constant split factor b and adjusts the initial target dimensionality d_0. The evaluation budget m_i for the i-th subspace 𝒳_i is m_i := ⌊b· m_D· d_i/d_0· (1-(b+1)^k+1)⌉, where m_D is the budget until D is reached and b is the maximum number of bins added per split. follows  <cit.> and  <cit.> in using TR to efficiently optimize over target spaces of high dimensionality. TR allow focusing on promising regions of the search space by restricting the next points to evaluate to a region centered on the current best function value <cit.>. TR-based methods usually expand their TR if they find better points and conversely shrink it if they fail to make progress. If the TR falls below the threshold given by the base length, the methods restart with a fresh TR elsewhere.  <cit.> uses different base lengths for combinatorial and continuous variables. For combinatorial variables, the distance to the currently best function value is defined in terms of the Hamming distance, and the base length is an integer. For continuous variables, defines the base length in terms of the l_2 distance, i.e., a real number. Following , has separate base lengths L_ min^ cont and L_ min^ comb for continuous and combinatorial variables but does not fix the TR factor by which TR are increased or decreased upon successes or failures. Instead, the TR factor is adjusted dynamically so that the evaluation budget m_i for the current target space 𝒳_i is adhered to. In Section <ref>, we show that this design is crucial to enable batch parallelism. To harvest the sample efficiency of a low-dimensional target space, we would like to combine categorical variables into a single bin, even if they vary in the number of categories. This is not straightforward. 
For example, note that the popular one-hot encoding of categorical variables would give rise to multiple binary input dimensions, which would not be compatible with the above strategy of binning variables to form nested subspaces. overcomes these obstacles and allows variables of the same type to share a representation in the target space. We provide the details in Sect. <ref>. For the GP model, we use the kernel <cit.>. In particular, we model the continuous and combinatorial variables with two separate Matérn-5/2 kernels where we use ARD for the continuous variables and share one length scale for all combinatorial variables. Following <cit.>, we use a mixture of the sum and the product kernel: k(x,x')=λ k_ cmb(x_ cmb, x'_ cmb)k_ cnt(x_ cnt, x'_ cnt) + (1-λ)(k_ cmb(x_ cmb, x'_ cmb)+k_ cnt(x_ cnt, x'_ cnt)), where x_ cnt and x_ cmb are the continuous and combinatorial variables in x, respectively, and λ is between 0 and 1. The trade-off parameter λ is learned jointly with the other hyperparameters during the likelihood maximization. Algorithm <ref> gives a high-level overview of . In Appendix <ref>, we prove that converges to the global optimum under mild assumptions. We now explain the different components of in detail. §.§ The subspace embedding of mixed spaces supports mixed spaces of four types of input variables: categorical, ordinal, binary, and continuous variables. We discuss binary and categorical variables separately because we model them differently. The proposed embedding maps only variables of the same type to the same `bin', i.e., to a single target dimension of the embedding. Target dimensions are homogeneous in this regard. Note that the number of target dimensions of each type is implied by the current bin size of the embedding that may grow during the execution. The proposed embedding can handle categorical or ordinal input variables that differ in the number of discrete values they can take. Continuous variables. As common in BO, we suppose that the continuous variables take values in a bounded interval and thus are normalized to [-1, 1]. The embedding of continuous variables, i.e., input dimensions, follows  <cit.>: each input dimension D_i is associated with a random sign s_i ∈{-1, +1} and one or multiple input dimensions can be mapped to the same target dimension of the low-dimensional embedded subspace. Recall that works on the low-dimensional subspace and thus decides an assignment v_j for every target dimension d_j of the embedding. Then all input variables mapped to this particular target dimensions are set to this value v_j. Binary variables. Binary dimensions are represented by values -1 and +1. Each input dimension D_i is associated with a random sign s_i ∈{-1, +1}, and the subspace embedding may map one or more input dimensions to the same target dimension. While the embedding for binary and continuous dimensions is similar, handles binary dimensions differently when optimizing the acquisition function. Categorical variables. uses a one-hot encoding for categorical variables and combines them using the same categorical target dimension for dimensions of possibly different cardinalities. Suppose that the categorical variables v_1, …, v_ℓ with cardinalities c_1,…,c_ℓ are mapped to the same bin that is associated with the target dimension d_j of the subspace embedding. Then d_j is of categorical type and has max{c_i | 1≤ i ≤ℓ} =: c_ max distinct categories (admissible values), i.e., its cardinality is the maximum cardinality of the variables mapped to it. 
Suppose that assigns the label k ∈{1,…,c_ max} to the categorical target dimension d_j. We transform this label to a categorical assignment to each input variable v_1, …, v_ℓ, setting v_i = ⌈ k· (c_i/c_ max) ⌉. Recall that may split up bins, target dimensions, to increase the dimensionality of its subspace embedding. In such an event, every derived bin inherits the cardinality of the parent bin. This allows us to retain any observations the algorithm has taken up to this point. Analogously to the random sign for binary variables, we randomly shuffle the categories before the embedding. This reduces the risk of being biased towards a specific structure of the optimizer (see Appendix <ref>). Ordinal variables. The embedding of ordinal variables follows categorical variables. Suppose that ℓ ordinal variables v_1,…,v_ℓ are mapped to the same bin associated with the target dimension d_j of the subspace embedding. Let c_i ≥ 2 be the number of discrete values the input v_i may take. Then d_j has max{c_i | 1≤ i ≤ℓ} =: c_ max discrete values 𝒟_j := {1,2,…,c_ max}. Suppose that assigns the value k ∈𝒟_j to the target dimension d_j. We need to transform the value k to a feasible value for each of the ℓ ordinal input variables that are mapped to d_j. Thus, we set the input variable v_i := ⌈ k· (c_i/c_ max)⌉. For the sake of simplicity, we suppose here that the ordinal variable v_i has range {1,2,…,c_i}. §.§ Maximization of the acquisition function We use EI <cit.> for batches of size B=1 and qEI <cit.> for larger batches. We optimize the EI using gradient-based methods for continuous problems and local search for combinatorial problems. We interleave gradient-based optimization and local search for functions defined over a mixed space; see Appendix <ref> for details. §.§ Batch parallelism We allow to efficiently evaluate batches of points in parallel by using a scalable TR management strategy and qEI <cit.> as the acquisition function for batches of size B>1. To re-evaluate qEI on the same set of posterior samples, we fix the seed for each batch element throughout the interleaved optimization of the acquisition function. When starts with a fresh TR, we sample n_ init initial points uniformly at random to initialize the GP. The TR management strategy of differs from previous strategies <cit.> in that it uses a dynamic factor to determine the TR base length. Recall that shrinks the TR if it fails to make progress and starts a fresh TR if the TR falls below the threshold given by the base length. If one employed the strategies of  <cit.>,  <cit.>, or  <cit.> for larger batch sizes B and 's nested subspaces, then they would spend a large part of the evaluation budget in early target spaces. For example, suppose a continuous problem, the common values for the initial, minimum, and maximum TR base length, and the constant shrinkage factor of <cit.>. Then such a method has to shrink the TR base length at least seven times (i.e., evaluate f 7B-times) before it would increase the dimensionality of the target space. Thus, the method would risk depleting its budget before reaching a target space suitable for the problem. On the other hand, we will see that chooses an evaluation budget that is lower in low-dimensional target spaces. It uses only 3, 12, and 47 samples for the first three target spaces of a 1000-dimensional problem with an evaluation budget of 1000. 's strategy permits flexible TR shrinkage factors and base lengths, allowing TR base lengths to vary within the range [L_min, L_max]. 
Suppose that has evaluated j batches of B points each since it last increased the dimensionality of the target space, and let L_j denote the current TR base length. Observe that hence m_i - jB evaluations remain for 𝒳_i. Then sets the TR base length L_j+1 := λ_j^-B L_j if the evaluation of the batch gives a new best point whose objective value improves upon the incumbent by at least ε. We call this a `success'. Otherwise, observes a `failure' and sets L_j+1 := λ_j^+B L_j. The rationale of this rule is that if at iteration j, we apply this factor (m_i-jB)-times, which is the remaining number of batches in the current subspace 𝒳_i, then the last batch of the i-th target space 𝒳_i will have the minimum TR base length. If the TR is expanded upon a `success', we need to adjust λ_j not to use more than the allocated number of function evaluations in a target space. At each iteration, we therefore set the TR base length by λ_j = (L_ min/L_j)^1/(m_i-jB). Note that λ_j remains unchanged under this rule unless the TR expanded in the previous iteration. § EXPERIMENTAL EVALUATION We evaluate empirically on various benchmarks whose inputs are combinatorial, continuous, or mixed spaces. The evaluation comprises the state-of-the-art algorithms  <cit.>,  <cit.>, and  <cit.>, using code provided by the authors. We also report <cit.> as a baseline. The experimental setup. We initialize every algorithm with five initial points. The plots show the performances of all algorithms averaged over 50 repetitions except , which has 20 repetitions due to resource constraints caused by its high memory demand. The shaded regions give the standard error of the mean. We use common random seeds for all algorithms and the randomized versions of the benchmark functions. We run all methods for 200 function evaluations unless stated otherwise. The benchmarks. The evaluation uses seven established benchmarks <cit.>: 53D , 50D , 125D  <cit.>, 60D  <cit.>, 25D , 53D , and 25D  <cit.>. Due to space constraints, we moved the results for the , , and benchmarks to Appendix <ref>. For each benchmark, we report results for the originally published formulation and for a modification where we move the optimal point to a random location. The randomization procedure is fixed for each benchmark for all algorithms and repetitions. For binary problems, we flip each input variable independently with probability 0.5. For categorical problems, we randomly permute the order of the categories. We motivate this randomization in Section <ref>. §.§ 50D Low-Autocorrelation Binary Sequences () has n=50 binary dimensions. It has important applications in communications engineering and mathematics; see <cit.> for details. is a hard combinatorial problem and currently solved via exhaustive search. The goal is to find a sequence x∈{-1, +1}^n with a maximum merit factor F(x)=n^2/2E(x), where E(x)=∑_k=1^n-1C_k^2(x) and C_k(x) = ∑_i=1^n-k x_i x_i+k for k=0,…, n-1 are the autocorrelations of x <cit.>. The performance plot for this algorithm (Figure <ref>) shows that outperforms all other algorithms on the benchmark's original and randomized versions. Notably, the gain of over the runner-up increases with the number of function evaluations. §.§ Industrial Maximum Satisfiability: 125D benchmark We evaluate and the other algorithms on the 125-dimensional benchmark, a real-world MaxSAT instance with many applications in materials science <cit.>. 
Unlike the benchmark (see Appendix <ref>), is not a crafted benchmark, and its optimum has no synthetic structure <cit.>. Figure <ref> shows the total weight of the unsatisfied clauses as a function of evaluations. We cannot plot regret curves since the optimum is unknown <cit.>. We observe that finds better solutions than all other algorithms. is the only algorithm for which we observe sensitivity to the location of the optimal assignment: for the published version of the benchmark, quickly jumps to a moderately good solution but subsequently fails to make further progress. §.§ 25D Categorical Pest Control is a more complex version of the benchmark and has 25 categorical variables with five categories each <cit.>. The task is to select one out of five actions {1,2,…,5} at each of 25 stations to minimize the objective function that combines total cost and a measure of the spread of the pest. We note that the setting x=(5,5,…,5) achieves a good value of 12.57, while the best value found in our evaluation, 12.07, is attained at x=(5,5,…,5,1) and thus has a Hamming distance of one. The random seed used in our experiments is zero. Figure <ref> summarizes the performances of the algorithms. is robust to the location of the global optimum and consistently performs as well as and , and sometimes better. In particular, the performances of and depend on whether the optimum has a certain structure. We discuss this in detail in Appendix <ref>. §.§ – a 53D AutoML task In the benchmark, we optimize over a mixed space with 50 binary and 3 continuous parameters to tune an ε-SVR model <cit.>. The 50 binary parameters determine whether to include or exclude an input feature from the dataset. The 3 continuous parameters correspond to the regularization parameter C, the kernel width γ, and the ε parameter of the ε-SVR model <cit.>. Its root mean squared error on a held-out dataset gives the function value. Figure <ref> summarizes the performances of the algorithms. We observe that , , and achieve comparable solutions. performs slightly worse if the ordering of the categories is shuffled and slightly better if the optimal assignment to all binary variables is one. does not support continuous variables and thus was omitted. §.§ 's efficacy for batch acquisition We study the sample efficiency of when it selects a batch of B points in each iteration to evaluate in parallel. Figure <ref> shows the results for B=1,3,5,10, and 20, where was run for min(2000,200·B) function evaluations. We configure to reach the input dimensionality after 100 evaluations for B=1,3,5 and after 25B for B=10,20. We observe that leverages parallel function evaluations effectively: it obtains a comparable function value at a considerably smaller number of iterations, thus saving wall-clock time for applications with time-consuming function evaluations. We also studied batch acquisition for continuous problems and found that also provides significant speed-ups here. Due to space constraints, we deferred the discussion to Appendix <ref>. §.§ The sensitivity of and to the location of the optima The above empirical evaluation reveals that the performances of  <cit.> and  <cit.> are sensitive to the location of the optima. Both methods degrade on at least one benchmark when the optimum is moved to a randomly chosen point. This is particularly unexpected for categorical variables, where moving the optimum to a random location is equivalent to shuffling the labels of the categories of each variable.
Such a change of representation should not affect the performance of an algorithm. is more susceptible to the location of the optimizer than . The performance of degrades only on the categorical benchmark, whereas degrades on five out of seven benchmarks. Here 's performance drops even below 's. Looking closer, we observe that 's performance degradation is particularly large for synthetic benchmarks like and , where setting all variables to the same value is optimal. Figure <ref> summarizes the effects of moving the optimum on . Due to space constraints, we moved the details and a discussion of categorical variables to the appendix. Similarly, setting all binary variables of the benchmark to one produces a good objective value. It is not surprising, given that the all-one assignment corresponds to including all features previously selected for the benchmark because of their high importance. We prove in Appendix <ref> that adds a point at zero Hamming distance to an all-zero or all-one solution, i.e., a zero-sequency point, to its dictionary with a probability that increases with the dictionary size. <cit.> reported that 's performance 'tends to improve' with the size of the dictionary. Moreover, samples a new dictionary in each iteration, eventually increasing the chance of having such a point in its dictionary. Thus, we hypothesize that benefits from having a near-optimal solution in its dictionary. For , Figure <ref> shows that the performance on degrades substantially if the labels of the categories are shuffled. Then 's sample-efficiency becomes comparable to . § DISCUSSION BO in combinatorial spaces has many exciting and impactful applications. Its applicability to real-world problems, such as that defy a closed-form solution, makes it a valuable tool for practitioners. Our empirical evaluation reveals that state-of-the-art methods fail to provide good solutions reliably. In particular, it finds that and , which performed best in recent publications, are sensitive to the location of the optimizer. We identified design flaws in and an implementation bug in as the root causes of the performance degradations. The proposed algorithm is reliable for high-dimensional black-box optimization in combinatorial, continuous, and mixed spaces. The empirical evaluation demonstrates that reliably outperforms the state-of-the-art on a diverse set of problems. Using a novel TR management strategy, leverages parallel evaluations of the objective function to improve its performance. We anticipate headroom by tailoring the modeling of combinatorial objects, e.g., arising in the search for peptides or materials discovery <cit.>. Here it seems particularly interesting to incorporate prior belief on the importance of decision variables while maintaining the overall scalability. Moreover, extending the present work to black-box constraints <cit.>, multiple objectives, and multiple information sources <cit.> will considerably expand the use cases that it applies to. Societal impact. Bayesian optimization has recently gained wide-spread popularity for tasks in drug discovery <cit.>, chemical engineering <cit.>, materials discovery <cit.>, aerospace engineering <cit.>, robotics <cit.>, and many more. This highlights the Bayesian optimization community's progress toward providing a reliable `off-the-shelf optimizer.' However, this promise is not yet fulfilled for the newer domain of mixed-variable Bayesian optimization that allows optimization over hundreds of `tunable levers', some of which are discrete, while others are continuous.
This domain is of particular relevance for the tasks above. 's ability to incorporate more such levers in the optimization significantly impacts the above practical applications, allowing for more granular control of a chemical reaction or a processing path, to give some examples. The empirical evaluation shows that the performance of state-of-the-art methods is highly sensitive to the location of the unknown global optima and often degenerates drastically, thus putting practitioners at risk. The proposed algorithm , however, achieves robust performance over a broad collection of tasks and thus will become a `goto' optimizer for practitioners in other fields. Therefore, we will open-source the code when the paper is accepted. § CONSISTENCY OF In this section, we prove the consistency of the algorithm. The proof is based on <cit.> and <cit.>. With the following definitions * (x_k)_k=1^∞ is a sequence of points of decreasing function values; * x^* ∈ argmin_x∈𝒳 f(x) is a minimizer of f in 𝒳; and under the following assumptions: * D is finite; * f is observed without noise; * The range of f is bounded in 𝒳, i.e., ∃ C ∈ℝ_++ s.t. |f(x)|<C ∀x∈𝒳; * For at least one of the minimizers x_i^*, the (partial) assignment corresponding to the continuous variables lies in a (continuous) region with positive measure; * Once the input dimensionality D is reached, the initial points {x_i}_i=1^n_init after each TR restart are chosen * uniformly at random for continuous variables; and * such that every realization of the combinatorial variables has positive probability; then the algorithm finds a global optimum with probability 1, as the number of samples N goes to ∞. Per Assumption 3, the range of f is bounded, and only considers a function evaluation a `success' if the improvement over the current best solution exceeds a certain constant threshold. Hence, can only have a finite number of `successful' evaluations. For the sake of a contradiction, we suppose that does not obtain an optimal solution as its number of function evaluations N →∞. Thus, there must be a sequence of failures such that the TR in the current target space, i.e., the current subspace, will eventually reach its minimum base length. Recall that in such an event, increases the target dimension by splitting up the `bins', thus creating a subspace of (b+1)-times higher dimensionality. Then creates a new TR that again experiences a sequence of failures that lead to another split, and so on. This series of events repeats until the embedded subspace eventually equals the input space and thus has dimensionality D. See lines 12-16 in Algorithm <ref> in Sect. <ref>. Still supposing that does not find an optimum in the input space, there must be a sequence of failures such that the side length of the TR again falls below the set minimum base length, now forcing a restart of . Recall that at every restart, samples a fresh set of initial points uniformly at random from the input space; see line 18 in Algorithm <ref>. Therefore, with probability 1, a random sample will eventually be drawn from any subset 𝒴⊆𝒳 with positive Lebesgue measure (ν(𝒴)>0): 1-lim_k →∞ (1-μ(𝒴))^k = 1, where μ is the uniform probability measure of the sampling distribution that employs for initial data points upon restart <cit.>. Let α=inf{t: ν[x∈𝒳 | f(x)<t] > 0} denote the essential infimum of f on 𝒳 with ν being the Lebesgue measure <cit.>.
Following <cit.>, we define the optimality region, i.e., the set of points whose function value is larger by at most ϵ than the essential infimum: R_ϵ,M = {x∈𝒳 | f(x)<α+ϵ} with ϵ>0 and M<0. Because of Ass. 4, at least one optimal point lies in a region of positive measure that is continuous for the continuous variables. Therefore, we have that α=f(x^∗). Note that this is also the case if the domain of f only consists of combinatorial variables (Ass. 5). Then, R_ϵ,M={x∈𝒳 | f(x)<f(x^∗)+ϵ}. Let (x^⋆_k)_k=1^∞ denote the sequence of best points that discovers, with x^⋆_k being the best point up to iteration k. This sequence satisfies <ref> by construction. Note that x^⋆_k ∈ R_ϵ,M implies that x^⋆_k'∈ R_ϵ,M for all k'≥ k+1 <cit.> because observations are noise-free. Then, ℙ[x^⋆_k ∈ R_ϵ,M] = 1-ℙ[x^⋆_k∈𝒳∖ R_ϵ,M] ≥ 1-(1-μ(R_ϵ,M))^k, and 1 ≥ lim_k→∞ ℙ[x^⋆_k∈ R_ϵ,M] ≥ 1-lim_k→∞ (1-μ(R_ϵ,M))^k = 1, where the last equality follows from Eq. (<ref>), i.e., x^⋆_k eventually falls into the optimality region <cit.>. By letting ϵ→ 0, x^⋆_k converges to the global optimum with probability 1 as k→∞. § ADDITIONAL EXPERIMENTS We compare to the other algorithms on three additional benchmark problems: and <cit.>. Moreover, we run two additional studies to further investigate the performance of . First, we run on a set of continuous problems from <cit.> to showcase the performance and scalability of on purely continuous problems. We then present a “low-sequency” version of to showcase how such a version can outperform its competitors on the original benchmarks by introducing a bias towards low-sequency solutions. §.§ and other algorithms on additional benchmarks §.§.§ The synthetic benchmark function is a 53-dimensional function with 50 binary and 3 continuous variables. <cit.> discretized 50 continuous variables of the original Ackley function, requiring these variables to be either zero or one. This benchmark was designed such that the optimal value of 0.0 is at the origin x=(0,…,0). Here, we perturb the optimal assignment of combinatorial variables by flipping each binary variable with probability 1/2. Figure <ref> summarizes the performances of the algorithms. outperforms all other algorithms and proves to be robust to the location of the optimum point. is a distant runner-up. initially outperforms on the published benchmark version but falls behind later. §.§.§ Contamination control The benchmark models a supply chain with 25 stages <cit.>. At each stage, a binary decision is made whether to quarantine food that has not yet been contaminated. Each such intervention is costly, and the goal is to minimize the number of contaminated products and the prevention cost <cit.>. Figure <ref> shows the performances of the algorithms. , , and all produce solutions of comparable objective value. and find better solutions than initially, but after about 100 function evaluations, the solutions obtained by the three algorithms are typically on par. §.§.§ The benchmark is a 60-dimensional, weighted instance of the Maximum Satisfiability (MaxSAT) problem. MaxSAT is a notoriously hard combinatorial problem that cannot be solved in polynomial time (unless P = NP). The goal is to find a binary assignment to the variables that satisfies clauses of maximum total weight. For every i in {1,2,…,d}, this benchmark has one clause of the form x_i with a weight of 1, and 638 clauses of the form ¬x_i ∨ ¬x_j with a weight of 61 each. Following <cit.>, we normalize these weights to have zero mean and unit standard deviation.
This normalization causes the one-variable clauses to have a negative weight, i.e., the function value improves if such a clause is not satisfied, which is atypical behavior for a MaxSAT problem. Since the clauses with two variables are satisfied for x_i=x_j=0 and the clauses with one variable, which have negative weights, are never satisfied for x_i=0, the normalized benchmark version has a global optimum at x^*=(0,…,0) by construction. The problem's difficulty lies in finding an assignment of the variables such that all two-variable clauses are satisfied and as many one-variable clauses as possible are left unsatisfied, as captured by the normalized weights. Figure <ref> summarizes the performances of the algorithms. The general version that attains the global optimum for a randomly selected binary assignment is shown on the left. The special case where the global optimum is set to the all-zero assignment is shown on the right. We observe that requires the smallest number of samples to find an optimal assignment in general, followed by and . Only in the special case where the optimum is the all-zero assignment, ranks first, confirming the corresponding result in <cit.>. §.§ Parallel evaluations on continuous problems To showcase the performance and scalability of , we run it on a set of continuous problems from <cit.>. The 124-dimensional benchmark is a constrained vehicle optimization problem. We adopt the soft-constrained version from <cit.>. The 388-dimensional problem <cit.> concerns the regression performance of an SVR on the slice localization dataset. The 180-dimensional benchmark <cit.> is a sparse regression problem on a real-world dataset, and the 1000-dimensional benchmark optimizes over a synthetic dataset. The 500-dimensional and problems are versions of the 2- and 6-dimensional benchmark problems where additional dimensions with no effect on the function value were added. We set the number of function evaluations to max(2000,500B) for a batch size of B and configure such that it reaches the input dimensionality after 500 function evaluations. Figure <ref> shows the simple regret for the synthetic and problems, and the best function value obtained after a given number of batch evaluations for the remaining problems: , , , and . We observe that always benefits from more parallel function evaluations. The difference between smaller batch sizes, such as B=1 and B=3 or B=3 and B=10, is more pronounced than between larger batch sizes, like B=10 and B=20. Parallel function evaluations prove especially effective on and . Here, the optimization performance improves drastically. We conclude that a small number of parallel function evaluations already helps increase the optimization performance considerably. On the synthetic and problems, quickly converges to the global optimum. Here, we see that a larger number of parallel function evaluations also helps in converging to a better solution. §.§ Low-sequency version of We show how we can bias towards low-sequency solutions. We remove the random signs (for binary and continuous variables) and the random offsets (for categorical and ordinal variables) from the embedding. We conduct this study to show a) that can outperform on the unmodified versions of the benchmark problems if we introduce a similar bias towards low-sequency solutions and b) that the random signs are empirically effective at removing biases towards low-sequency solutions. However, we want to emphasize that the results of this section are not representative of the performance of on arbitrary real-world problems.
Nevertheless, if one knows that the problem has a low-sequency structure, then can be configured to exploit this structure and outperform . Figure <ref> shows the results of the low-sequency version of on the original benchmarks from Section <ref>. We observe that outperforms and the other algorithms on the unmodified versions of the benchmark problems. This shows that can outperform on the unmodified version of the benchmarks if we introduce a similar bias towards low-sequency solutions. Figure <ref> shows the results of the low-sequency version of on the flipped benchmarks from Section <ref>. The low-sequency version of is robust towards the randomization of the optimal point. § IMPLEMENTATION DETAILS We implement in Python using the  <cit.> and  <cit.> libraries. We employ a Γ(1.5, 0.1) prior on the lengthscales of both kernels and a Γ(1.5, 0.5) prior on the signal variance. We further use a Γ(1.1, 0.1) prior on the noise variance. Motivated by <cit.> and <cit.>, we use an initial trust region base length of 40 for the combinatorial variables and 0.8 for the continuous variables. We maintain two separate TR shrinkage and expansion parameters (γ_cmb and γ_cnt) for the combinatorial and continuous variables, respectively, such that each TR base length reaches its respective minimum of 1 and 2^-7 after a given number of function evaluations. When finds a better or worse solution, we increase or decrease both TR base lengths. We use the authors' implementations for [<https://github.com/QUVA-Lab/combo>, unspecified license, last access: 2023-05-04], [<https://github.com/aryandeshwal/bodi>, no license provided, last access: 2023-05-04], and [<https://github.com/xingchenwan/casmo>, license, last access: 2023-05-04]. We use the same settings as the authors for and . For , we use the same settings as the authors for benchmarks reported in <cit.> and set the initial trust region base length to 40 otherwise. Due to its high memory footprint, we ran on NVidia A100 80GB GPUs for 300 GPU-hours. We ran on NVidia A40 GPUs for 2,000 GPU-hours. We ran the remaining methods for 20,000 CPU-hours on one core of Intel Xeon Gold 6130 CPUs with 60GB of memory. §.§ Optimization of the acquisition function We use different strategies to optimize the acquisition function depending on the type of variables present in a problem. Continuous problems. For purely continuous problems, we follow a similar approach as  <cit.>. In particular, we use the lengthscales of the GP posterior to shape the TR. We use gradient descent to optimize the acquisition function within the TR bounds with 10 random restarts and 512 raw samples. For a batch size of 1, we use analytical EI. For larger batch sizes, we use the implementation of qEI <cit.>. Binary problems. Similar to <cit.>, we use a discrete TR centered on the current best solution. A discrete TR describes all solutions within a certain Hamming distance of the current best solution. We use a local search approach to optimize the acquisition function for all problems with a combinatorial search space of only binary variables: When starting the optimization, we first create a set of min(5000, max(2000, 200· d_i)) random solutions. The choice of the number of random solutions is based on  <cit.>. For each candidate, we first draw L_i indices uniformly at random from {1, …, d_i} without replacement, where L_i is the TR length at the i-th iteration. We then sample L_i values in {0,1} and set the candidate at the sampled indices to the sampled values.
All other values are set to the values of the current best solution. Note that this construction ensures that each candidate solution lies within the TR bounds of the current best solution. We add all neighbors (i.e., points with a Hamming distance of 1) of the current best solution to the set of candidates. This is inspired by  <cit.>. We find the 20 candidates with the highest acquisition function value and use local search to optimize the acquisition function within the TR bounds: At each local search step, we create all direct neighbors that neither coincide with the current best solution nor violate the TR bounds. We then move the current best solution to the neighbor with the highest acquisition function value. We repeat this process until the acquisition function value does not increase anymore. Finally, we return the best solution found during local search. Categorical problems. We adopt the approach for binary problems, i.e., we first create a set of random solutions of the same size as for purely binary problems and start the local search on the 20 best initial candidates. We use one-hot encoding for categorical variables. Suppose the number of categorical variables of the problem is smaller than or equal to the current TR length. In that case, we sample, for each candidate and each categorical variable, an index uniformly at random from {1,2,…, |v_i|}, where |v_i| is the number of values of the i-th categorical variable. We then set the candidate at the sampled index to 1 and all other values to 0. If the number of categorical variables of the problem is larger than the current TR length L_i, we first sample L_i categorical variables uniformly at random from [d_i] without replacement. For each initial candidate and each sampled categorical variable, we sample an index uniformly at random, for which we set the categorical variable to 1 and all other values to 0. The values for the variables that were not sampled are set to the values of the current best solution. As in the binary case, we add all neighbors of the current best solution to the set of candidates, and we sample the 20 candidates with the highest acquisition function value. We then use hill climbing to optimize the acquisition function within the TR bounds, where neighbors are created by changing the index of one categorical variable. Again, we repeat until convergence and return the best solution found during local search. Ordinal problems. The construction for ordinal problems is similar to the one for categorical problems. Suppose the number of ordinal variables of the problem is smaller than or equal to the current TR length. In that case, we sample, for each candidate and each ordinal variable, an ordinal value uniformly at random to set the ordinal variable. Otherwise, we choose as many ordinal variables as given by each candidate's current TR length and sample an ordinal value uniformly at random to set each chosen ordinal variable. We add all neighbors of the current best solution, i.e., all solutions whose distance to the current best solution is 1 in one ordinal variable, to the set of candidates. We then sample the 20 candidates with the highest acquisition function value and use local search to optimize the acquisition function within the TR bounds. In the local search, we increment or decrement the value of a single ordinal variable. Mixed problems. Mixed problems are effectively handled by treating every variable type separately.
Again, we create a set of initial random solutions where the values for the different variable types are sampled according to the abovementioned approaches. This can lead to solutions outside the TR bounds. We remove these solutions and find the 20 best candidates only across the solutions within the TR bounds. When optimizing the acquisition function, we differentiate between continuous and combinatorial variables. We optimize the continuous variables by gradient descent with the same settings as for purely continuous problems. When optimizing, we fix the values of the combinatorial variables. We use local search to optimize the acquisition function for the combinatorial variables. In this step, we fix the values for the continuous variables and only optimize the combinatorial variables. We create the neighbors by generating neighbors within a Hamming distance of 1 for each combinatorial variable type and then combining these neighbors. Again, we repeat local search until convergence. We do five interleaved steps, starting with the continuous variables and ending with the combinatorial variables. § ADDITIONAL ANALYSIS OF AND §.§ Analysis of Binary problems.  <cit.> is based on the idea of using a dictionary of reference points A=(a_1, …, a_m) to encode a candidate point z. In particular, the i-th entry of the m-dimensional embedding ϕ_A(z) is obtained by computing the Hamming distance between z and a_i. Notably, the dimensionality of the embedding m is chosen to be 128 in their experiments <cit.>, which is larger than the dimensionality of the benchmark functions themselves. The dictionary elements a_i are chosen such that they represent a wide range of sequencies, where the sequency of a binary string is defined as the number of times the string changes from 0 to 1 and vice versa. <cit.> propose two approaches to generate the dictionary elements: (i) by using binary wavelets, and (ii) by first drawing a Bernoulli parameter θ_i∼𝒰(0,1) for each i∈ [m] and then drawing a binary string a_i from the distribution ℬ(θ_i). The latter approach is their preferred method. We will now show that the probability that a point of sequency zero (i.e., a_i=0 or a_i=1) is sampled is higher than for arbitrary points. We hypothesize that benefits from its dictionary containing such a point with high probability if the optimal point is also of sequency zero (cf. Section <ref>). Since 's performance degrades when randomizing the optimal point, we further hypothesize that 's performance on problems with a zero- or low-sequency solution is not representative of problems with an arbitrary solution. <cit.> choose to have 128 dictionary elements. Given the Bernoulli parameter θ_i, the probability that the i-th dictionary point a_i is a point of sequency zero is ℙ(“zero sequency” | θ_i) = θ_i^D+(1-θ_i)^D. Then, since θ_i follows a uniform distribution (p(θ_i)=1), the overall probability of at least one zero-sequency point among the m dictionary elements is ℙ(“zero sequency”) = 1-∏_i=1^m ∫_0^1 (1-θ_i^D - (1-θ_i)^D) p(θ_i) dθ_i, where the product is the probability that none of the m elements has sequency zero. Evaluating the integral, this equals 1-∏_i=1^m [θ_i - θ_i^D+1/(D+1) + (1-θ_i)^D+1/(D+1)]_θ_i=0^θ_i=1 = 1-(1-2/(D+1))^m, i.e., the probability of at least one dictionary element being of sequency zero is 1-(1-2/(D+1))^m. The probability that 's dictionary contains a zero-sequency point increases with the number of dictionary elements m and decreases with the function dimensionality D (see Figure <ref>).
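A quick numerical check of this closed form makes the dependence on m and D concrete. The snippet below is our own sketch (not the authors' code); it simply evaluates 1-(1-2/(D+1))^m for a few dimensionalities and dictionary sizes.

```python
def p_zero_sequency(D: int, m: int) -> float:
    """Probability that at least one of m Bernoulli-sampled dictionary
    elements of length D is all-zero or all-one (sequency zero)."""
    return 1.0 - (1.0 - 2.0 / (D + 1)) ** m

for D in (25, 60, 125):
    for m in (16, 64, 128):
        print(f"D={D:3d}  m={m:3d}  P = {p_zero_sequency(D, m):.3f}")
```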
For instance, for the 60-dimensional benchmark, the probability that at least one dictionary element is of sequency zero is 1-(1-2/61)^128 ≈ 0.986 (see Figure <ref>). Note that at least one point z^* has a probability of at most 1/2^D of being drawn. The probability of the dictionary containing that z^* is less than or equal to 1-(1-1/2^D)^m, which is already less than 0.01 for D=14 and m=128. In Section <ref>, we have shown that randomizing the optimal point structure leads to performance degradation for . We hypothesize this is due to the reduced probability of the dictionary containing the optimal point after randomization. Categorical problems. We calculate the probability that contains a vector in its dictionary where all elements are the same. For categorical problems, first samples a vector θ from the τ_max-simplex Δ^τ_max for each vector a_i in the dictionary, with τ_max being the maximum number of categories across all categorical variables of a problem. We assume that all variables have the same number of categories, as is the case for the benchmarks in <cit.>. Let τ be the number of categories of the variables. For each element in a_i, draws a value from the categorical distribution with probabilities θ. While line 7 in Algorithm 5 in <cit.> might suggest that the elements in θ are shuffled for every element in a_i, we observe that θ remains fixed based on the implementation provided by the authors[See <https://github.com/aryandeshwal/bodi/blob/aa507d34a96407b647bf808375b5e162ddf10664/bodi/categorical_dictionary_kernel.py#L18>]. The random resampling of elements from θ is probably only used for benchmarks where the number of realizations differs between categorical variables. Then, for a fixed θ, the probability that all D elements in a_i for any i are equal to some fixed value t ∈{1,2,…,τ} is given by θ_t^D. The probability that, for any of the m dictionary elements, all D elements in a_i are equal to some fixed value t ∈{1,2,…,τ} is given by ℙ(“all one specific category”)= 1-∏_i=1^m ∫ (1-θ_t^D)p(θ_t) dθ_t. We note that θ follows a Dirichlet distribution with α=1 <cit.>. Then, θ_t is marginally Beta(1,τ-1)-distributed <cit.>. With that, Eq. (<ref>) becomes ℙ(“all one specific category”) = 1-∏_i=1^m 𝔼_θ_t ∼Beta(1,τ-1)[1-θ_t^D] = 1-∏_i=1^m (1-𝔼_θ_t ∼Beta(1,τ-1)[θ_t^D]). Now, by using the formula 𝔼[x^D] = ∏_r=0^D-1 (α+r)/(α+β+r) for the D-th raw moment of a Beta(α,β) distribution <cit.>, this becomes 1-(1-∏_r=0^D-1 (1+r)/(τ+r))^m = 1-(1-(1/τ)·(2/(τ+1))·…·(D/(τ+D-1)))^m = 1-(1- D!(τ-1)!/(τ+D-1)!)^m. We discussed in Section <ref> that the benchmark obtains a good solution at x=(5,…,5). One could assume that performs well on this benchmark because its dictionary will likely contain this point. However, we observe that the probability is effectively zero for τ=5, m=128, and D=25 (see Figure <ref>), which are the choices for the benchmark in <cit.>. This raises the question of (i) whether our hypothesis is wrong and (ii) what the reason for 's performance degradation on the benchmark is. We show that 's reference implementation differs from the algorithmic description in an important detail, causing to be considerably more likely to sample category five on (or the “last” category for arbitrary benchmarks) than any other category. Figure <ref> shows five histograms over the number of dictionary elements set to each category. The values on the x-axis give the number of elements in a 25-dimensional categorical vector being set to a specific category.
One would expect the histograms to have a similar shape regardless of the category. However, for category 5, we see that more elements are set to this category than for the other categories: the probability of k elements being set to category 5 is almost twice as high as the probability of k elements being set to another category for k≥ 3. In contrast, the probability that no element in the vector belongs to category 5 is virtually zero. This behavior is beneficial for the benchmark, which obtained the best value found during our experiments for x^*=(5,5,…,5,1) (see Section <ref>). While the probability that a dictionary entry has all its elements set to category 5 is very low, we assume that sufficiently many dictionary elements are sampled within a small Hamming distance of the optimizer such that 's GP can use this information to find the optimizer. The reason for the oversampling of the last category lies in a rounding issue in the sampling of dictionary elements. In particular, for a given dictionary element a_i and a corresponding vector θ with |θ|=τ, for each i∈{1,2,…,τ-1}, <cit.> set ⌊ Dθ_i⌋ elements to category i. The remaining D-∑_i=1^τ-1⌊ Dθ_i⌋ elements are then set to category τ. This causes the last category to be overrepresented in the dictionary elements. For the choices of the benchmark, D=25 and τ=5, the first four categories had a probability of ≈ 0.1805 each, whereas the last one had a probability of ≈ 0.278 over 10^8 simulations[The 95% confidence intervals for categories 1–5 are (0.1799, 0.1807), (0.1802, 0.1810), (0.1803, 0.1811), (0.1801, 0.1809), (0.2775, 0.2783). Pairwise Wilcoxon signed-rank tests between categories 1–4 and category 5 give p values of 0 (W≈ 4.7· 10^10 each).]. We assume that the higher probability of the last category is the reason for the performance difference between the modified and the unmodified version of the benchmark. §.§ on categorical problems On the categorical benchmark, we could observe a similar behavior for  <cit.> as for . In Figure <ref>, we show the histograms over the number of dictionary elements set to each category for both the modified and the unmodified version of the benchmark. We see that the first and the last categories are overrepresented on both versions of the benchmark. As discussed in Section <ref>, this benchmark attains its best value for x^*=(5,5,…,5,1). Therefore, it seems unexpected that sets so many entries to category 1 for the unmodified benchmark version. For the modified benchmark version, this is entirely unexpected as the optimizer has a random structure. Here, one would expect the histogram to be uniform. We argue that this behavior is at least partially caused by an implementation error in the construction of the adjacency matrix and the Laplacian for categorical problems[<https://github.com/QUVA-Lab/COMBO/blob/9529eabb86365ce3a2ca44fff08291a09a853ca2/COMBO/experiments/test_functions/multiple_categorical.py#L137>, last access: 2023-04-26]. This error causes categorical variables to be modeled like ordinal variables. According to <cit.>, categorical variables are modeled as a complete graph (see Figure <ref>). However, we find that the adjacency matrix for a categorical variable with five categories is constructed as [ 0 1 0 0 0; 1 0 1 0 0; 0 1 0 1 0; 0 0 1 0 1; 0 0 0 1 0 ], which is the adjacency matrix of a path graph with five vertices. We assume that the search space has boundaries due to treating categorical variables as ordinal variables.
Due to the high dimensionality of the search space, visits the boundaries of the search space more often than the interior.
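To make the difference tangible, the following minimal sketch (ours, not the COMBO code) constructs the complete-graph adjacency matrix prescribed by the algorithmic description for a five-category variable and the path-graph adjacency matrix that the implementation builds instead, together with the graph Laplacians from which the categorical kernel is derived. The path graph gives the first and last categories only a single neighbor, which is consistent with the boundary effect described above.

```python
import numpy as np

tau = 5  # number of categories of the variable

# Complete graph K_5: every pair of categories is adjacent (algorithmic description).
A_complete = np.ones((tau, tau)) - np.eye(tau)

# Path graph P_5: categories are chained 1-2-3-4-5 (what the implementation constructs).
A_path = np.diag(np.ones(tau - 1), 1) + np.diag(np.ones(tau - 1), -1)

# Graph Laplacians L = D - A.
L_complete = np.diag(A_complete.sum(axis=1)) - A_complete
L_path = np.diag(A_path.sum(axis=1)) - A_path

print(A_path.astype(int))
print("degrees, path graph:    ", A_path.sum(axis=1))      # [1. 2. 2. 2. 1.]
print("degrees, complete graph:", A_complete.sum(axis=1))  # [4. 4. 4. 4. 4.]
```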
http://arxiv.org/abs/2307.02589v1
20230705183604
The Motion of Test Bodies around Kerr Black Holes
[ "Adrien Druart" ]
gr-qc
[ "gr-qc", "hep-th" ]
The Motion of Test Bodies around Kerr Black Holes. PhD Thesis. Thesis presented by Adrien DRUART in fulfilment of the requirements of the PhD Degree in Sciences ("Docteur en Sciences"). Academic year 2022-2023. Supervisor: Geoffrey COMPÈRE (Université Libre de Bruxelles). Thesis jury: Riccardo ARGURIO (Université Libre de Bruxelles, Chair), Stéphane DETOURNAY (Université Libre de Bruxelles, Secretary), Tanja HINDERER (Utrecht University), Justin VINES (Max-Planck-Institute for Gravitational Physics). Summary of the thesis. This thesis aims to explore the properties of the motion of finite-size, compact test bodies around a Kerr black hole in the small mass-ratio approximation. The small body is modelled as a perturbation of the Kerr geometry, neglecting its gravitational back-reaction but including deviations from a purely geodesic motion by allowing it to possess a non-trivial internal structure. Such a body can be accurately described by a worldline endowed with a collection of multipole moments. Hereafter, we shall always consider the multipole expansion truncated at quadrupole order. Moreover, only the spin-induced quadrupole moment will be taken into account, thus discarding the presence of any tidal-type deformation. For astrophysically realistic objects, this approximation is consistent with expanding the equations of motion up to second order in the body's spin magnitude. The text is structured as follows. The first part is devoted to an extended review of geodesic motion in Kerr spacetime, including the Hamiltonian formulation and the classification of timelike geodesics, with a particular emphasis put on near-horizon geodesics of high-spin black holes. The second part introduces the equations of motion for extended test bodies in a generic curved spacetime, also known as the Mathisson-Papapetrou-Dixon (MPD) equations. They are derived from a generic action principle, and their physical significance and mathematical consistency are examined in detail. The third part discusses conserved quantities for the MPD equations in Kerr spacetime, restricting to the aforementioned quadrupole approximation. The conservation is required to hold perturbatively in the test body's spin magnitude, and the related conserved quantities are built through the explicit resolution of the conservation constraint equations. Finally, the covariant Hamiltonian formulation of test body motion in curved spacetime is presented, and a Hamiltonian reproducing the spin-induced quadrupole MPD equations is derived. Two applications of the Hamiltonian formalism are subsequently discussed: (i) the integrability properties of the MPD equations in Kerr and Schwarzschild spacetimes and (ii) the Hamilton-Jacobi formulation of the MPD equations in Kerr spacetime. It is shown that the constants of motion obtained in the previous part directly arise while solving the Hamilton-Jacobi equation at first order in the spin magnitude. Some expectations regarding the computation at quadratic order close the discussion. CHAPTER: ACKNOWLEDGEMENTS My warmest thanks first go to my supervisor, Geoffrey Compère.
His constant availability, the trust and the freedom he provided me all along these four years have been fundamental to the achievement of the present work. I would like to thank all the people I had the occasion to collaborate with during my PhD degree: Lorenzo Küchler, Justin Vines and Paul Ramond on the research side, Riccardo Argurio and Stéphane Detournay on the teaching side. A big thank you to the three wonderful secretaries of our group, who always managed to make things easy on the organizational level. Finally, I wish to thank with all my heart my family and my friends, whether they are from Liège, Brussels, Clerheid or elsewhere. To all of you who surrounded me during these years and without whom I would not be myself today: this thesis is also yours. Thanks also to Stéphane Detournay and Tanja Hinderer for pointing out a couple of typos in the draft. PART: Introduction This thesis is devoted to the understanding of some theoretical aspects of the motion of small objects in the neighbourhood of supermassive black holes. By small objects, we have in mind either neutron stars or stellar-mass black holes, which both share the property of not being tidally deformed too much even in a strong gravitational field, thus remaining compact at any time. Any bounded binary system composed of a “central” supermassive black hole and of such a stellar-mass companion will dissipate energy and angular momentum through the emission of Gravitational Waves (GWs), finally leading to its coalescence. Our framework for modelling this kind of system will be General Relativity (GR), and our main concern will be the understanding of how the internal structure of the small compact object (“the secondary”) will affect its motion around the much more massive black hole (“the primary”, or the “central” black hole). These deviations originate from the fact that the secondary is not a point-wise particle, but can be spinning, exhibit quadrupole or higher-order multipole moments induced by either its proper rotation or its tidal deformability… All these effects will be collectively referred to as finite size effects. The zeroth order approximation to this description (corresponding to switching off all the finite size effects) amounts to studying timelike geodesics in Kerr spacetime, which is the most generic GR solution accounting for an astrophysically realistic stationary black hole. The finite size induced corrections can then be studied as perturbations added on top of this geodesic motion. This introduction aims both to motivate the present work from the current context of GW observations and to briefly set the background context in which the forthcoming discussion will take place. This thesis is divided into four parts, which are intended to be rather independent from one another, and more specific introductions will be provided at the beginning of each of them. We encourage the reader willing to acquire an overview of this text to first read the four parts' introductions before turning to the chapters themselves. §.§.§ Observational motivations and large mass ratio binaries Since 2015 and the first detection of a binary black hole coalescence by the LIGO collaboration <cit.>, we have entered a gravitational-wave astronomy era, with potentially huge amounts of information to bring to the scientific community, both on the theoretical and astrophysical sides.
The main GW sources for the present detectors are the coalescences of systems of binary stellar-mass black holes and/or neutron stars. Such systems exhibit a mass ratio which is typically not much greater than 1:10 <cit.>. Altogether, this yields the frequency of the emitted GW signal to be centered around one hundred Hertz, thus lying exactly in the LIGO/VIRGO/KAGRA terrestrial interferometers band. However, the upcoming space-based detectors such as the Laser Interferometer Space Antenna (LISA) mission <cit.> will be sensitive to GW signals centered around the millihertz, thus allowing the detection of GWs emitted by radically different types of astrophysical sources, see Figure <ref>. One of these new in-band phenomena corresponds to the capture of a compact, stellar-mass object by a supermassive black hole. This phenomenon is known as an Extreme Mass Ratio Inspiral (or EMRI for short), provided that the mass ratio ϵ between the two bodies satisfies ϵ≜μ/M ≤ 10^-4. Here, M denotes the mass of the supermassive black hole while μ stands for the mass of the small compact object. The prospective observation of this type of events motivates the modelling of black hole binaries in the small mass ratio regime up to a high precision, since accurate parameter extraction from the LISA data would require keeping track of the orbital phase of the binary with a precision of about one radian over the whole in-band inspiral, which can last for a few hundred thousand cycles <cit.>. §.§.§ Extreme mass ratio inspirals in General Relativity This huge accuracy requirement has motivated a community-wide effort aimed at providing precise models of EMRIs within the framework of GR in the last decades. Achieving this task requires solving the general relativistic two body problem in the strong field/small mass ratio regime. As depicted in Figure <ref>, various methods can be used for tackling this problem, depending on both the separation and the mass ratio between the objects. For EMRIs, the most adapted method is to treat the secondary compact object as a perturbation moving in the curved geometry generated by the supermassive black hole, modelled as a rotating black hole within GR. This amounts to applying black hole perturbation theory over a Kerr background, which is also known as self-force theory in the literature, see <cit.> for a state of the art review. The presence of a small mass ratio parameter ϵ≪ 1 allows one to perform the analysis perturbatively, order by order in ϵ. The actual motion of the secondary within the curved spacetime generated by the primary will take the form of a forced geodesic equation, D^2 z^μ/dτ^2 = ϵ f^μ[g_αβ, z^α, T_αβ], where the forcing term distinguishes three cases: f^μ=0 (geodesic motion), f^μ=f^μ_GSF (self-force corrections), and f^μ=f^μ_spin (finite size effects). This equation shall be solved consistently with the field equations to obtain the evolution of the position of the secondary z^μ(τ) over the whole inspiral, whose duration scales as 1/ϵ (this timescale is known as the radiation reaction time). Adapted techniques like the two-timescale expansion <cit.> allow these equations to be solved perturbatively.
Actually, one can show that the orbital phase of the secondary schematically takes the form <cit.> ϕ = ϕ^(1)_avg + (ϕ^(1)_osc+ϕ^(2)_avg+ϕ^(1)_spin) + (ϕ^(2)_osc+ϕ^(3)_avg+ϕ^(2)_spin) + …, where the first term is of order 𝒪(ϵ^-1), the second group of terms is of order 𝒪(ϵ^0), and the third group is of order 𝒪(ϵ^1). The various contributions ϕ^(i)_… to the right-hand side of this equation (originating from the forcing terms f^μ_… of the forced geodesic equation) have two main origins: * Self-force corrections: they are due to the self-interaction between the mass of the secondary and its own gravitational field. This is the main effect which has been addressed in the literature, see <cit.> and references therein, and the most difficult to treat. It is of prime importance, since its presence leads to the dissipative nature of the inspiral through the emission of gravitational waves. Meeting the LISA precision requirements (Δϕ∼ 1 rad) requires both the knowledge of the first order conservative piece of the self-force (leading to the term ϕ^(1)_osc), and of the first and second order averaged, dissipative parts of the self-force (ϕ^(i)_avg). * Finite size effects corrections: they originate from the non point-like nature of the secondary. The leading piece ϕ^(1)_spin of these corrections arises at 𝒪(ϵ^0) in the orbital phase, and is due to the spin (intrinsic angular momentum) of the compact body. As shown in <cit.>, discarding this term can lead to a dephasing of a few dozen cycles over the whole inspiral. Actually, at the level of the equations of motion of the secondary, both self-force and finite size corrections arise at the same order, if one considers astrophysically realistic secondaries. As will be detailed in Part <ref> of this thesis, one can introduce a spin magnitude parameter 𝒮 defined from the body's spin dipole tensor S^μν as 𝒮^2 ≜ (1/2) S_μν S^μν. One can show that the leading order spin force term then scales as f^μ_spin ∼ 𝒮/(μ M). However, a realistic astrophysical secondary will always spin at a rate lower than that of a maximally spinning Kerr black hole[As we will see in Chapter <ref>, this maximal bound on the spin is necessary to prevent the appearance of a naked singularity in spacetime.], yielding 𝒮≤μ^2. Gathering these results allows us to write f^μ_spin ≲ ϵ. The leading forcing term describing finite size effects therefore scales at most as the leading self-force term. Moreover, from this short reasoning, one sees that 𝒮 itself can be formally used as a small expansion parameter. In this thesis, we will discard self-force effects but account for the leading and the first subleading finite-size corrections, thus considering extended test bodies. This approximation amounts to truncating the actual motion at zeroth order in the mass-ratio ϵ and at second order in the spin magnitude 𝒮. Therefore, for timescales much shorter than the radiation-reaction time 1/ϵ, it provides a valid approximation of the motion. Moreover, even if the independent study of finite size effects does not directly provide the motion over the whole inspiral, its results can in principle be used to inform the self-forced motion <cit.>. The first subleading finite size correction (quadratic in the spin) is due to the quadrupole moment of the secondary; its study deserves some attention since it is the first term of the expansion where the nature of the compact object plays a role.
Moreover, since they appear at 2PN order in the post-Newtonian expansion <cit.> (see <cit.> for the 3PN order and <cit.> for a recent status), quadratic-in-spin effects are generically relevant for gravitational waveform modeling of compact binaries. Finally, let us notice that being able to consider finite-size effects as corrections added on top of geodesic motion in Kerr spacetime (describing the central, supermassive black hole) will be of prime importance for most of the analytical computations of this thesis. This originates from the fact that the Kerr geometry exhibits a hidden symmetry, not corresponding to any spacetime isometry, responsible for the integrability of geodesic motion and for the separability of various field equations. §.§.§ How this thesis is organized The vast majority of the chapters of this thesis aim to provide a pedagogical exposition of the subject, and will be denoted with a (P) in the upcoming plan. Nevertheless, a few chapters, denoted with a (T), contain technical derivations and details which can be skipped during a first reading of this work. They are nevertheless included in order to give a feeling of the computational complexity lying behind some results provided in this work. Throughout the text, some computations were also performed or checked thanks to the software Mathematica. The related notebooks are available on simple request. Part <ref> of the thesis is concerned with the zeroth order approximation for EMRI motion, timelike geodesic motion in Kerr spacetime. Chapter <ref> (P) reviews the main features of the Kerr metric and of its timelike geodesics, while Chapter <ref> (P) describes the associated Hamiltonian formulation. Finally, Chapter <ref> (T) describes the classification of the polar geodesic motion in generic Kerr spacetime, as well as the radial motion for near-horizon geodesics of extremal Kerr black holes. In Part <ref>, we depart from the geodesic approximation and turn on spin and finite-size effects, which are studied for a generic curved background. Chapter <ref> (P) derives the equations of motion encompassing finite size effects, also known as the Mathisson-Papapetrou-Dixon (MPD) equations. Chapter <ref> (P) discusses the physical significance of these equations from another point of view, namely the gravitational skeletonization, which amounts to replacing the smooth compact body by a worldline endowed with a collection of multipoles. Chapter <ref> (P) describes the necessity of supplementing the MPD equations with additional algebraic conditions for obtaining a closed set of equations. These conditions are known as spin supplementary conditions and can be understood as making some specific choice for the worldline upon which the multipole moments of the body are defined. Finally, Chapter <ref> (P) aims at deriving the structure of the quadrupole moment in the case where it is only induced by the proper rotation of the test body, which will be the approximation used for the remainder of the thesis (thus discarding the presence of any tidal-type effects). As for any dynamical system, the understanding of the motion of finite size test bodies will be enormously facilitated if one is able to find quantities which are conserved along the motion. This is the core of the present thesis, to which Part <ref> is devoted. Chapter <ref> (P) aims at introducing a generic procedure for building conserved quantities directly from conservation constraint equations.
Conserved quantities for the MPD equations in Kerr spacetime at first and second order in the spin magnitude are respectively investigated in Chapters <ref> and <ref> (T). A readable summary of the results (P) can be found in the specific introduction to Part <ref>. The thesis ends with a rough discussion of a covariant Hamiltonian formulation of the MPD equations, valid at quadratic order in 𝒮. Chapter <ref> (P) introduces the related phase space, Poisson bracket structure and symplectic coordinates relevant for coping with the problem. Chapter <ref> (P) deals with the construction of the Hamiltonian. The two last chapters discuss two applications of the Hamiltonian formulation: Chapter <ref> (P) is devoted to the non-integrability of the MPD equations in Kerr spacetime, while Chapter <ref> reviews the solution of the associated Hamilton-Jacobi equation, and its relation with the constants of motion obtained in Part <ref>. §.§.§ Personal contributions The original contributions presented in this thesis are based upon the three following publications <cit.>: * G. Compère and A. Druart, “Near-horizon geodesics of high-spin black holes,” Phys. Rev. D 101 (2020) no.8, [erratum: Phys. Rev. D 102 (2020) no.2, 029901] [arXiv:2001.03478 [gr-qc]]: complete classification and discussion of the physical properties of both generic Kerr polar geodesic motion and near-horizon extremal Kerr radial geodesic motion. * G. Compère and A. Druart, “Complete set of quasi-conserved quantities for spinning particles around Kerr,” SciPost Phys. 12 (2022) no.1 [arXiv:2105.12454 [gr-qc]]: investigation of the conserved quantities for MPD equations at linear order in the spin magnitude, including the proof of uniqueness of Rüdiger's quadratic invariant in Kerr spacetime and discussion of the non-integrability of linearized MPD equations in Kerr spacetime. * G. Compère, A. Druart and J. Vines, “Generalized Carter constant for quadrupolar test bodies in Kerr spacetime,” [arXiv:2302.14549 [gr-qc]]: generalization of Rüdiger's deformed Carter constant to quadratic order in the spin magnitude. Moreover, Part <ref> also contains a few results that do not appear in the literature, at least to our knowledge: the covariant Hamiltonian for spin-induced quadrupole MPD equations (<ref>) and the discussion about the second order swing region solution to the associated Hamilton-Jacobi equation of Section <ref>. §.§.§ Conventions Throughout this text, we stay within the realm of General Relativity, choosing to follow the conventions of <cit.>. We will always consider a 4d Lorentzian manifold equipped with a metric g_μν. The metric signature is chosen to be (-+++). Unless otherwise stated, lowercase Greek indices run from 0 to 3 and denote spacetime indices. Lowercase Latin indices denote purely spatial indices and run from 1 to 3. Uppercase Latin indices represent tetrad indices. The Einstein summation convention is used everywhere. ∇_α denotes the covariant derivative with respect to the Levi-Civita connection, the Riemann tensor is defined such that [∇_α,∇_β]A_μ = -R^λ_μαβ A_λ, and the Ricci tensor is R_μν = R^λ_μλν. Covariant derivatives will sometimes be denoted by a semicolon, while a comma might be used for partial derivatives. PART: Geodesic Motion in Kerr Spacetime This first part will be devoted to the study of geodesic motion in Kerr spacetime.
Chapter <ref> will describe the basic features of the Kerr metric, discuss its geodesic equations and provide their formal solutions. We then turn to the discussion of geodesic motion in the near-horizon region of highly spinning black holes, which exhibits an enhanced symmetry group. Chapter <ref> then examines again the problem of geodesic motion in Kerr spacetime, tackling it from the perspective of Hamiltonian mechanics. This point of view will enable us to obtain in an easy way some fundamental results, including (i) the proof of complete integrability of Kerr geodesic motion, (ii) the derivation of the standard form of the Kerr geodesic equations through solving the associated Hamilton-Jacobi equation and (iii) the formulation of the equations of motion in terms of action-angle variables. Finally, in Chapter <ref>, we will establish an exhaustive and comprehensive classification of polar geodesic motion in generic Kerr spacetime, as well as of radial motion in the near-horizon region of high-spin Kerr black holes. Explicit solutions to the geodesic equations will also be provided. Before we start, this introduction briefly summarizes the history and the state of the art of each of these topics. §.§.§ Kerr geometry The exact solutions to the Einstein Equations describing static black holes (namely, the Schwarzschild and Reissner-Nordström metrics) were discovered very quickly after the birth of General Relativity in 1915. However, it took nearly fifty years to discover a solution representing a rotating black hole, more accurate for depicting the astrophysical black holes that surround us. This solution is the Kerr metric, due to the New Zealand mathematician R. Kerr in 1963 <cit.>. It would be too long to include here a detailed review of the discovery of this solution and of all the subsequent developments it has generated. The interested reader may fruitfully consult the personal reminiscences of R. Kerr himself <cit.>, as well as the very nice historical and scientific review written by S. Teukolsky <cit.>. The exceptional features of the Kerr solution come from the existence of uniqueness theorems <cit.> that give a fundamental importance to the Kerr metric within the realm of General Relativity. Actually, it is the most generic solution of the Einstein Equations in vacuum which is stationary and asymptotically flat. The Kerr metric is therefore the stationary state predicted by General Relativity towards which all massive stars collapsing into black holes, as well as the mergers of extremely compact objects (black holes and/or neutron stars) that can be observed in our Universe, are believed to tend, see Fig. <ref>. It therefore provides a unique possibility for testing Einstein's theory within the strong field regime. §.§.§ Geodesics in Kerr spacetime There are at least two motivations for studying geodesic motion in Kerr spacetime. First, null geodesics (together with the modeling of light sources) underpin the field of black hole imaging <cit.>, which has recently become an observational science <cit.>. Second, as already discussed in the main introduction, timelike geodesics provide the zeroth-order motion of binary systems in the perturbative small-mass-ratio expansion, which leads in the adiabatic approximation to the leading-order gravitational waveforms of extreme-mass-ratio inspirals (EMRIs) <cit.>.
As we have seen, there are two directions in which this zeroth-order geodesic motion can be refined in order to obtain a more accurate description of EMRI dynamics. The first is the inclusion of gravitational self-force effects <cit.>, which are responsible for the dissipative character of the dynamics, ultimately driving the coalescence of the system. Another refinement, appearing at the same order in the small mass-ratio expansion, is the inclusion of corrections due to the finite-size nature of the secondary. In realistic situations, the latter is not a point particle but a compact object that possesses some internal structure, which can be accurately described by means of a tower of multipoles (mass, spin, quadrupole…). In astrophysically realistic situations, the effect of this structure can be modelled by adding perturbative corrections to the geodesic dynamics. Because this subject will be the main focus of the subsequent parts of this thesis, a deep understanding of geodesic motion in Kerr spacetime is of prime importance. The study of timelike and null geodesics of the Kerr metric has a long history which is still ongoing <cit.>, and is briefly summarized in the introduction of <cit.>. Two remarkable benchmarks are the discovery of the fourth constant of motion by B. Carter in 1968 <cit.>, which allowed the separation of the geodesic equations, and the derivation of closed-form analytical solutions for bound geodesic motion by R. Fujita and W. Hikida <cit.>. In recent years, the community's efforts have mostly been directed towards topics directly useful for the modelling of extreme mass-ratio inspiral (EMRI) dynamics, such as understanding the location of the innermost stable spherical orbit (ISSO) in Kerr spacetime <cit.> or obtaining analytic expressions for plunging geodesics <cit.>. Another program that has been completed recently is the construction of a classification of all the possible forms that geodesic motion can exhibit in Kerr spacetime. This classification was initiated with that of near-horizon geodesics <cit.>. It has since provided a complete classification of polar geodesic motion <cit.> and, more recently, of generic radial motion <cit.>. §.§.§ Action-angle formulation Kerr bounded geodesic motion is well known to be tri-periodic in its radial, polar and azimuthal directions. A formulation that directly reflects this behaviour is the action-angle formalism of Hamiltonian mechanics, which was applied to Kerr spacetime for the first time in the early 2000s <cit.>. The unfamiliar reader will find an introduction to the action-angle formalism in the classical textbook of Goldstein <cit.>, together with a self-contained introduction to Hamiltonian mechanics and Hamilton-Jacobi theory. Arnold's classical text <cit.> reviews the fundamentals of the symplectic formulation of Hamiltonian mechanics and provides the proof of the central result known as the Liouville-Arnold theorem. However, as will be reviewed in Chapter <ref>, the non-compactness of the level sets of Kerr geodesic motion prevents us from using these results directly. The generalized theorem allowing one to set up the action-angle formalism for non-compact level sets was developed in <cit.>. A brief description of this result in the context of EMRI evolution (on which the present review is heavily based) can be found in E. Flanagan and T. Hinderer's classical paper <cit.>. The action-angle description of Kerr geodesics was initiated by W.
Schmidt who derived explicit expressions for the fundamental frequencies and the action variables <cit.>. Coherent mathematical foundations were provided later by E. Flanagan and T. Hinderer <cit.>. This formulation of Kerr bounded geodesic motion lies at the heart of the two timescale analysis for describing self-forced EMRIs dynamics <cit.>. See also <cit.> for a recent review of the field. §.§.§ Near-horizon geometries The spin a of a Kerr black hole admits a maximal bound, given by its mass M (a^2≤ M^2). In the extremely high-spin limit, a throat-like geometry possessing a conformal (2,ℝ) symmetry shows up close to the event horizon of the hole <cit.>. This leads to the appearance of very peculiar physics in this region. The first studies of geodesic motion in the high-spin near-horizon region were restricted either to equatorial orbits <cit.>, to specific orbits <cit.>, or to parametrically generic geodesics <cit.> that discard relevant measure-zero sets in parameter space such as the separatrix between bound and unbound motion. Complete classifications of geodesics in the high-spin near-horizon Kerr region were then obtained in recent years, as well as the explicit solutions of the related equations of motion <cit.>. It is known since many years that any null orbit that enters or leaves the near-horizon region has a polar motion bounded by the minimal angle cos^2θ_min = 2 √(3)-3 (47^∘⪅θ⪅ 133^∘), which corresponds to the polar inclination of the velocity-of-light surface in the near-horizon and high-spin limit <cit.>. This property was recently proven to hold also for any timelike geodesic <cit.>. The polar motion is more restricted for the innermost bound spherical orbits (IBSOs): cos^2θ_min = 1/3 (55^∘⪅θ⪅ 125^∘) <cit.> and even more restricted for the innermost stable spherical orbits (ISSOs): cos^2θ_min = 3-2 √(2) (65^∘⪅θ⪅ 115^∘), as independently shown in <cit.>. Conformal symmetry in the near-horizon high spin Kerr geometry leads to potentially observable signatures if such high spin black holes are realized in nature. The behavior of null geodesics on the image of an extremely spinning Kerr black hole leads to the NHEKline <cit.> and to specific polarization whorls <cit.>. Gravitational waveforms on adiabatic inspirals lead to exponentially decaying tails at fixed oscillation frequencies with amplitudes suppressed as (1-a^2/M^2)^1/6 <cit.>, while plunging trajectories lead to impact-dependent polynomial quasinormal ringing with a power ranging from inverse time to square root of inverse time <cit.>. It was shown in <cit.> that conformal symmetry together with a discrete symmetry leads to equivalence classes of equatorial timelike geodesics with circular orbits as distinguished representatives. This allows to simplify the computation of Teukolsky waveforms by applying conformal transformations to the seed circular waveform <cit.>. In <cit.>, we showed that – in whole generality – conformal symmetry and discrete symmetries lead to equivalence classes which each admit spherical orbits as distinguished representatives. For orbits with an angular momentum lower than the ISSO one, no spherical orbit exists but a “complex spherical orbit” exists that generates the equivalence class. Such a complex spherical orbit can be used as a seed and complexified conformal transformations allow to reach all real subcritical geodesics. 
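As a small numerical aside (not part of the original text; it only assumes numpy), the quoted values of cos^2θ_min can be translated into the approximate opening angles mentioned above:

```python
import numpy as np

# Translate the quoted polar bounds cos^2(theta_min) into opening angles,
# to recover the approximate degree ranges given in the text.
for name, c2 in [("generic near-horizon", 2*np.sqrt(3.0) - 3.0),
                 ("IBSO", 1.0/3.0),
                 ("ISSO", 3.0 - 2*np.sqrt(2.0))]:
    theta_min = np.degrees(np.arccos(np.sqrt(c2)))
    print(f"{name:22s}: cos^2(theta_min) = {c2:.4f}  ->  "
          f"{theta_min:.1f} deg <~ theta <~ {180.0 - theta_min:.1f} deg")
```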
CHAPTER: KERR GEOMETRY AND ITS GEODESICS The main goal of this chapter is to set the stage for all the forthcoming developments of this thesis, by reviewing the main features of Kerr spacetime and of its geodesics and setting on the fly numerous notations and conventions that will be extensively used latter on. We will end by discussing a side topic, the near-horizon limits of (near-)extremal Kerr black holes, whose geodesic radial classification shall deserve an extended analysis in Chapter <ref>. § KERR METRIC AND ITS MAIN FEATURES One of the most widely used coordinate system for expressing the Kerr metric <cit.> are the Boyer-Lindquist coordinates (t,r,θ,φ) in which Kerr's solution reads <cit.> s^2 =- Δ(r) /Σ(r,cosθ)( t-asin^2θφ)^2+Σ(r,cosθ)( r^2/ Δ(r) + θ^2) +sin^2θ/Σ(r,cosθ)[(r^2+a^2) φ-a t]^2, Kerr metric with Δ(r) ≜ r^2-2Mr+a^2, Σ(r,cosθ)≜ r^2+a^2cos^2θ. Eq. (<ref>) actually provides us with a two-parameters family of solutions to the Einstein equations, that we will denote (M,a). Using e.g. Komar integrals, one can show that M can be interpreted as the mass of the black hole, whereas a is its angular momentum per unit of mass, a=J/M <cit.>. The interested reader can find a readable derivation of the metric Eq. (<ref>) in Carter's contribution to Les Houches proceedings <cit.>. For numerical symbolic evaluation, it is often useful to get rid of the trigonometric functions appearing in Eq. (<ref>) by using the variable[Actually, depending on the context, we will sometimes define z≜cosθ or z≜cos^2θ. The former is well-suited when dealing with the equations of motion, which are better expressed in terms of cosθ because it allows to keep track of the direction ±_θ of the motion, whereas the latter is mostly useful for establishing the classification of polar geodesic motion, since the corresponding potential is a function of cos^2θ.] z≜cosθ instead of θ. In terms of the coordinates (t,r,z,φ), the non-vanishing components of the Kerr metric read g_tt =-Δ(r) /Σ(r,z), g_rr=Σ(r,z)/Δ(r) , g_zz=Σ(r,z)/1-z^2, g_φφ = 1-z^2/Σ(r,z)[(r^2+a^2)^2-a^2 Δ(r) (1-z^2)], g_tφ=g_φ t=2aMr(z^2-1)/Σ(r,z) whereas the components of the inverse metric are g^tt =-1-2Mr(r^2+a^2)/Σ(r,z) Δ(r) , g^rr=Δ(r) /Σ(r,z), g^zz=1-z^2/Σ(r,z), g^φφ =1/Σ(r,z)(1/1-z^2-a^2/Δ(r)), g^tφ=g^φ t=-2aMr/Σ(r,z)Δ(r). We finally notice that the determinant of the metric is simply g≜ g_μν=-Σ(r,z)=-(r^2+a^2z^2). §.§ Existence and uniqueness of Kerr solution The power of the Kerr solution is mathematically stated through the Carter-Robinson theorem <cit.>: Any asymptotically flat, static and axisymmetric solution to vacuum Einstein equations which is non-singular on and outside its event horizon is a member of (M,a). The strength of this result can be further enhanced by convoking Hawking rigidity theorem: under certain assumptions on the matter fields present in spacetime (and which are verified in vacuum), any stationary solution of Einstein equations is axisymmetric <cit.>. The combination of these two results has stringent consequences if we stay within the realm of General Relativity: Kerr metric is the most generic stationary, asymptotically flat solution to vacuum Einstein Equations that is non-singular on and outside its event horizon. 
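As a quick sanity check (not part of the original text), the component expressions listed at the beginning of this section can be verified numerically. The sketch below only assumes numpy, geometric units G=c=1 and an arbitrary sample point; it rebuilds g_μν from the quadratic form of the line element in (t,r,z,φ) coordinates and compares its numerical inverse with the contravariant components quoted above.

```python
import numpy as np

# Consistency check of the Boyer-Lindquist components in (t, r, z, phi),
# with z = cos(theta).  Geometric units G = c = 1; sample values are arbitrary.
M, a = 1.0, 0.7
r, z = 4.3, 0.25

Delta = r**2 - 2.0*M*r + a**2
Sigma = r**2 + (a*z)**2
s2 = 1.0 - z**2                      # sin^2(theta)

# Build g_{mu nu} from the quadratic form of the line element:
# ds^2 = -(Delta/Sigma)(dt - a sin^2th dphi)^2 + Sigma dr^2/Delta
#        + Sigma dz^2/(1-z^2) + (sin^2th/Sigma)[(r^2+a^2) dphi - a dt]^2
u = np.array([1.0, 0.0, 0.0, -a*s2])          # dt - a sin^2th dphi
w = np.array([-a, 0.0, 0.0, r**2 + a**2])      # (r^2+a^2) dphi - a dt
g = (-Delta/Sigma)*np.outer(u, u) + (s2/Sigma)*np.outer(w, w)
g[1, 1] = Sigma/Delta
g[2, 2] = Sigma/s2

ginv = np.linalg.inv(g)

# Compare with the contravariant components quoted in the text
assert np.isclose(ginv[1, 1], Delta/Sigma)
assert np.isclose(ginv[2, 2], s2/Sigma)
assert np.isclose(ginv[0, 3], -2.0*a*M*r/(Sigma*Delta))
assert np.isclose(ginv[3, 3], (1.0/s2 - a**2/Delta)/Sigma)
assert np.isclose(ginv[0, 0], -1.0 - 2.0*M*r*(r**2 + a**2)/(Sigma*Delta))
print("inverse-metric components verified at the sample point")
```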
Since macroscopic astrophysical objects are expected to be electromagnetically neutral, the Kerr solution is widely believed to be the stationary end state reached after the gravitational collapse of sufficiently massive stars or the merger of extremely compact objects (black holes and/or neutron stars). Notice that this statement is not a direct consequence of the uniqueness theorems discussed above; a substantial amount of additional physics must be invoked to reach this conclusion <cit.>. This prominent status of the Kerr solution largely accounts for the huge, community-scale effort put into understanding its properties since its discovery, and motivates the central role that its study will play in the present thesis: in studying the motion of bodies around Kerr black holes, we are really saying something about the astrophysical systems involving black holes that surround us. Another striking feature of the Kerr solution is that it has “no hair”: the entire geometry of the spacetime (that is, the entire gravitational field of the black hole) is characterized by only two parameters, its mass and its spin. Any multipole expansion of the spacetime will lead to a collection of multipole moments whose values are entirely fixed by M and a <cit.>. In our analysis, the same will happen when we consider the test body to be a Kerr black hole itself: the form and the coupling strength of its higher-order multipole moments (quadrupole and higher) will be entirely determined by the knowledge of its linear momentum (monopole) and spin (current-type dipole), see Chapter <ref>. §.§ Generic properties of Kerr spacetime Before going further, let us have a look at some generic features of the Kerr metric. A first point of interest is to gain a bit more intuition about the meaning of the Boyer-Lindquist coordinates. In the M→ 0 limit, the Kerr metric (<ref>) reduces to flat (Minkowski) spacetime expressed in ellipsoidal coordinates <cit.>. Moreover, in the spinless limit a→ 0, the geometry reduces to the Schwarzschild solution (that is, (M,0)=(M)) and the Boyer-Lindquist coordinates reduce to the standard Schwarzschild ones: ds^2 a→ 0⟶ -f(r) dt^2 + dr^2/f(r) + r^2 dΩ^2, with f(r)≜ 1-2M/r and dΩ^2≜ dθ^2+sin^2θ dφ^2. Therefore, t can be interpreted as the proper time of an asymptotically far-away observer, r is the radial distance to the origin of the spacetime and (θ,φ) are respectively polar and azimuthal angles. As has already been stated numerous times, Kerr spacetime is a black hole spacetime. It is characterized by two event horizons located at the radial distances r_±≜ M±√(M^2-a^2), which are known as the outer (r=r_+) and the inner (r=r_-) horizons, since r_+≥ r_-. In the continuation of this thesis, we shall only be concerned with phenomena occurring in the exterior Kerr spacetime, i.e. phenomena taking place in the region of spacetime located outside of the outer event horizon, at r>r_+. Finally, in order to avoid the appearance of a naked singularity (and thus to prevent a violation of the Cosmic Censorship Conjecture), the outer horizon radius given by Eq. (<ref>) should be a real number. This forces the magnitude of the black hole spin to be bounded by its mass, a≤ M. The special case of a maximally spinning black hole (also referred to as an extremal black hole), a=M, deserves special attention, since the structure of the spacetime changes dramatically close to the event horizon of the hole, as will be extensively discussed in Section <ref>.
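The following small sketch (illustrative only, not from the original text; it assumes numpy and G=c=1) evaluates r_± for a few spin values, checking that Δ(r_±)=0 and that the two horizons merge at extremality a=M:

```python
import numpy as np

# Horizon radii r_pm = M(1 ± sqrt(1 - a^2/M^2)) for a few spins; G = c = 1, M = 1.
M = 1.0
for a in [0.0, 0.5, 0.9, 0.998, 1.0]:
    r_p = M + np.sqrt(M**2 - a**2)
    r_m = M - np.sqrt(M**2 - a**2)
    Delta = lambda r: r**2 - 2*M*r + a**2
    assert np.isclose(Delta(r_p), 0.0) and np.isclose(Delta(r_m), 0.0)
    print(f"a/M = {a:5.3f}:  r_+ = {r_p:.4f} M,  r_- = {r_m:.4f} M")
# At a = M the horizons coincide at r = M; for a > M the square root becomes
# imaginary, Delta has no real root, and the singularity would be naked.
```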
§.§ Symmetries: Killing vectors and tensors In this thesis, one of our main concerns with respect to the Kerr metric is the study of its symmetries. This is their existence that will allow to solve the equations of motion for test bodies. They are also closely related to the existence of conserved quantities along the motion and to its integrability properties, as will be extensively discussed in the continuation of this text. As always in differential geometry, isometries of the spacetime will be encoded under the form of Killing vectors. These symmetries are explicit symmetries of the metric. However, Kerr geometry also possesses other symmetries – referred to as hidden symmetries – which are generated by higher rank Killing-type tensorial objects. Even if these symmetries are not “true” symmetries of the metric, they play a fundamental role for studying the motion in Kerr spacetime since they also provide us with conserved quantities and are strongly related to the separability of the Hamilton-Jacobi equation. Table <ref> summarizes the defining equations for Killing vectors, Killing and Killing-Yano rank 2 tensor as well as their conformal counterparts. These objects are not all independent one of another, but the existence of one type of Killing object often imply the existence of others. Actually, The following statements hold: * If Y_μν is a Killing-Yano tensor, then its Hodge dual Y^*_μν≜1/2ϵ_μνρσY^ρσ is a conformal Killing-Yano tensor. * If Y_μν is a Killing-Yano tensor, then K_μν≜ YμλY_νλ is a Killing tensor. * If Y_μν is a conformal Killing-Yano tensor, then K_μν≜ YμλY_νλ is a conformal Killing tensor. * If Y_μν is a Killing-Yano tensor or a conformal Killing-Yano tensor, then K_μν≜ YμλY^*_νλ is a conformal Killing tensor. * In Ricci-flat spacetimes (R_μν=0), if Y_μν is a conformal Killing-Yano tensor, then ξ^μ=-1/3∇_λ Y^λμ is a Killing vector. The proofs of some of these assertions – as well as many other properties of spacetimes admitting Killing-Yano tensors – will be provided in Section <ref>. We now list the various Killing-type objects that appear in Kerr spacetime. §.§.§ Explicit symmetries Since Kerr spacetime is stationary, it admits a timelike Killing vector ξ≜∂_t. Its axisymmetric character leads to the existence of another Killing vector, which is simply η≜∂_φ. Notice that there exists also a discrete ℤ_2 symmetry – that we shall refer to as the ↑↓-flip – which flips the sign of the time and the azimutal coordinates: ↑↓: t→-t, φ→-φ. Physically, it corresponds to “rewind” the movie in time, thus considering a black hole of opposite spin flowing backwards in time. This symmetry will play an important role when we will consider conformal mapping between the near horizon geodesics of highly spinning Kerr black holes, see Chapter <ref>. §.§.§ Hidden symmetries In Petrov's algebraic classification <cit.>, Kerr spacetime turns out to be of type D, thus possessing two distinct, doubly degenerated, principal null directions that we will denote l^μ and n^μ. Building a null orthonormal tetrad by adding two (self-conjugated) complex null directions m^μ and m̅^μ, the metric can be written in terms of the null tetrad legs as g_μν=-2 l_(μn_ν)+2 m_(μm̅_ν). A common choice for the null tetrad is Kinnersley tetrad <cit.>, which reads, in Boyer-Lindquist coordinates l^μ =1/Δ(r^2+a^2,Δ,0,a), n^μ=1/2Σ(r^2+a^2,-Δ,0,a), m^μ =1/√(2)ℛ(iasinθ,0,√(1-z^2),i/sinθ). 
Here, the scalar quantity ℛ is defined as[Notice that ℛ is directly related to the only non-vanishing Weyl scalar of Kerr spacetime, Ψ_2, through Ψ_2=-M/ℛ̅^3.] ℛ≜ r+iacosθ. Kerr spacetime admits a Killing-Yano tensor, which is advantageously written in terms of the null directions as Y_μν=i(ℛ-ℛ̅)l_[μn_ν]-i(ℛ+ℛ̅)m_[μm̅_ν]. In Boyer-Lindquist coordinates, this becomes 1/2 Y_μν dx^μ∧ dx^ν = a cosθ dr ∧( dt - a sin^2θ dφ) + r sinθ dθ∧[ ( r^2+a^2) dφ - a dt]. Kerr Killing-Yano tensor This allows us to define a Killing tensor as K_μν =Y_μ^λ Y_νλ =-1/2(ℛ-ℛ̅)^2l_(μn_ν)+1/2(ℛ+ℛ̅)^2m_(μm̅_ν) =2Σ l_(μn_ν)+r^2 g_μν. The last equality expresses the Killing tensor in terms of manifestly real quantities only; it is obtained from the second line by using the decomposition Eq. (<ref>) of the metric and noticing that |ℛ|^2=ℛℛ̅=Σ. Kerr's Killing tensor is often presented as the fundamental quantity related to the hidden symmetry of Kerr spacetime. Let us stress that, in the light of Proposition <ref>, (conformal) Killing-Yano tensors appear to be more fundamental objects than Killing tensors, since their existence allows one to systematically build a (conformal) Killing tensor, and even a Killing vector in Ricci-flat spacetimes. However, the converse is not true: not every Killing tensor can be written as the contraction of two Killing-Yano tensors. Collinson <cit.> worked out necessary and sufficient conditions under which such a decomposition is realizable, and these conditions are satisfied in the Kerr case. Regarding test motion in Kerr spacetime, the fundamental nature of the Killing-Yano tensor only shows up when studying the motion of spinning test bodies: at the geodesic level, the knowledge of the Killing tensor is sufficient to construct a constant of motion – the Carter constant – that allows the equations of motion to be solved, as we will discuss in this chapter and in the following one. However, generalizing this Carter constant to include spin effects will explicitly require the Killing tensor to be written as the contraction of two Killing-Yano tensors, as we will see in full detail in Part <ref> of this text. Moreover, although the Kerr Killing tensor becomes reducible to combinations of Killing vectors in the Schwarzschild limit a→ 0, the Kerr Killing-Yano tensor does not, and thus remains a hidden symmetry of Schwarzschild spacetime. Finally, notice the remarkable identity K^μνξ_ν=a(aξ^μ+η^μ)≜η̃^μ, which relates the two Kerr Killing vectors through the Killing tensor. The right-hand side η̃^μ is itself a Killing vector. § GEODESIC EQUATIONS In this section, we will have a first look at the Kerr geodesic equations, showing how the symmetries discussed above allow us to write them as a set of four first-order ODEs. We will then briefly show how formal solutions to the geodesic equations can be written. These solutions will be the cornerstone of the classification of Kerr geodesics and of their explicit solutions, which we will undertake in Chapter <ref>. Hereafter, we restrict ourselves to timelike geodesics, which describe the physical motion of massive, point-like test particles. This is the relevant problem to study, since it is the “zeroth order” approximation for describing the motion of massive compact objects around a Kerr black hole. However, notice that the study of null geodesics – of prime importance for the fields of ray tracing and black hole imaging – requires only minor modifications with respect to the timelike case, see <cit.> for more details. §.§ Conserved quantities along geodesics Let us first introduce a bit of formalism.
We denote z^μ(τ) the particle's worldline and v^μ≜z^μτ its four-velocity. The parameter τ will always denote the proper time, other parametrizations being usually denoted by the symbol λ. As usually, one has v_μ v^μ=-1. The particle's linear momentum is p_μ=μ v_μ, with μ>0 the dynamical mass of the particle, which leads to p_μ p^μ=-μ^2. For any vector non-null X^μ, we denote X̂^μ≜X^μ/√(X_μ X^μ) its normalized version. In particular, one obtains directly p̂^μ≜p^μ/μ=v^μ. This relation between the normalized linear momentum and the four-velocity is of course trivial in this case, where it follows simply from the definitions given above. We can therefore equivalently use the linear momentum or the four-momentum in our computations. However, as we will see in Part <ref>, it will acquires a totally new meaning when we shall include spin effects, the latter breaking the parallelism between linear momentum and four-velocity. A function of the dynamical variables Q=Q(x^μ,p_μ) is conserved along the worldline z^μ(τ) if Qτ=0 ⇔ D Q/τ≜ v^λ∇_λ Q=0. In what follows, we will always consider “proper” constants of motion, i.e. constants normalized by the right power of μ to make them independent of the particle's mass. It amounts to consider constants of motions “per unit of test mass”, and will enable us to get rid of the μ factors appearing in the equations. There appears to be four independent quantities that are conserved along geodesic motion in Kerr spacetime: * The dynamical mass μ, since it is defined as (minus the square of) the norm of the geodesic tangent vector; * As it is well-known, for any Killing vector X^μ, the quantity p_μ X^μ is conserved along geodesic motion. In Kerr spacetime, this leads to the two conserved quantities E_0≜ -p̂_μξ^μ, L_0≜p̂_μη^μ, which respectively take the interpretation of particle's (proper) energy and projection of (proper) angular momentum onto the rotation axis of the black hole. * The existence of the Killing tensor (<ref>) directly implies that K_0≜ K_μνp̂^μp̂^ν is conserved along geodesics. For subsequent purposes, it is useful to define the shifted quantity Q_0 = K_0- (L_0 - a E_0)^2. This is nothing but the celebrated (proper) Carter constant,[Notice that there is sometime a confusion in the literature about which of Q_0 and K_0 is called “Carter constant”. With respect to the original paper of Carter <cit.>, our K_0 is Carter's 𝒦/μ^2, whereas our Q_0 is his Q/μ^2.] which was discovered by Carter <cit.> as the separation constant of the geodesic Hamilton-Jacobi equation in Kerr spacetime, see Section <ref> for a discussion of this approach. The existence of these four constants allows to reduce Kerr timelike geodesic motion to the following set of four first-order ODEs: Σ t τ̃ =a(L_0 -a E_0sin^2θ)+( r ^2+a^2)P_0( r )/ Δ( r ), Σ r τ̃ =±_r√(R( r )), Σ cosθτ̃ =±_θ√(Θ(cos^2θ)), Σ φτ̃ =-a E_0+ L_0 ^2θ+aP_0( r )/ Δ( r ), Kerr timelike geodesic equations with τ̃≜μτ and where we have defined P_0( r ) ≜ E_0( r ^2+a^2)-a L_0, R( r ) ≜ P_0^2( r )- Δ ( r )( K_0+r ^2), Θ(cos^2θ) ≜ Q_0 (1-cos^2θ)+ cos^2θ[a^2(E_0^2-1) (1-cos^2θ)- L_0^2]. Moreover, one can show that Carter constant Q_0 takes the value Q_0=p̂_θ^2+cos^2θ[a^2(1-E_0^2)+(L_0/sinθ)^2]. Carter constant Even if all these results can be directly obtained from tricky algebraic manipulations of the conserved quantities, their easiest derivation is obtained through Hamilton-Jacobi formulation, and will be discussed in Section <ref>. 
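To make the content of these equations concrete, the short sketch below (not from the original text; it assumes numpy, geometric units G=c=1 and μ=1, and arbitrary sample values for the position and velocity) builds a normalized timelike tangent vector, computes E_0, L_0 and Q_0 from their definitions, and verifies numerically that (Σ dr/dτ̃)^2 = R(r) and (Σ dcosθ/dτ̃)^2 = Θ(cos^2θ), i.e. that the first-order equations above are equivalent to the mass-shell condition.

```python
import numpy as np

# Check that, for a generic timelike tangent vector, the first-order radial and
# polar equations follow from v.v = -1.  Units G = c = 1, mu = 1; sample values.
M, a = 1.0, 0.7
r, z = 6.0, 0.3                     # z = cos(theta)
vr, vz, vphi = 0.05, 0.02, 0.02     # arbitrary spatial velocity components

Delta = r**2 - 2*M*r + a**2
Sigma = r**2 + (a*z)**2
s2 = 1.0 - z**2

# Covariant Kerr metric in (t, r, z, phi)
g = np.zeros((4, 4))
g[0, 0] = -(1.0 - 2*M*r/Sigma)
g[0, 3] = g[3, 0] = -2*a*M*r*s2/Sigma
g[1, 1] = Sigma/Delta
g[2, 2] = Sigma/s2
g[3, 3] = (s2/Sigma)*((r**2 + a**2)**2 - a**2*Delta*s2)

# Solve v.v = -1 for the future-directed v^t
A, B = g[0, 0], 2*g[0, 3]*vphi
C = g[1, 1]*vr**2 + g[2, 2]*vz**2 + g[3, 3]*vphi**2 + 1.0
vt = (-B - np.sqrt(B**2 - 4*A*C))/(2*A)
v = np.array([vt, vr, vz, vphi])
assert np.isclose(v @ g @ v, -1.0)

# Conserved quantities, as defined in the text
E0 = -(g[0] @ v)
L0 = g[3] @ v
pth2 = (Sigma*vz)**2/s2                           # \hat p_theta^2
Q0 = pth2 + z**2*(a**2*(1 - E0**2) + L0**2/s2)    # Carter constant
K0 = Q0 + (L0 - a*E0)**2
P0 = E0*(r**2 + a**2) - a*L0

# Radial and polar potentials of the first-order geodesic equations
R = P0**2 - Delta*(K0 + r**2)
Theta = Q0*s2 + z**2*(a**2*(E0**2 - 1)*s2 - L0**2)
assert np.isclose((Sigma*vr)**2, R)
assert np.isclose((Sigma*vz)**2, Theta)
print("Sigma dr/dtau = ±sqrt(R) and Sigma dz/dtau = ±sqrt(Theta) verified")
```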
§.§ Formal solutions In a given Kerr geometry (M,a), a geodesic is fully characterized by the quadruplet of parameters (μ≥ 0,E_0 ≥ 0, L_0 ∈ℝ,Q_0 ≥ -(L_0 - a E_0)^2), its initial spacetime position and two signs, s_r^i ≡±_r̂|_τ = τ_i, s_θ^i ≡±_θ|_τ = τ_i, that correspond to the signs of the radial and polar velocity at the initial time τ_i. The motion can be integrated using Mino time <cit.> defined as λ≜τ̃/Σ thanks to the property λ = r/±_ r√(R( r )) = cosθ/±_θ√(Θ(cos^2θ)). We consider a timelike geodesic path linking the initial event ( t _i, r _i,θ_i, φ _i) at Mino time λ_i and the final event ( t _f, r _f,θ_f, φ _f) at Mino time λ_f. The geodesic can be formally integrated as t(λ_f)-t(λ_i) = a( L_0 - a E_0)(λ_f-λ_i) +a^2 E_0 ( T_θ(λ_f)-T_θ(λ_i) )+T_ r(λ_f)-T_ r(λ_i) , φ (λ_f) - φ (λ_i) = ( L_0 - a E_0) (λ_f-λ_i) + L_0 ( Φ_θ (λ_f)-Φ_θ (λ_i)) + a (Φ_r(λ_f)-Φ_r(λ_i)) where λ = r/±_ r√( R( r )) = cosθ/±_θ√(Θ(cos^2θ)), T_θ(λ) ≜cos^2 θ cosθ/±_θ√(Θ (cos^2 θ)), Φ_θ(λ) ≜cosθ/±_θ√(Θ (cos^2θ))(^2 θ-1), T_r(λ) ≜(r^2 + a^2)P_0(r) r/±_r√(R( r ))Δ ( r ), Φ_r(λ) ≜P_0(r) r/±_r√(R( r ))Δ ( r ). The notation (already used in <cit.>) indicates that the signs ±_r and ±_θ are flipped each time a zero of R and Θ, respectively, is encountered. Since the signs ±_r, ±_θ are identical to the signs of r and cosθ, respectively, the integral (<ref>) is monotonic around each turning point, as it should be in order to define an increasing Mino time λ along the geodesic. Note that T_θ, Φ_θ are normalized to be vanishing for equatorial motion. The initial signs s_r^i ≡±_r̂|_λ = λ_i, s_θ^i ≡±_θ|_λ = λ_i, as well as the initial spacetime position, are fixed as a part of the specification of the orbit. If we denote by w(λ),m(λ) the number of turning points in the radial and polar motion, respectively, at Mino time λ, then as the velocity changes sign at each turning point, ±_r = s_r^i (-1)^w, ±_θ = s_θ^i (-1)^m. § NEAR-HORIZON GEODESICS OF HIGH SPIN KERR When we start considering the behaviour of geodesics close to the horizon of Kerr black holes whose spin is close to its maximal value (i.e. a^2→ M^2), a totally unexpected behaviour of Kerr geometry shows up, leading to the appearance of a throat-like geometry near the horizon. For exactly extremal black holes (a^2=M^2), the spacetime separates in three independent, geodesically complete spacetimes: the pre-existing Kerr spacetime, and two new spacetimes called Near Horizon Extremal Kerr (NHEK) and near-NHEK. This remarkable fact was first noticed by Bardeen, Press and Teukolsky in 1972 <cit.>. §.§ Kerr ISCO, and the need for near-horizon limits in the high spin regime To reach this conclusion, let us start by studying the behaviour of some specific circular (that is equatorial, θ=π/2) geodesics in the high spin limit. The deviation from extremality of the black hole can be characterized by the parameter λ≜√(1-a^2/M^2), which tends to 0 as a^2→ M^2. The condition for having a circular orbit is given by R(r)=0, et R'(r)=0, which can be solved for E_0 and L_0 as <cit.> E_0 =r^3/2-2Mr^1/2± aM^1/2/r^3/4√(r^3/2-3M r^1/2± 2aM^1/2), L_0 =± M^1/2(r^2∓ 2a M^1/2r^1/2+a^2)/r^3/4√(r^3/2-3M r^1/2± 2aM^1/2). The upper (resp. lower) signs correspond here to prograde (L_0>0) (resp. retrograde (L_0<0)) orbits. The existence of such orbits therefore require the condition r^3/2-3Mr^1/2± 2aM^1/2≥ 0 to hold. It is possible to show that this inequality is only saturated for null geodesics (since E_0 scales as 1/μ, which implies E_0→∞ for μ→ 0). 
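As a check of these expressions (illustrative only, not from the original text; numpy, G=c=1, arbitrary sample radius), one can verify numerically that the prograde values E_0(r) and L_0(r) quoted above make the equatorial radial potential and its first derivative vanish, which is precisely the circular-orbit condition R(r)=R'(r)=0:

```python
import numpy as np

# Prograde circular-orbit energy and angular momentum, checked against
# R(r) = R'(r) = 0 on the equator (Q0 = 0).  G = c = 1; rc is a sample radius.
M, a = 1.0, 0.9
rc = 5.0

den = rc**0.75*np.sqrt(rc**1.5 - 3*M*np.sqrt(rc) + 2*a*np.sqrt(M))
E0 = (rc**1.5 - 2*M*np.sqrt(rc) + a*np.sqrt(M))/den
L0 = np.sqrt(M)*(rc**2 - 2*a*np.sqrt(M*rc) + a**2)/den

def R(r):
    # Equatorial radial potential: Q0 = 0, so K0 = (L0 - a E0)^2
    Delta = r**2 - 2*M*r + a**2
    P0 = E0*(r**2 + a**2) - a*L0
    K0 = (L0 - a*E0)**2
    return P0**2 - Delta*(K0 + r**2)

h = 1e-5
dR = (R(rc + h) - R(rc - h))/(2*h)     # central-difference derivative
print("R(rc)  =", R(rc))               # ~ 0 up to round-off
print("R'(rc) =", dR)                  # ~ 0 up to round-off
assert abs(R(rc)) < 1e-8 and abs(dR) < 1e-4
```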
We now review the near-extremal behaviour of some specific prograde circular geodesics. More details may be found in <cit.>. * The Innermost Stable Circular Orbit (ISCO) is characterized by the stability condition E_0r_r=r_=0. Close to extremality, the ISCO radius scales as r_=M(1+2^1/3λ^2/3)+𝒪(λ). Section <ref> will study the generalization of the ISCO to non-equatorial orbits, the Innermost Stable Spherical Orbit (ISSO). * Wilkins showed <cit.> that a necessary condition for a Kerr timelike geodesic to be radially bounded was E_0<1. A consequence of this result is that the Innermost Bounded Circular Orbit (IBCO) can be determined by the condition E_0=1. Near extremality, we obtain r_=M(1+√(2)λ)+o(λ). * A last case of interest is the light ring, which is the outermost circular orbit for which circular geodesics are null. This amounts to find the largest root of r^3/2-3Mr^1/2± 2aM^1/2=0, which gives r_γ=M(1+2/√(3)λ)+o(λ) close to extremality. Let us now compare these results to the outer horizon radius, which can be written as r_+=M(1+λ). In the Schwarzschild black hole, r_=6M, r_=4M and r_γ=3M remain all larger than the horizon radius r_+=2M. These radii are monotonically decreasing functions of the black hole spin a, and both tend to r=M for a maximally spinning black hole. This fact appears paradoxical, since the ISCO and the IBCO are timelike curves, but seem to be projected onto the light ring and the event horizon in the λ→ 0 limit, which are null submanifolds. We also notice that, close to extremality, the first subleading term in the radius expansion scales as λ^2/3 for the ISCO, whereas it scales as λ for both the IBCO, the light ring and the event horizon. We will resolve the geometry around these orbits by introducing a new system of coordinates (t̂,r̂,θ,φ̂) defined as (see <cit.> for a review) t̂≜t/2Mκλ^p, r̂≜κ/M(r-r_+)λ^-p, φ̂≜φ-t/2M, where κ>0 is a scale factor. The new coordinate system is tuned to be adimensional, comoving (the angular velocity φ̂/t̂ vanishes on the event horizon) and possesses a radial coordinate which is vanishing on the event horizon. The parameter p shall be set to some specific value in order to resolve the geometry around some specific orbit. We will make the following choices: * p=2/3 will resolve the geometry around the ISCO, and gives rise to a new geometry, called Near Horizon Extremal Kerr (NHEK); * p=1 resolves the geometry around the IBCO, the light ring and the outer event horizon. It gives rise to the so-called near-NHEK geometry. Standing on the ISCO in the extremal limit λ→ 0, the proper radial distance to all the near-NHEK and the asymptotic orbits (characterized by r>r_+) becomes infinite. Moreover, Bardeen and Horowitz showed <cit.> that NHEK and near-NHEK spacetimes turn on to be geodesically complete. Everything thus happens as if the original Kerr spacetime was splitting in three independent geometries, namely near-NHEK, NHEK and asymptotic (r>r_+) Kerr in this limit. It is then physically meaningful to study geodesic motion in (near-)NHEK spacetimes on their own, which is the subsequent task we will undertake. §.§ Near-horizon extremal Kerr (NHEK) Suitable coordinates for resolving the ISCO are defined as T = t /2κ Mλ^2/3, R =κ(r - r _+)/M λ^2/3, Φ= φ - t /2M . Plugging (<ref>) into the Kerr metric (<ref>) and expanding the result in powers of λ gives the NHEK spacetime in Poincaré coordinates s^2=2M^2Γ(θ)(-R^2 T^2+ R^2/R^2+θ^2+Λ^2(θ)(Φ+R T)^2)+𝒪(λ^2/3) where Γ(θ) ≜1+cos^2θ/2, Λ(θ) ≜2sinθ/1+cos^2θ. 
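The λ^{2/3} scaling of the ISCO radius quoted above can be checked against the closed-form prograde ISCO radius; the sketch below is illustrative only (numpy, G=c=1; it assumes the standard Bardeen–Press–Teukolsky expression in terms of the auxiliary quantities Z_1 and Z_2, which is not written in the text) and simply prints the ratio (r_ISCO/M − 1)/λ^{2/3}, which should approach 2^{1/3} ≈ 1.26 as λ→0.

```python
import numpy as np

# Near-extremal ISCO scaling r_ISCO ~ M(1 + 2^{1/3} lambda^{2/3}), checked
# against the exact prograde ISCO radius.  G = c = 1, M = 1.
M = 1.0
for lam in [1e-2, 1e-3, 1e-4]:
    chi = np.sqrt(1.0 - lam**2)            # a/M
    one_minus_chi = lam**2/(1.0 + chi)      # avoids cancellation at small lambda
    Z1 = 1.0 + np.cbrt(lam**2)*(np.cbrt(1.0 + chi) + np.cbrt(one_minus_chi))
    Z2 = np.sqrt(3.0*chi**2 + Z1**2)
    r_isco = M*(3.0 + Z2 - np.sqrt((3.0 - Z1)*(3.0 + Z1 + 2.0*Z2)))
    ratio = (r_isco/M - 1.0)/lam**(2.0/3.0)
    print(f"lambda = {lam:.0e}:  (r_ISCO/M - 1)/lambda^(2/3) = {ratio:.4f}")
print("expected limit: 2^(1/3) =", np.cbrt(2.0))
```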
In the λ→ 0 limit, NHEK metric is independent of the scale factor κ and thus invariant under the scale transformation R→κ R, T→κ^-1T. We therefore conventionally set κ=1 in all the NHEK formula. In these NHEK coordinates, the Kerr ISCO is mapped onto the circular orbit of radius R_=2^1/3. However, considering the NHEK geometry on its own, the scale symmetry (<ref>) makes all circular orbits equivalent, since their can be all mapped onto another. It is really the coordinate change Eq. (<ref>) (i.e. the way we map Kerr spacetime to NHEK one) which gives a specific meaning to this peculiar radius. The NHEK geometry admits a (2,ℝ) ×(1) symmetry generated by ∂_Φ and H_0 = T ∂_T - R ∂_R, H_+ = ∂_T, H_- = (T^2 + 1/R^2)∂_T -2 T R ∂_R - 2/R∂_Φ, with H_0 the generator of the scale symmetry Eq. (<ref>). The Killing tensor K_μν (<ref>) becomes reducible <cit.> and can be expressed as K^μν = M^2 g^μν + 𝒞^μν + (∂_Φ)^μ (∂_Φ)^ν where the (2,ℝ) Casimir is given by 𝒞^μν∂_μ∂_ν = -H_0 H_0 + 1/2 (H_+ H_- + H_- H_+). We are interested in the Kerr geodesics that exist in the near-extremal limit within the NHEK geometry at leading order in λ. The NHEK angular momentum L_0 and Carter constant Q_0 are identical to their values defined in Boyer-Lindquist coordinates. The NHEK energy E is related to the Boyer-Lindquist energy E_0 as E_0 =L_0/2M+λ^2/3/2ME. From now on, we will consider the leading high-spin limit; i.e. we will neglect all 𝒪(λ^2/3) corrections in (<ref>). §.§.§ Geodesics In the NHEK geometry, Mino time is defined as λ≜∫^τ̃τ̃'/2M^2Γ(θ(τ̃')), and the geodesic equations of motion simplify to Tλ =E/R^2+L_0/R, Rλ =±_R√(v_R(R)), cosθλ =±_θ√(v_θ(cos^2θ)), Φλ = L_0 /Λ^2-E/R- L_0 , NHEK geodesic equations with v_R(R) ≜ E^2+2E L_0 R+R^2/4(3 L_0 ^2-4(Q_0+ M^2)), v_θ(cos^2θ) ≜ Q_0 sin^2θ +cos^2θsin^2θ( L_0 ^2/4-M^2)- L_0 ^2cos^2θ. The limitation that E remains real and finite implies from (<ref>) that we are only considering orbits with energy close to the extremal value L_0 /(2M). The ISSO angular momentum at extremality is equal to L_* = 2/√(3)√(M^2 + Q_0) It will play a key role in the following. On the equatorial plane (θ=π/2⇒ Q_0=0), the definition reduces to the ℓ_0 used in Refs. <cit.>. We can also write down more simply v_R(R)= E^2+2E L_0 R-𝒞 R^2 where 𝒞 is the conserved quantity obtained from the (2,ℝ) Casimir, 𝒞≜𝒞^μν P_μ P_ν = Q - 3/4 L_0 ^2 + M^2 = 3/4 ( L_*^2 - L_0 ^2). We also have v_R(R)={[ -𝒞 (R-R_+)(R-R_-), 𝒞≠ 0 ;; 2E L_0 (R-R_0), 𝒞=0 , ]. with R_±≜E/𝒞 L_0 ±| E |/|𝒞|√( L_0 ^2+𝒞), R_0≜ -E/2 L_0 . The non-negative Carter constant K_0 is K_0 = Q_0+ L_0 ^2/4 > 0, which implies that 𝒞 > - L_0 ^2 and that R_- < R_+ with R_± both real. These equations all agree with Ref. <cit.>. Similarly, defining z≜cos^2θ, one can rewrite the polar potential as v_θ(z)= - L_0 ^2 z+(Q_0+𝒞_∘ z)(1-z)={[ (Q_0+ L_0 ^2)(z_0-z) for 𝒞_∘=0; 𝒞_∘(z_+-z)(z-z_-) for 𝒞_∘≠ 0 ]. where 𝒞_∘ is defined through the critical value of the angular momentum L_∘: 𝒞_∘≜ L_0 ^2- L ^2_∘/4, L_∘≜ 2M. The roots of the polar potential are given by z_0≜Q_0/Q_0+ L_0 ^2 z_±≜Δ_θ±sign(𝒞_∘) √(Δ_θ^2+Q_0/𝒞_∘), Δ_θ≜1/2(1-Q_0+ L_0 ^2/𝒞_∘). 
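As a minimal consistency check (sample values, numpy; not part of the original text), one can verify that the quantities R_± written above are indeed the two roots of the NHEK radial potential, and that 𝒞 computed from the Casimir combination coincides with (3/4)(L_*^2 − L_0^2):

```python
import numpy as np

# Roots of the NHEK radial potential v_R(R) = E^2 + 2 E L0 R - C R^2,
# with illustrative values for (M, Q0, L0, E).
M, Q0, L0, E = 1.0, 1.0, 1.5, 0.3

Lstar = 2.0/np.sqrt(3.0)*np.sqrt(M**2 + Q0)
C = 0.75*(Lstar**2 - L0**2)                 # SL(2,R) Casimir combination
assert np.isclose(C, Q0 - 0.75*L0**2 + M**2)

v_R = lambda R: E**2 + 2*E*L0*R - C*R**2
R_plus = E*L0/C + abs(E)/abs(C)*np.sqrt(L0**2 + C)
R_minus = E*L0/C - abs(E)/abs(C)*np.sqrt(L0**2 + C)
assert np.isclose(v_R(R_plus), 0.0) and np.isclose(v_R(R_minus), 0.0)
print("R_+ =", R_plus, " R_- =", R_minus, " both annihilate v_R")
```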
§.§.§ Formal solution to the geodesic equations Using the same reasoning as for the Kerr geometry, the formal solutions to the geodesic equations are given by λ_f -λ_i = T^(0)_R(R_f)-T^(0)_R(R_i) = λ_θ(θ_f)-λ_θ(θ_i) , T(λ_f) -T(λ_i) = E ( T^(2)_R(R(λ_f))-T^(2)_R(R(λ_i))) + L_0 ( T^(1)_R(R(λ_f))-T^(1)_R(R(λ_i))), Φ(λ_f)-Φ(λ_i) =-3/4 L_0 (λ_f-λ_i) - E ( T^(1)_R(R(λ_f))-T^(1)_R(R(λ_i)))  + L_0 ( Φ_θ(θ(λ_f))-Φ_θ(θ(λ_i))) where the three radial integrals are T^(i)_R(λ) ≜ R/±_RR^i√(E^2+2E L_0 R-𝒞 R^2) , i = 0,1,2 and the two polar integrals are λ_θ(λ) ≜cosθ/±_θ√(v_θ(θ)), Φ_θ(λ) ≜cosθ/±_θ√(v_θ(θ))( 1/Λ^2(θ) - 1/4). The notation was explained previously. We defined Φ_θ such that it is zero for equatorial orbits (since Λ(π/2)=2). After integration, the equation (<ref>) can be inverted to give R(λ) and θ(λ). We need to solve these five integrals as a function of the geodesic parameters E, L_0 ,Q_0,s_θ^i,s_R^i,T_i,R_i,θ_i,Φ_i. §.§ Near-NHEK §.§.§ Metric and geodesic equations We now turn to the study of near-NHEK spacetime, which resolves a closer neighborhood of the black hole event horizon. We introduce the so-called near-NHEK coordinates (t̂,r̂,θ,φ̂), related to Boyer-Lindquist coordinates through the relations t̂ = t /2Mκλ, r̂=κ/M( r - r _+)λ^-1, φ̂= φ - t /2M. At leading order in λ, the metric becomes s^2 =2M^2Γ(θ)(-r̂(r̂+2κ)t̂^2+r̂^2/r̂(r̂+2κ)+θ^2+Λ^2(θ)(φ̂+(r̂+κ)t̂)^2) +𝒪(λ). In these coordinates, the outer event horizon is located at r̂=0, the light ring at r̂_γ=2/√(3)-1 and the IBCO at r̂_=√(2)-1. Though the metric explicitly depends upon κ, no physical quantity depends upon it, since it is introduced from a coordinate transformation. The corresponding geodesic equations are now given by t̂λ =e+ L_0 (r̂+κ)/r̂(r̂+2κ), φ̂λ =-e(r̂+κ)+ L_0 κ^2/r̂(r̂+2κ)+ L_0 (1/Λ^2-1), θλ =±_θ√(v_θ(cos^2 θ)), r̂λ =±_r̂√(v_r̂;κ(r̂)) Near-NHEK geodesic equations where the radial potential can be written as v_r̂;κ(r̂) ≜ (e+ L_0 κ)^2+2e L_0 r̂+3/4( L_0 ^2- L_* ^2)r̂(r̂+2κ ) and the angular potential is still as given in (<ref>). Although r̂ is a meaningful radial coordinate (because of the horizon location at r̂=0), it is convenient to introduce the shifted radial variable R≜r̂+κ to get more elegant expressions. The symbol R is also used in NHEK, but the context allows us to distinguish them. The generators of (2,ℝ)×(1) are ∂_φ̂ and H_0=1/κ∂_t̂, H_±=exp(∓κt̂)/√(R^2-κ^2)[R/κ∂_t̂±(R^2-κ^2)∂_R-κ∂_φ̂]. The (2,ℝ) Casimir 𝒞^μν∂_μ∂_ν takes the form (<ref>), where the vectors are now given by (<ref>). The radial potential can be recast as v_R;κ(R)={[ -𝒞(R-R_+)(R-R_-), 𝒞≠ 0 ;; 2e L_* (R-R_0), 𝒞=0 ]. where R_±≜e L_0 /𝒞±√((𝒞+ L_0 ^2)(e^2+κ^2𝒞))/|𝒞|, R_0≜-e^2+κ^2 L_*^2/2e L_*. The near-NHEK energy e is related to Boyer-Lindquist energy by E_0= L_0 /2M+λ/2Mκe. The near-NHEK and Boyer-Lindquist angular momenta and Carter constants Q_0 are equal. We define again the critical radius R_c = -e/ L_0 . and the future orientation of the orbit again requires (<ref>). §.§.§ Solutions to the equations of motion The NHEK and near-NHEK geodesic equations being very similar, this section will only briefly point out the similarities and the differences between the two cases. 
The formal solutions to the near-NHEK geodesic equations are λ_f-λ_i =t^(0)_R;κ(R_f)-t^(0)_R;κ(R_i)=λ_θ(θ_f)-λ_θ(θ_i), t(λ_f)-t(λ_i) =e(t^(2)_R;κ(R(λ_f))-t^(2)_R;κ(R(λ_i)))  + L_0 (t^(1)_R;κ(R(λ_f))-t^(1)_R;κ(R(λ_i))), φ̂(λ_f)-φ̂(λ_i) =-3/4 L_0 (λ_f-λ_i)-e (t^(1)_R;κ(R(λ_f))-t^(1)_R;κ(R(λ_i)))  -κ^2 L_0 ( t^(2)_R;κ(R(λ_f))- t^(2)_R;κ(R(λ_i)))  + L_0 (Φ_θ(θ(λ_f))-Φ_θ(θ(λ_i))) where the polar integrals are the same as in NHEK (see above) and the radial ones are defined by t^(0)_R;κ(λ) ≜ R/±_R√(v_R;κ(R)), t^(i)_R;κ(λ) ≜ R/±_R√(v_R;κ(R))R^2-i/R^2-κ^2, i=1,2. Notice that NHEK geodesics equations can be recovered from near-NHEK ones by taking the formal limit κ→ 0; the normalization of the radial integrals has been chosen to satisfy lim_κ→ 0 t^(i)_R;κ(R)=T^(i)(R) (i=0,1,2) as defined in (<ref>). Therefore, the formal solutions to NHEK geodesic equations can also be recovered by taking the limit κ→ 0 in (<ref>) and (<ref>). § CONCLUDING REMARKS We have now in our possession the geodesic equations for generic Kerr spacetime as well as for the associated NHEK and near-NHEK geometries, respectively given by the sets of equations (<ref>), (<ref>) and (<ref>). These equations are all parametrized only in terms of constants quantities along the geodesics (energy, angular momentum and Carter constant) and of initial data of the motion. Another remarkable fact to notice about these equations is that the radial and the polar motion are totally decoupled from each other. One can solve independently for both, before injecting the obtained solutions into the two remaining equations for obtaining the time and azimutal behaviour of the trajectory. This is also the reason why the polar and the radial part of the motion can be classified independently, as we will see in Chapter <ref>. CHAPTER: HAMILTONIAN DESCRIPTION OF GEODESIC MOTION Before turning to the classification of Kerr geodesics, we will re-derive the generic Kerr geodesic equations (<ref>) from another perspective, namely that of Hamiltonian mechanics. Even if it might seems – at first glance – redundant with the previous chapter, this analysis will turn out to be of prime importance for the continuation of this work. The goal of this chapter is actually threefold: first, it will enable us to review the tools from Hamiltonian mechanics that will be extensively used in Part <ref> of the thesis in a computationally simpler framework. Second, it will give us new perspectives on the results discussed so far. For instance, the resolvable character of geodesic equations in Kerr is a direct consequence of the fact that they form a completely (Liouville) integrable system, as can be seen directly within the framework of Hamiltonian mechanics. Finally, it will allow us to derive some practically insightful results: the action-angle formalism will be used to make explicit the tri-periodicity of Kerr (radially bounded) geodesic motion, and to compute its associated fundamental frequencies. This machinery reveals to be of prime importance in moderns developments, since it lies at the heart of perturbative schemes for describing the evolution of extreme mass ratio inspirals, such as two-timescale expansion <cit.>. Each section of this chapter first review the necessary concepts in an abstract way, before applying them to the problem of geodesic motion in Kerr spacetime. 
§ HAMILTONIAN DESCRIPTION OF GEODESIC MOTION IN KERR SPACETIME §.§.§ Basic of Hamiltonian mechanics We recall the basics of the geometrical (symplectic) formulation of Hamiltonian mechanics <cit.>. The phase space of an Hamiltonian system possessing N degrees of freedom is represented by a 2N differentiable manifold ℳ equipped with a non-degenerated two-form Ω (the symplectic form) which is closed (Ω=0). In this framework, the Hamiltonian H is defined as a smooth function on ℳ. Hereafter, we will always consider time-independent Hamiltonians (i.e., conservative systems). The evolution of the system through its phase space is given by the integral curves of the Hamiltonian vector field v^𝔦≜Ω^𝔦𝔧∇_𝔧 H. To enable specific computations to be carried out, we have introduced some coordinates (x^𝔦,p_𝔦) on the phase space. Here, 𝔦,𝔧,…∈1,…,N stand here for phase space indices. Coordinates are said to be canonical if Ω= p_𝔦∧ x^𝔦. Prescribing a symplectic structure on phase space is equivalent to providing the algebra of Poisson brackets ·· between all the (independent) coordinates. For canonical coordinates, one has x^𝔦p_𝔧=δ^𝔦_𝔧 and the Poisson brackets between two phase space functions f and g are fg≜∑_𝔦=1^Nfq^𝔦gp_𝔦-fp_𝔦gq^𝔦. Finally, the time evolution of any phase space quantity f is given in term of Poisson brackets by Ḟ≜Fτ=FH+Fτ. In particular, F is a constant of motion provided that FH=0. Constants of motion are often called first integrals of motion within the framework of Hamiltonian mechanics. §.§.§ Constrained Hamiltonian systems This is all for the basic tools we need from Hamiltonian mechanics. However, there is a subtlety that arises when building Hamiltonians for describing geodesic motion, and that our analysis shall account for: the presence of constraints. This subject is very technical, and we only bring here the elements useful for our purposes. The curious reader will consult fruitfully the book of M. Henneaux and C. Teitelboim <cit.>. The appearance of constraints is more naturally introduced by starting from the Lagrangian viewpoint: consider a theory described by the Lagrangian action S_L[q^𝔦]=∫ L(q^𝔦,q̇^𝔦,τ)τ. The classical equations of motion are the Euler-Lagrange equations, which correspond to the extrema of S_L. They can be written in the enlightening way q̈^𝔧Lq̇^𝔦q̇^𝔧=Lq^𝔦-q̇^𝔧Lq̇^𝔦q^𝔧. The starting point for switching to the Hamiltonian formulation is to define the conjugate momenta as p_𝔦≜Lq̇^𝔦. This relation can be inverted to express the velocities in terms of the positions and the momenta provided that Lq̇^𝔦q̇^𝔧≠ 0. Notice that this is the very same condition that allows to invert Eq. (<ref>) in order to express the accelerations uniquely in terms of the velocities and the positions. However, in many situations, this will not be the case and Eq. (<ref>) will be vanishing. In that case, Eq. (<ref>) cannot be inverted. In others words, this means that the momenta are not all independent, but subjected to some constraints ϕ_A(q,p)≈ 0, A=1,…,M. Here, M is an integer labelling the number of linearly independent constraints, and we use capital letters for indices running over the set of constraints. These constraints are referred to as primary constraints, since they directly arise from the non-invertibility of the momentum-velocity relation. Notice that we use the weak equality symbol ≈ to emphasize that the constraints are not identically vanishing on phase space, but that they shall be enforced to be numerically vanishing. 
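The unconstrained formalism just recalled can be illustrated on a one-degree-of-freedom toy example before we continue with the constrained case. The sketch below (sympy, harmonic oscillator; not taken from the text) implements the canonical Poisson bracket and checks that the evolution equation q̇={q,H}, ṗ={p,H} reproduces Hamilton's equations and that H Poisson-commutes with itself:

```python
import sympy as sp

# Canonical Poisson bracket and evolution equation dF/dtau = {F, H}
# for a single degree of freedom (harmonic oscillator toy model).
q, p = sp.symbols('q p', real=True)
m, w = sp.symbols('m omega', positive=True)

def pb(f, g):
    # {f, g} for one canonical pair (q, p)
    return sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)

H = p**2/(2*m) + m*w**2*q**2/2

assert pb(q, p) == 1                              # {q, p} = 1
assert sp.simplify(pb(q, H) - p/m) == 0           # dq/dtau = {q, H}
assert sp.simplify(pb(p, H) + m*w**2*q) == 0      # dp/dtau = {p, H}
assert sp.simplify(pb(H, H)) == 0                 # H is a first integral
print("canonical Poisson-bracket relations verified symbolically")
```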
We call the submanifold of phase space defined by the constraints Eq. (<ref>) the primary constraint surface. More generally, we say that two phase space functions F and G are weakly equal if they are equal on the primary constraint surface. This is denoted F≈ G. At the level of the equations of motion, the vanishing of Eq. (<ref>) directly implies that one cannot express the accelerations solely in terms of the velocities and the positions. The EOMs solutions can then potentially contain arbitrary functions of time, which is the very signature of the presence of a gauge freedom in the theory. The Hamiltonian is defined from the Lagrangian through the Legendre transformation H= q̇^𝔦p_𝔦-L. Hamilton equations as well as the set of constraints (<ref>) can be obtained by varying the Hamiltonian action S_H[q^𝔦,p_𝔦,u^A]=∫(q̇^𝔦p_𝔦-H-u^Aϕ_A)τ. Here, the functions u^A take the role of Lagrange multipliers, and ensure the Legendre transformation to be invertible. The evolution equations for any phase space function F can be conveniently written as Ḟ=FH+u^AFϕ_A. Notice that all of this is equivalent to the reduced variational principle (with fewer variables) where the constraints have already been enforced, defined from S_R[q^𝔦̂,p_𝔦̂]=∫(q̇^𝔦̂ p_𝔦̂-H)τ and subjected to the regularity conditions ϕ_A=0, δϕ_A=0, ∀ A∈ 1,…,M. All this discussion only makes sense if the constraints are preserved under time evolution. This is the case provided that ϕ̇_A=ϕ_AH+u^Bϕ_Aϕ_B!≈0. This equation will automatically be satisfied if ϕ_AH≈ 0, ϕ_Aϕ_B≈ 0, ∀ A=1,…,M. Such constraints are known as first class constraints. They are the only one that we will encounter in the continuation of this work. Since they are preserved under time evolution, these constraints can either be resolved at the level of the action or at the one of the equations of motion. Stated differently, one can replace the full phase space by the primary constraint surface, still using the same Poisson brackets than the ones defined on the original phase space. In this setup, the Lagrange multipliers u^A are totally arbitrary functions. §.§.§ Constraints and gauge transformations We will now discuss the strong link between first class primary constraints and gauge transformations, that is, transformations that do not alter the physical state of the system. Consider the evolution of some dynamical quantity F from a time τ_1 to a time τ_2=τ_1+δτ. Since the Lagrange multipliers are arbitrary, one can make two different choices u^A and ũ^A at time τ_1. We let the system evolve up to time τ_2. The difference in its evolution – corresponding to the two choices of multipliers – is <cit.> δ F=δ u^AFϕ_A, δ u^A≜(u^A-ũ^A)δτ. The ambiguity in the evolution of F is then physically irrelevant, since it is proportional to the physically irrelevant quantity δ u^A. The physical state of the system at time τ_2 is therefore not affected by the transformation Eq. (<ref>). Importing the language of field theory, we say that the first class primary constraints generate gauge transformations. §.§.§ Application to geodesic motion Let us go back to the problem of geodesic motion. For the sake of simplicity, we shall only consider timelike geodesics. They are the curves that extremize the proper distance S_L[x^μ]=-μ∫√(-ẋ_μẋ^μ)λ, ẋ^μ≜x^μλ, with λ an arbitrary time parameter and μ>0 the mass of the particle. This problem is equivalently described by a N=4 Hamiltonian system. 
The phase space is spanned by (x^μ,p_μ) (notice that the phase space indices 𝔦,… are now replaced by the spacetime indices μ,…), with p_μ=Lẋ^μ=μ/√(-ẋ_αẋ^α)ẋ_μ. The action Eq. (<ref>) is invariant under arbitrary time reparametrizations λ→σ(λ). From Eq. (<ref>), we see that it creates a dependence between the momenta, since p_μ p^μ=-μ^2. This leads to the existence of the so-called mass shell constraint ℋ≜ p_μ p^μ+μ^2≈ 0. The Hamiltonian is then obtained through the Legendre transformation H=ẋ^μ p_μ-L=x^μ p_μ+μ√(ẋ_μẋ^μ). Contracting both sides of Eq. (<ref>) with v^μ, we get the identity ẋ^μ p_μ=-μ√(ẋ_μẋ^μ)=L. We are therefore left with an identically vanishing Hamiltonian H=0, and the Hamiltonian action only contains the constraint, S_H[x^μ,p_μ,u]=∫(ẋ^μ p_μ-uℋ)λ with u a Lagrange multiplier. Moreover, it is easy to show that ℋ is indeed a first class constraint. Despite this Hamiltonian formulation is very elegant, having an identically vanishing Hamiltonian is inadequate for some of our subsequent purposes, e.g. for studying the Hamilton-Jacobi equation. We therefore tackle the analysis from another viewpoint, namely by breaking from the start the reparametrization invariance. It is a textbook level statement <cit.> that the stationary points of the action (<ref>) are equivalent to the stationary points of S̅_L[x^μ]=μ/2∫ẋ_μẋ^μτ with τ being the proper time along the geodesic. The conjugate momenta are now p_μ=μẋ^μ. Since the choice of τ as the proper time enforces the equality ẋ_μẋ^μ=-1 to hold, the constraint Eq. (<ref>) is still present. However, the new Hamiltonian is non vanishing since H̅=ẋ^μ p_μ-μ/2ẋ_μẋ^μ=1/2μp_μ p^μ. Expliciting all the dependencies, we get H̅(x^α,p_α)=1/2μg^μν(x^α)p_μ p_ν. Hamiltonian for geodesic motion Again, the constraint ℋ is first class. Evaluating Eq. (<ref>) on the primary constraint surface gives simply H̅≈-μ/2, and the on-shell value of the Hamiltonian is thus simply (minus one half of) the mass of the particle. Eq. (<ref>) will turn on to admit a simple generalization for extended test bodies, as will be described in Chapter <ref>. §.§.§ Hamilton and geodesic equations Let us convince ourselves that we have indeed worked out in a fancy way the geodesic equations, whose standard form is [2]x^μτ+Γ^μ_αβx^ατx^βτ = 0. A straightforward computation shows that Hamilton equations read x^ντ = g^νσp_σ/μ, p_ντ = -1/2μgσρ,νp_σ p_ρ. Eq. (<ref>) is clearly a direct consequence of the definition of the impulsion, but Eq. (<ref>) requires more work. Taking the τ-derivative of the 4-momentum p_ν we get p_ντ = τ(μ g_μνx^μτ) = μ g_μν,αx^μτx^ατ + μ g_μν[2]x^μτ = μ g_μν,αx^μτx^ατ + μ g_μν[-1/2g^μα(g_σα,ρ + g_ρα,σ - g_ρσ,α)x^ρτx^στ] = 1/2μg_σρ,νp^σ p^ρ = -1/2μgσρ,νp_σ p_ρ. In the third line we have made use of the geodesic equation (<ref>), while in the last step we have used p^σ p^ρ g_σρ,ν = -p_σ p_ρ gσρ,ν. This identity can be proven starting from the following p^σ p^ρ g_σρ,ν = p^σ p^ρ(g^αβg_ασg_βρ)_,ν = p_α p_β gαβ,ν + p^β p^ρ g_βρ,ν + p^σ p^α g_ασ,ν. Now, renaming the indices (α→β, σ→ρ) and using the symmetry of the metric, the last two terms on the right-hand side are equal p^σ p^ρ g_σρ,ν = p_α p_β gαβ,ν + 2p^β p^ρ g_βρ,ν. Rearranging the terms we obtain the result -p^σ p^ρ g_σρ,ν = p_α p_β gαβ,ν. Gathering all the pieces leaves us with the second Hamilton equation (<ref>) and concludes the proof. §.§.§ Coordinate-time Hamiltonian Actually, the Hamiltonian Eq. 
(<ref>) is a generally covariant Hamiltonian, since the time evolution is driven by an exterior parameter τ and not by the coordinate time t itself. This formulation presents the advantage of being more symmetric with respect to all spacetime coordinates. However, expressing the dynamics in terms of the time coordinate t presents advantages in some situations, e.g. for performing post-Newtonian expansions. In doing so, we will work directly with the reduced action Eq. (<ref>), thus reducing our phase space to be spanned only by (x^i,p_i) with i=1,2,3. Let us choose the timelike coordinate such that x^0=t, and also take this t to be the time parameter of the system. We therefore get ẋ^0=tt=1. Starting again from the reparametrization invariant Lagrangian action Eq. (<ref>) and using Eq. (<ref>), we obtain the Hamiltonian through the Legendre transformation H=ẋ^i p_i-L=ẋ^ip_i-ẋ^μ p_μ=-p_0. Notice that, since we work on the reduced phase space, we have only to sum on the spatial indices in the Legendre transformation. The last step is to rewrite p_t by solving the constraint ℋ≈ 0. We write the metric in ADM coordinates <cit.> g^μν=(g^tt g^ti g^tj g^ij) ≜(-1/α^2 -β^i/α^2 -β^j/α^2 γ^ij-β^iβ^j/α^2), g_μν=(-α^2+β_iβ^i -β_i -β_j γ_ij), where we respectively defined the lapse, the shift and the spatial metric as: α≜(-g^tt)^-1/2, β^i≜g^ti/g^tt, γ^ij≜ g^ij-g^tig^tj/g^tt. One has γ^ijγ_jk=δ^i_k and β^i≜γ^ijβ_j. Assuming that p^t∝x^0λ is future-oriented, the constraint Eq. (<ref>) yields p_t≈ -β^ip_i-α√(μ^2+γ^ijp_ip_j). We therefore end up with the Hamiltonian H≈β^ip_i+α√(μ^2+γ^ijp_ip_j). § COMPLETE (LIOUVILLE) INTEGRABILITY We now turn to the discussion of integrability. Remember that An Hamiltonian system is completely integrable (or Liouville integrable) in some open set 𝒰⊂ℳ provided that there exists N linearly independent first integrals of motion P_i=(P_1=H,P_2,…,P_N) that are linearly independent and in involution, P_iP_j=0 ∀ i,j=1,…,N at each point of 𝒰. Notice that since HH=0, the Hamiltonian is automatically a first integral, and we set P_1=H by convention. Liouville-Arnold theorem for integrable systems <cit.> states that a completely integrable system exhibits the following features: * Its phase space is foliated by level sets ℳ_𝐩≡x∈ℳ | P_i(x)=p_i | i=1,…,N which correspond to surfaces of constant P_α and are invariant under the Hamiltonian flow; * Its Hamilton equations can be integrated by quadratures, that is by a finite number of integrations and algebraic operations; * Finally, in the case of compact and connected level sets, one can switch to action-angle variables, see Section <ref> for a more precise description. In the case of geodesic motion in Kerr spacetime, the phase space is eight dimensional (that is, N=4). A set of four independent first integrals turns out to be the four conserved quantities studied in Chapter <ref> (recall that H≈-μ/2, we can thus choose arbitrarily one of these two quantities). We set P_α=(H,E_0,L_0,K_0). Even if we already know that the last three quantities Poisson commute with H (since they are constants of motion), we shall verify it as a consistency check. Using the identity (which comes from the metric compatibility condition) ∂_ν g^αβ=-2Γ^(α_νλg^β)λ, we have HE_0 =-1/2μ(∂_λ g^αβξ^μx^λp_μ+2g^αβ p_αp_βx^λ∂_λξ^μ) =1/μ∇_μξ_ν p^μ p^ν =0, since ξ^μ is a Killing vector field. We use the exact same reasoning for the two others quantities to get HL_0=-1/μ∇_μη_ν p^μ p^ν=0, HK_0=-1/μ∇_μ K_νρp^μ p^ν p^ρ=0. 
It remains to show that E_0,L_0 and K_0 are mutually Poisson commuting. One has E_0L_0 =-ξ^μ∂_μη^ν p_ν+η^μ∂_μξ^ν p_ν=0, since ∂_μξ^ν=∂_μη^ν=0 are direct consequences of the explicit expressions Eq. (<ref>) of the Killing vectors. Moreover, E_0K_0 =ξ^μ∂_μ K^αβ p_α p_β-2 p_μ∂_αξ^μ K^αβ p_β=0. The last equality comes from the fact that the second term of the RHS is vanishing for the same reason used in the previous computation, while the vanishing of the first term arises from the stationarity of Kerr background, which ensures that ξ^μ∂_μ K^αβ=∂_t K^αβ=0. The computation of the last bracket L_0K_0 is identical, except that one shall use the consequence of Kerr axial symmetry, η^μ∂_μ K^αβ=∂_φ K^αβ=0. At the end of the day, we have formally proven that Kerr geodesic equations do form a completely integrable system. Notice that it is also the case for the (near-)NHEK spacetimes. We therefore have all the properties expected from Liouville-Arnold theorem. In particular, the equations of motion have already been formally integrated in the previous chapter. Section <ref> will discuss action-angle formulation of geodesic motion. As we will see, this will require some care, since level sets of Kerr geodesic motion are unbounded in the time direction. § HAMILTON-JACOBI EQUATION The Hamilton-Jacobi formulation of Kerr geodesic motion has been famously studied by Carter in the late sixties <cit.>, and enabled him to discover the Carter constant. We reproduce this computation here, because it is the simplest way to reduce Kerr geodesic equations to the set of four first order ODEs Eqs. (<ref>). Let us first remind ourselves the Hamilton-Jacobi formulation for a time independent Hamiltonian system, H(x^i,p_i,τ)=H(x^i,p_i). In this case, H is constant along the motion, and we denote α its constant value: H(x^i,p_i)=α. For such a system, Hamilton's equations are equivalent to the Hamilton-Jacobi equation H(x^i,Sx^i)+Sτ=0. where S(x^i,α,τ) is referred to as Hamilton's principal function. Because the Hamiltonian is time independent, it necessarily takes the form S(x^i,α,τ)=W(x^i,α)-ατ where W(x^i,α) is known as Hamilton's characteristic function. Finally, we notice that the conjugate momenta are given by the partial derivatives of the Hamilton's principal/characteristic function with respect to the coordinates, p_i=Sx^i=Wx^i. In many practical cases, this formulation is very powerful to separate the equations of motion in a set of independent ODEs <cit.>. Let us go back to geodesic motion in Kerr spacetime. Denoting[The notation W^(0) will become meaningful in Chapter <ref>, when we will study the Hamilton-Jacobi equation for spinning test bodies as a perturbation of the geodesic Hamilton-Jacobi problem.] W^(0)≜ W/μ and noticing that α=-μ/2 in our case, the Hamilton-Jacobi equation reduces to g^μνW^(0)_,μW^(0)_,ν+1=0. We assume that it admits a separated solution of the form W^(0)=-E_0 t+L_0φ+w_0r(r)+w_0z(z), where E_0 and L_0 are (for now arbitrary) constants and w_0r(r), w_0z(z) functions that remain to be determined. Notice that we work here with z≜cosθ instead of the Boyer-Lindquist polar angle θ itself, because it allows to save a lot of computation time for computer-assisted symbolic checks of our equations (no trigonometric functions being involved). Moreover, Eq. 
(<ref>) implies that p̂_t =W^(0)t=-E_0, p̂_φ=W^(0)φ=L_0 and comparing these results with the discussion of Section <ref>, we directly see that E_0 and L_0 are the quantities conserved along geodesics associated to Kerr's timelike and axial Killing vectors, thus explaining the notation chosen. Using the explicit expression of the inverse Kerr metric (<ref>), the Hamilton-Jacobi equation reads 1-E_0^2+L_0^2/(r^2+a^2)(1-z^2)-2Mr[E_0(r^2+a^2)-aL_0]^2/Δ(r)Σ(r,z)(r^2+a^2) +Δ(r)/Σ(r,z)(w'_0r)^2+1-z^2/Σ(r,z)(w'_0z)^2=0, where ' indicates a derivative of a single-variable function with respect to its argument. A bunch of algebraic manipulations allows to show that this equation can be separated as (1-E_0^2)r^2+L_0^2a^2/r^2+a^2-2Mr[E_0(r^2+a^2)-aL_0]^2/Δ(r)(r^2+a^2)+Δ(r)(w'_0r)^2 =-[(1-E_0)^2a^2z^2+L_0^2/1-z^2+(1-z^2)(w'_0z)^2]. Since this equation is separated, the two sides of the equality shall be equal to a constant, that we will denote -Q_0-L_0^2. Another bunch of straightforward algebra then allows to turn the Hamilton-Jacobi equation in the two following ODEs: (1-z^2)(w'_0z)^2 =Q_0-z^2[a^2(1-E_0^2)+L_0^2/1-z^2], Δ^2(r)(w'_0r)^2 =-(K_0+r^2)Δ(r)+P_0^2(r). Remind that we are using the shortcut notations K_0≜ Q_0+(L_0-aE_0)^2, P_0(r)≜ E_0(r^2+a^2)-aL_0. Solutions for the first order derivatives of the action with respect to r and z can now be written as w'_0r(r) =±_r√(R(r))/Δ(r), w'_0z(z)=±_θ√(Θ(z^2))/1-z^2 with the potentials R(r) and Θ(z^2) defined in Eqs. (<ref>) and (<ref>). Moreover, since w'_0z=p̂_θ, the constant Q_0 takes the value Q_0=p̂_θ^2+cos^2θ[a^2(1-E_0^2)+(L_0/sinθ)^2], which is in agreement with the value of Carter constant found in the previous chapter. However, Q_0 appears here as the separation constant of the geodesic Hamilton-Jacobi equation in Kerr spacetime, as in the original derivation of Carter <cit.>. The final step is to recover geodesic equations (<ref>) from our separated solution to the Hamilton-Jacobi equation. Using Eq. (<ref>), we notice that x^μτ̃=g^μν W^(0)_,ν. This yields tτ̃ =-g^ttE_0 + g^tφL_0, rτ̃=g^rrw'_0r(r), zτ̃ =g^zz w'_0z(z), φτ̃=- g^φ tE_0+g^φφL_0. Replacing the derivatives of w_0r,z by the solutions Eq. (<ref>) and using the components of the inverse metric Eq. (<ref>), we find that these equations are actually the geodesic equations in the form (<ref>), as anticipated. § ACTION-ANGLE FORMULATION We now turn to the action-angle formulation of the geodesic motion. The philosophy is very simple: we will switch to a new set of variables (x^μ,p_μ)→(q^μ,J_μ) in which the geodesic (that is, Hamilton) equation take the simple form q^μτ=Ω^μ(J_α), J_μτ=0, which thus make explicit the periodicity of the system with respect to some angle variables q^μ. Here, the action variables J_μ are constants, and the fundamental frequencies given by Ω^μ only depend upon them. Moreover, as we will see, this machinery comes with a prescription for computation the fundamental frequencies from the Hamiltonian. The main advantage of this formulation relies on the fact that it allows – for radially bounded geodesics – to express any dynamical quantity f(τ) in a Fourier series <cit.> f(τ) =∑_ k∈ℤ^3f_ke^iΩ_kτ=∑_ k∈ℤ^3f_ke^i k·Ω, with k·Ω=Ω_kτ=k_rΩ_r+k_θΩ_θ+k_φΩ_φ. The dynamics is thus entirely specified by a discrete number of Fourier coefficients, q_k≜1/(2π)^3∫_[0,2π]^3[3]Ω f( k·Ω/Ω_k)e^i k·Ω. This is this very fact that makes action-angle formalism to lies at the heart of the two timescale expansion scheme for describing self-forced motion around a Kerr black hole <cit.>. 
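To make this discrete Fourier representation concrete, the toy sketch below uses two made-up, incommensurate frequencies standing in for Ω_r and Ω_θ and an invented sparse spectrum; it recovers the coefficients f_k by the long-time average (1/T)∫_0^T f(τ) e^{-i k·Ω τ} dτ, which is also how one would extract them from a numerically integrated bound orbit. None of the numbers below come from the thesis.

```python
import numpy as np

# toy fundamental frequencies (stand-ins for Omega_r and Omega_theta; incommensurate)
Om = np.array([0.171, 0.243 * np.sqrt(2.0)])

# a "dynamical quantity" with a known, sparse spectrum of Fourier coefficients f_k
true_fk = {(0, 0): 1.0, (1, 0): 0.30, (0, 1): 0.20 - 0.10j, (1, -1): 0.05}
tau = np.linspace(0.0, 2.0e4, 400001)
f = sum(c * np.exp(1j * (k[0] * Om[0] + k[1] * Om[1]) * tau) for k, c in true_fk.items())

# recover each coefficient through the long-time average (1/T) int f(tau) exp(-i k.Omega tau) dtau
for k in [(0, 0), (1, 0), (0, 1), (1, -1), (2, 1)]:
    fk = np.mean(f * np.exp(-1j * (k[0] * Om[0] + k[1] * Om[1]) * tau))
    print(k, np.round(fk, 3))   # matches true_fk; the absent mode (2, 1) averages to ~0
```

Modes absent from the spectrum average to zero, so the dynamics is indeed encoded in a discrete set of coefficients, one per integer vector k.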
Even if it appears conceptually simple, the rigorous derivation of action-angle variables for geodesic motion in Kerr spacetime is quite technical, the main difficulty arising from the fact that the level sets foliating the phase space are non-compact in the time direction, thus forbidding to use Liouville-Arnold theorem directly. We will proceed in three steps: (i) introduce the ideas behind action-angle formulation on the simplest example possible, a N=1 Hamiltonian system; (ii) discuss the generalization of Liouville-Arnold theorem needed for dealing with non-compact level sets and finally (iii) apply this formalism to obtain action variables and fundamental frequencies in Boyer-Lindquist coordinates. All along this section, we will remain very pictorial and skip most of the lengthy computations as well as too subtle technical details, our aim being to give a pedagogical overview of the subject and to obtain the explicit expressions of the fundamental frequencies. The interested reader may refer to <cit.> for more detailed developments. §.§ A first glance at action-angle variables Let us expose the philosophy of action-angle formalism on the simplest example possible. Consider a one dimensional conservative Hamiltonian system described by the conjugated variables (x,p), which is periodic in x (either librating or rotating). Turning to Hamilton-Jacobi description, we notice that the associated Hamilton principal function possesses the right structure to be the generating function of a type II canonical transformation (x,p)→(X,α), α being the constant value of the Hamiltonian. One has the usual relations for this kind of transformation: p=W(x,α)x_α=cst, X=W(x,α)α_x=cst. Going from the original variables (x,p) to the action-angle ones (q,J) is achieved through a sequence of two canonical transformations, (x,p)→(X,α)→(q,J). The first one is the type II canonical transformation mentioned above, whereas the second transformation is performed by introducing a new variable J (the action variable) to replace α as the new (constant) momentum. This variable is defined as J≡1/2π∮ p x. Here, ∮ designs the integration over a complete period of the motion. One has α=H=H(J), W=W(x,J). The angle variable q is then defined as being the generalized coordinate conjugated to J: q≡WJ_x=cst. The main gain obtained using action-angle formalism is that Hamilton's equations take the simple form qt=Ω(J), Jt=0, where Ω(J)≡H(J)J. They are trivially solved by q(t)=Ω(J) t+β, J(t)=constant, with β being an arbitrary integration constant. Moreover, the frequencies associated with the periodic motion appear explicitly. Namely, the change in q over a whole period Δτ of the motion is Δ q≡∮qx x=∮[2]Wq J q=J∮Wx x=J∮ p x=2π. It is then straightforward to notice that Ω=2π/Δτ. Consequently, Ω can be interpreted as the fundamental frequency of the periodic motion, and Eq. (<ref>) gives a prescription for computing the system's fundamental frequency from the Hamiltonian expressed in terms of the action variable J. §.§ Generalized action-angle variables for non-compact level sets Our subsequent goal is to generalize this analysis up to a level enabling the treatment of Kerr geodesic motion. The relevant generalization will take the form of a generalized version of the Liouville-Arnold theorem, discussed in Section <ref>. 
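Before recalling the theorem, the elementary prescription Ω = ∂H/∂J of the previous subsection can be checked on a concrete example. The sketch below uses an illustrative quartic oscillator H = p²/2 + x⁴/4 in arbitrary units (not a system appearing in this thesis): it computes J(E) by quadrature, estimates dH/dJ by finite differences, and compares it with the frequency obtained directly from the period.

```python
import numpy as np
from scipy.integrate import quad

# illustrative 1D system: H = p^2/2 + x^4/4, with turning point x_t = (4E)^(1/4)
def J(E):
    """Action J(E) = (1/2 pi) times the closed-path integral of p dx, via x = x_t sin(phi)."""
    xt = (4.0 * E) ** 0.25
    integrand = lambda ph: xt * np.sqrt(2.0 * E) * np.cos(ph)**2 * np.sqrt(1.0 + np.sin(ph)**2)
    return 2.0 / np.pi * quad(integrand, 0.0, np.pi / 2)[0]

def period(E):
    """Period of the motion, the closed-path integral of dx / p, with the same substitution."""
    xt = (4.0 * E) ** 0.25
    integrand = lambda ph: xt / (np.sqrt(2.0 * E) * np.sqrt(1.0 + np.sin(ph)**2))
    return 4.0 * quad(integrand, 0.0, np.pi / 2)[0]

E, dE = 1.0, 1e-4
Omega_from_action = dE / (J(E + dE / 2) - J(E - dE / 2))   # Omega = dH/dJ = (dJ/dE)^(-1)
Omega_direct      = 2.0 * np.pi / period(E)
print(Omega_from_action, Omega_direct)                      # the two estimates coincide
```

The two estimates agree to the accuracy of the quadrature and of the finite difference, as expected.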
Recall that Liouville-Arnold theorem states that, for any system completely integrable in the neighbourhood of some compact and simply connected level set, one can define symplectic coordinates exhibiting the same features than the action-angle variables introduced above. Moreover, a prescription for defining them explicitly in a coordinate-invariant way can be formulated. However, this result cannot be applied directly to Kerr geodesic motion, because, in this case, the phase space is not bounded in the time direction, leading to the non-compactness of the level sets. Nevertheless, a generalization of the Arnold-Liouville result to systems possessing non-compact level-sets has been provided by Fiorani, Giachetta and Sardanashvily in 2003 <cit.>. Formally, the statement is the following: Let be an Hamiltonian system possessing N degrees of freedom and completely integrable in 𝒰⊃ℳ_𝐩 for which the vector fields v^a_α≡Ω^ab∇_bP_α are complete on 𝒰 and such that the level sets ℳ_𝐩 foliating 𝒰 are all diffeomorphic one to another. Then: * ∃ k∈ℕ, with 0≤ k≤ N, such that ℳ_𝐩 is diffeomorphic to T^k×ℝ^N-k (with T^k the k-torus). Moreover, there exists some open set 𝒱⊃ℳ_𝐩 diffeomorphic to T^k×ℝ^N-k×ℬ, with ℬ being an open ball. * There exist symplectic coordinates (q^α,J_α) (α=1,…,N) whose first k variables q_α are 2π-periodic (q_α+2π≡ q_α) and for which the first integrals P_α can be expressed as functions of the J_α only: P_α=P_α(J_1,…,J_N). The coordinates (q^α,J_α) are called generalized action-angle variables. This theorem comes with a prescription for computing explicitly the action variables J_α. Because the symplectic form is closed, there exists, in some neighbourhood of ℳ_𝐩, a one-form Θ (the symplectic potential) defined through Θ=Ω. The set of inequivalent closed paths in the level set ℳ_𝐩 are given by a set of generators γ_1,…,γ_k of the fundamental homotopy group of ℳ_𝐩, Π_1(ℳ_𝐩)≃(ℤ_k,+_ k). One then defines the generalized action variables as J_α≡1/2π∮_γ_αΘ, α=1,…,k. This definition turns out to be independent of the choice of the symplectic potential and of the generators. Moreover, this prescription is unique up to the redefinition of the origin of the angle variables q_α→ q_α+Z(J_β)J_α, J_α→ J_α with Z an arbitrary function of the action variables, and up to transformations of the form q_α→ A_αβq_α, J_α→ B_αβJ_β where A_αβ, B_αβ are constant real matrices with A_αβB_αγ=δ_βγ such that the J_α's are left invariant for α=1,…,k. In this generalized action-angle formulation, Hamilton's equations take the simple form q_αt=Ω_α(𝐉), J_αt=0 with Ω_α(𝐉)≡H(𝐉)J_α. Their solutions read q_α(t)=Ω_α(𝐉_0)t+q_α 0, J_α(t)=J_α0. For α=1,…,k, the functions Ω_α(𝐉) are interpreted as the angular frequencies of the motion. §.§ Action-angle formulation of the geodesic motion in Kerr Given the results of the previous sections of this chapter, the generalized Arnold-Liouville theorem with k=3 and N=4 applies to radially bounded Kerr geodesic motion. The generators of Π_1(ℳ_𝐩) can be constructed directly from the periodic motion x^i(τ)=(r,θ,φ) in Boyer-Lindquist coordinates <cit.>. The ambiguity (<ref>) in the definition of the action variables takes the form J_t→γ J_t+v^iJ_i, J_i→ J_i. Here, γ∈ℝ and v^i∈ℝ^3 are arbitrary constants. It can be resolved by setting J_t≡1/2π∮_γ_tΘ, with γ_t being a 2π-lenght integral curve of the extension to ℳ of the timelike Killing vector field ∂_t <cit.>. As shown in <cit.>, this definition is independent of the curve γ_t chosen. 
This scheme can be shown to realize a coordinate-invariant definition of generalized action-angle variables (q^t,q^r,q^θ,q^φ,J_t,J_r,J_θ,J_φ) for bounded geodesic motion in Kerr spacetime. This definition is unique up to the (trivial) residual ambiguity q^α→ q^α+Z(𝐉)J_α, J_α→ J_α. We will now explicit the action variables J_α and the frequencies Ω_α in Boyer-Lindquist coordinates. A detailed computation can be found in <cit.> and is reviewed in <cit.>. Using the symplectic potential Θ=p_μ x^μ, the action variables can be written in terms of the first integrals P_α=(H,E_0,L_0,Q_0) as J_t =1/2π∫_0^2πp_t t=-μ E_0, J_r=1/2π∮√(R(r))/Δ(r) r J_θ =1/2π∮√(Θ(cos^2θ))θ, J_ϕ=1/2π∮ p_ϕϕ=μ L_0. Expressing the frequencies is slightly more cumbersome. One defines the integrals W ≜∫_r_1^r_2r^2E_0(r^2+a^2)-2Mra(L_0-aE_0)/Δ(r)√(R(r)) r, X≜∫_r_1^r_2 r/√(R(r)), Y ≜∫_r_1^r_2r^2/√(R(r)) r, Z≜∫_r_1^r_2r[L_0r-2M(L_0-aE_0)]/Δ(r)√(R(r)) r with r_1,2 the two turning points of the radial motion (i.e. the two largest roots of R(r)). One also denotes z_± the roots of Θ(z) (z_-<z_+) with z≜cos^2θ. Defining β^2≜ a^2(1-E_0^2) and k≜√(z_-/z_+), one has Ω_t =K(k)W+a^2z_+E_0[K(k)-E(k)]X/K(k)Y+a^2z_+[K(k)-E(k)]X, Ω_r =πK(k)/K(k)Y+a^2z_+[K(k)-E(k)]X, Ω_θ =πβ√(z_+)X/2/K(k)Y+a^2z_+[K(k)-E(k)]X, Ω_ϕ =K(k)Z+L_0[Π(z_-,k)-K(k)]X/K(k)Y+a^2z_+[K(k)-E(k)]X. Fundamental frequencies of Kerr geodesic motion Here K, E and Π are the standard Legendre forms of the complete elliptic integrals. We use the conventions of <cit.>, which are reviewed in Appendix <ref>. The most important take home message is that the action variables J_α are only functions of the first integrals of motion P_α, J_α=J_α(P_β) (see Eq. (<ref>) for the explicit expressions). In terms of these generalized action-angle variables, the geodesic equations take the simple form J_ατ = 0, q_ατ = ω_α(J_β). These equations can be easily integrated to obtain J_α(τ) = J_α(0), q_α(τ) = q_α(0) + ω_α(J_β)τ . The constancy of the action variables is a direct consequence of the conservation of the first integrals of the geodesic motion in Kerr spacetime. CHAPTER: CLASSIFICATION OF KERR GEODESICS This Chapter will be devoted to the classification of Kerr polar motion and (near-)NHEK radial motion, relying on the equations of motion obtained in Chapter <ref>. As noticed before, radial and polar geodesic equations in these three spacetimes are decoupled one from another, and can thus be studied independently. Moreover, they both take the form yλ=±_y√(V(y)), with V(y) some polynomial effective potential. Consequently, the study of the generic features of the motion (existence and location of turning points…) reduces mainly to the study of the roots and the sign of the potential V(y) for all the possible values of the parameters upon which it depends. We will also provide explicit solutions to the geodesic equations for each possible type of motion, thus verifying in a concrete way one of the main consequences of complete integrability of the system. In this chapter, we will study Kerr polar motion in full generality, but leave the radial motion aside. Its study is actually much more involved, since the associated potential is of degree four, whereas it is only of degree two for the polar motion. Instead, we will analyze in details the radial geodesic motion in both NHEK and near-NHEK spacetimes. 
This analysis turns out to be relevant for at least three reasons: (i) it will provide us with a deep comprehension of the peculiarities of the motion near highly spinning black holes, and in particular of the behaviour of their spherical geodesics. (ii) As we will see, any geodesic can be obtained by applying a well-chosen conformal transformation to a spherical geodesic. This enables to use the computational scheme extensively applied in <cit.> to generate the waveforms emitted by objects moving along geodesics in the near-horizon region. Such an analysis can allow to unravel “smoking gun” signatures for the existence of highly spinning black holes in Nature. Finally, (iii) this radial classification was also the work which paved the road to the one of generic Kerr radial motion, which was completed by G. Compère, Y. Liu and J. Long <cit.>. All along this chapter, we will adopt a different notation than the one used in all the rest of this thesis for Kerr geodesic conserved quantities. The main reason for this switch is to match with the conventions used in <cit.>. We also take the opportunity to restore the dependence of the equations upon μ. This enables our classification scheme to hold for null geodesics as well as for timelike ones. The temporary convention is given by the following dictionary t̂→ t, ,r̂→ r ,φ̂→ϕ, E_0→Ê/μ, L_0→ℓ/μ, L_*→ℓ_*/μ, L_∘→ℓ_∘/μ, Q_0→Q/μ^2. § CLASSIFICATION OF POLAR MOTION This section aims to describe in full detail the classification of the polar geodesic motion around a Kerr black hole. The classification scheme used has been first introduced in <cit.>, and subsequently completed in <cit.>. The taxonomy is depicted in Figs. <ref> and <ref>. The phenomenology of the polar behavior is principally governed by the sign of Q. The details of the classification are summarized in Tables <ref> and <ref>. Let us recall that, due to mirror symmetry between the two hemispheres, the natural variable to describe the polar motion is z≜cos^2θ. Let us define ϵ _0(Ê,μ) ≜ a^2 (Ê^2 - μ^2), and z_±≜Δ_θ±sign ϵ _0√(Δ_θ^2+Q/ϵ _0), Δ_θ≜1/2(1-Q+ℓ^2/ϵ _0), z_0≜Q/Q+ℓ^2. The classification is determined by the roots of the polar potential (<ref>). Assuming a ≠ 0, we can rewrite it as Θ(z)=-ℓ^2 z+(Q+ ϵ _0 z)(1-z)= {[ ϵ _0(z_+-z)(z-z_-), ϵ _0≠ 0 ;; (Q+ℓ^2)(z_0-z), ϵ _0=0. ]. Our definition of the roots z_± implies the ordering z_-<z_+ (and respectively z_+<z_-) for ϵ _0>0 (respectively ϵ _0<0). This is a convenient convention because in both cases the maximal angle will be related to z_+. The positivity of the polar potential implies that the poles z=1 (θ=0,π) can only be reached if ℓ=0. Note that when a geodesic crosses a pole, its φ coordinates discontinuously jump by π. The invariance of the polar geodesic equation under (Ê,ℓ)→(-Ê,-ℓ) allows us to reduce the analysis to prograde ℓ≥ 0 orbits. We distinguish the orbits with angular momentum ℓ≠ 0 and without, ℓ = 0: I. Nonvanishing angular momentum ℓ≠0. We must consider the following cases: * -(ℓ-aÊ)^2≤ Q<0 can only occur if ϵ _0>0, otherwise leading to Θ<0. For ϵ _0>0, the motion is vortical; i.e., it takes place only in one of the two hemispheres without crossing the equatorial plane and is bounded by 0<z_-≤ z≤ z_+<1. This vortical motion can only occur provided ℓ^2≤(√(ϵ _0)-√(-Q))^2. * Q>0 leads to motion crossing the equator and symmetric with respect to it, bounded by 0 ≤ z ≤ z_+<1 ( ϵ _0≠0), 0 ≤ z ≤ z_0<1 ( ϵ _0=0). We will refer to such a motion as pendular; * Q=0 allows us to write Θ(z)= ϵ _0 z (1-ℓ^2/ϵ _0-z). 
If ϵ _0≤ 0, the positivity of the polar potential enforces the motion to be equatorial. For ϵ _0≥ 0, equatorial motion exists at z=0. For ϵ _0≥ 0 and ℓ^2 ≤ϵ _0, another motion exists bounded by 0 < z ≤ 1-ℓ^2/ϵ _0<1, which is a marginal case separating the pendular and vortical regimes; the motion then admits only one turning point and asymptotes to the equator both at future and at past times. Since we could not find a terminology for such a motion in the literature, we propose to call it equator-attractive[This neologism accurately reflects the fact that the motion is polar and that the equator is an attractor. The terminology “homoclinic” is already used in the literature to refer to radial motion.]. In the special case where z=0 at the initial time, the motion remains z=0 at all times: it is equatorial. II. Vanishing angular momentum ℓ=0. The polar potential reduces to Θ(z)={[ ϵ _0(Q/ϵ _0+z)(1-z), ϵ _0≠ 0; Q(1-z), ϵ _0=0. ]. We distinguish the following cases: * ϵ _0=0 leads to motion over the whole polar range 0≤ z ≤ 1 for Q>0; we called it polar motion. The only turning point is located at z=1. For Q=0, the potential vanishes identically and the polar angle remains constant; we call it azimuthal motion; for Q<0 the potential is positive only if the motion takes place along the black hole axis z=1; we call it axial motion. * ϵ _0>0 leads to a polar motion 0≤ z≤1 for Q>0. For Q=0 and z=0, the motion is equatorial. For Q=0 and z≠0, z=0 is an asymptotic attractor of the motion which only takes place in one of the hemispheres. It is therefore a special case of equator-attractive motion where the turning point is at the pole z=1. For Q<0, the motion is either vortical (0<-Q/ϵ _0≤ z ≤ 1) for - ϵ _0<Q<0 or axial with z=1 for Q≤- ϵ _0<0. * ϵ _0<0 leads to a polar motion 0≤ z≤1 for Q≥- ϵ _0>0 and to a pendular one (0≤ z ≤ -Q/ϵ _0<1) for 0<Q<- ϵ _0. For Q=0, the motion is either equatorial or axial for the potential to be positive. For Q<0, the motion also has to take place along the axis. Let us finally notice that, for any value of ϵ _0 and Q≥ -(a E_0 )^2, an axial motion is always possible. a §.§ Solution to the polar integrals After having classified the different types of motion allowed, we will provide manifestly real and positive explicit solutions in terms of elliptic integrals for each type of polar motion with ℓ≠ 0, in line with the recent analysis <cit.>. All such integrals will turn out to agree with Ref. <cit.>, but our presentation will be slightly simpler. The solution to the polar integrals (<ref>) and (<ref>) can be organized in terms of the categories of polar motion with ℓ≠ 0: 2-4 1c| Vortical Equator-attractive Pendular 0pt13pt ϵ_0<0 ∅ ∅ Pendular(Ê,Q) 0pt13pt ϵ _0=0 ∅ ∅ Pendular_*(Q) 0pt13pt ϵ _0>0 Vortical(Ê,Q) Equator-attractive(Ê) Pendular(Ê,Q) Each type of motion yields to a specific decomposition of the line integrals in terms of basic integrals. In order to simplify the notations, we drop the “f" indices labeling the final event and define h ≡sign cosθ, θ_a ≜arccos√(z_a) (a=+,-,0), as well as the initial and final signs η_i, η: η_i ≜ - s_θ^i sign cosθ_i , η≜ - (-1)^m s_θ^i sign cosθ. We are now ready to perform the explicit decomposition: * Pendular motion. We have 0< z_+ ≤ 1, and θ therefore belongs to the interval θ_+ ≤θ≤π - θ_+. The polar integral can be written (see Ref. <cit.>) _cosθ_i^cosθ = 2 m | ∫_0^cosθ_+| -η| ∫_0^cosθ|+η_i | ∫_0^cosθ_i|, ϵ _0≠ 0, _cosθ_i^cosθ = 2 m | ∫_0^cosθ_0| -η| ∫_0^cosθ|+η_i | ∫_0^cosθ_i|, ϵ _0= 0. 
It is useful to note that our definitions of the roots imply ϵ _0 z_-<0, ϵ _0(z-z_-)>0, z_+/z_-≤ 1. * Vortical motion. We have ϵ _0>0 and 0 < z_-≤cos^2θ≤ z_+ < 1. The motion therefore never reaches the equator. The sign of cosθ is constant and determines whether the motion takes place in the northern or the southern hemisphere. Without loss of genericity, let us focus on the northern hemisphere: 0 ≤θ_+≤θ≤θ_- < π/2; we denote again as m the number of turning points at Mino time λ. The polar integral can be written (see Ref. <cit.> and Appendix A of Ref. <cit.>): _cosθ_i^cosθ = ( m -η_i 1-(-1)^m/2) | ∫_cosθ_-^cosθ_+| -η| ∫_cosθ_-^cosθ|+ η_i | ∫_cosθ_-^cosθ_i| . * Equator-attractive motion. This is a limit case of the vortical motion reached in the limit z_-→ 0, z_+→ 2Δ_θ. As detailed in Ref. <cit.>, the turning point z_-=0 corresponds to a non-integrable singularity of the polar integrals and the motion exhibits consequently at most one turning point at z_+=2Δ_θ, leading to the line-integral decomposition _cosθ_i^cosθ = η∫_cosθ_+^cosθ-η_i∫_cosθ_+^cosθ_i. In all cases but the equator-attractive case, the polar motion is periodic. Denoting by Λ_θ its period, one can easily give an explicit formula for the number of turning points m as a function of the Mino time: m(λ)={[ 0pt6pt⌊2/Λ_θ(λ-λ_i^θ)+1/2⌋, Q>0; 0pt16pt⌊2/Λ_θ(λ-λ_i^θ)⌋ + ⌊2/Λ_θ(λ_i^θ-λ_i)⌋ + 3-s^i_θ/2, Q<0 ]. with λ_i^θ≜λ_i-s^i_θ∫_0^cosθ_icosθ/√(Θ(cos^2 θ)) and where the floor function is defined as ⌊ x ⌋≜maxn∈ℤ|n≤ x. For the equator-attractive case, one has simply m(λ)=θ(λ-λ_i^θ) where θ is here the Heaviside step function. The integrals introduced above are solved explicitly in Appendix <ref>. For each case, the corresponding solutions are detailed below and schematically depicted in Fig. <ref>. Pendular(Ê,Q) motion. The motion exhibits a positive Carter constant Q and can occur for any ϵ _0≠ 0; our definition of the roots z_± allows us to treat simultaneously the two cases ϵ _0<0 and ϵ _0>0, which is a simplification with respect to the analysis carried out in Ref. <cit.>. The period of the polar motion (comprising two turning points) in Mino time is given by Λ_θ = 4 ∫_0^cosθ_+dcosθ/√(Θ(cos^2θ))≜ 4 I^(0)(√(z_+)) =4/√(- ϵ _0z_-)K(z_+/z_-). Using the basic integrals of Appendix <ref>, one can write (<ref>) as λ-λ_i =1/√(- ϵ _0z_-)[2mK(z_+/z_-)+s^i_θ(-1)^mF(Ψ^+(cosθ),z_+/z_-).  .-s_θ^iF(Ψ^+(cosθ_i),z_+/z_-)] where we define Ψ^+(x)≜arcsin(x/√(z_+)). Using (<ref>), one can invert (<ref>) as cosθ=s^i_θ(-1)^m√(z_+)sn(√(- ϵ _0 z_-)(λ-λ_i^θ)-2mK(z_+/z_-),z_+/z_-) where we introduce λ_i^θ ≜λ_i-s^i_θ/√(- ϵ _0z_-)F(Ψ^+(cosθ_i),z_+/z_-). This expression matches with Eq. (38) of Ref. <cit.>. Using the periodicity property (<ref>) of the elliptic sine, we can further simplify it to cosθ(λ)=s^i_θ√(z_+)sn( √(- ϵ _0 z_-)(λ-λ_i^θ), z_+/z_-). It consistently obeys cosθ(λ_i)=cosθ_i and sign cosθ'(λ_i)=s_θ^i. This formula agrees with (53) of Ref. <cit.> but it is written in a simpler form. We also obtain T_θ =-2z_+/√(- ϵ _0 z_-)[ 2 m E' (z_+/z_-) +( ±_θ) E'(Ψ^+(cosθ),z_+/z_-). . - s_θ^i E'(Ψ^+(cosθ_i),z_+/z_- )], Φ_θ = 1/√(- ϵ _0 z_-)[ 2 m Π(z_+,z_+/z_-) +( ±_θ) Π(z_+,Ψ^+(cosθ),z_+/z_-).  .- s_θ^i Π(z_+,Ψ^+(cosθ_i) ,z_+/z_-)]-(λ-λ_i). where λ-λ_i is given by (<ref>). All quantities involved are manifestly real. These final expressions agree with Ref. <cit.>. Pendular_∘(Q) motion. We now consider the critical case |Ê| = μ. The period of the polar motion is Λ_θ=4 I^(0)(√(z_0))=2π√(z_0/Q). 
In this critical case, (<ref>) leads to λ-λ_i=√(z_0/Q)[mπ+s^i_θ(-1)^marcsincosθ/√(z_0)-s^i_θarcsincosθ_i/√(z_0)], which can be simply inverted as cosθ=s_θ^i√(z_0) sin(√(Q/z_0)(λ-λ_i^θ)), λ_i^θ≜λ_i-√(z_0/Q)arcsincosθ_i/√(z_0). The other polar integrals are T_θ =1/2z_0(λ-λ_i)-√(z_0/Q)[(±_θ)cosθ√(z_0-cos^2θ)-s^i_θcosθ_i√(z_0-cos^2θ_i)], Φ_θ =√(z_0/Q(1-z_0))[mπ+(±_θ)arcsin(√(1-z_0/z_0)θ) -s^i_θarcsin(√(1-z_0/z_0)θ_i)]-(λ-λ_i). Vortical(Ê,Q) motion. The period in Mino time is given by Λ_θ = 2 | ∫_cosθ_-^cosθ_+dcosθ/√(Θ(cos^2θ))|=2/√(ϵ _0 z_+) K(1-z_-/z_+). Using the basic integrals of Appendix <ref>, one has λ-λ_i =1/√(ϵ _0z_+)[(m-h s^i_θ1-(-1)^m/2)K(m̃)-s^i_θ(-1)^mF(Ψ^-(cosθ),m̃).  .+s^i_θ F(Ψ^-(cosθ_i),m̃)] where m̃≜ 1-z_-/ z_+, Ψ^-(x)=arcsin√(z_+-x^2/z_+-z_-). Using the inversion formula (<ref>) and the periodicity property (<ref>), we obtain cosθ=h√(z_+)dn(√(ϵ _0z_+)(λ-λ_θ^i),m̃) with λ_i^θ≜λ_i+s^i_θ h/√(ϵ _0z_+)F(Ψ^-(cosθ_i),m̃). Again, one has cosθ(λ_i)=cosθ_i and sign cosθ'(λ_i)=s_θ^i. The two other polar integrals are T_θ =√(z_+/ϵ _0)[(m-h s^i_θ1-(-1)^m/2)E(m̃)-(±_θ)E(Ψ^-(cosθ),m̃).  .+s^i_θ E(Ψ^-(cosθ_i),m̃)], Φ_θ =1/(1-z_+)√(ϵ _0z_+)[(m-h s^i_θ1-(-1)^m/2)Π(z_–z_+/1-z_+,m̃).  .-(±_θ)Π(z_–z_+/1-z_+,Ψ^-(cosθ),m̃)+s^i_θΠ(z_–z_+/1-z_+,Ψ^-(cosθ_i),m̃)]  -(λ-λ_i) in agreement with the results of Ref. <cit.>. Equator-attractive(Ê) motion. This is the only polar motion which is not periodic. One has λ-λ_i=h/√(ϵ _0z_+)[-(±_θ) arctanh√(1-cos^2θ/z_+)+s^i_θ arctanh√(1-cos^2θ_i/z_+)] leading to cosθ =h√(z_+) sech(√(ϵ _0z_+)(λ-λ_i^θ)), λ_i^θ ≜λ_i+s^i_θ h/√(ϵ _0z_+)arctanh√(1-cos^2θ_i/z_+). The polar integrals are T_θ =h/√(ϵ _0)[-(±_θ)√(z_+-cos^2θ)+s^i_θ√(z_+-cos^2θ_i)], Φ_θ =h/√(ϵ _0(1-z_+))[-(±_θ)arctan√(z_+-cos^2θ/1-z_+) +s^i_θarctan√(z_+-cos^2θ_i/1-z_+)]. This agrees with the results of Ref. <cit.>. § CLASSIFICATION OF NEAR-HORIZON MOTION FOR HIGH SPIN KERR BLACK HOLES In this section, we derive a complete classification of timelike and null geodesic trajectories lying in the near-horizon region of a quasi extremal Kerr black hole. We will provide explicit manifestly real analytic expressions for all geodesic trajectories. We will present the classification in terms of the geodesic energy, angular momentum, and Carter constant Q. We will also illustrate each radial motion in NHEK with a Penrose diagram. Partial classifications were performed in Refs. <cit.> and <cit.>. In Ref. <cit.>, equatorial timelike prograde incoming (i.e. that originate from the Kerr exterior geometry) geodesics were classified. Such geodesics reach the spatial boundary of the near-horizon region at infinite past proper time and therefore physically reach the asymptotically flat Kerr region once the near-horizon is glued back to the exterior Kerr region. It turns out that bounded geodesics in the near-horizon Kerr region also arise in the study of gravitational waves since they correspond to the end point of the transition motion <cit.>. Timelike outgoing geodesics originating from the white hole horizon and reaching the near-horizon boundary are also relevant for particle emission within the near-horizon region <cit.>. In addition, null outgoing geodesics are relevant for black hole imaging around high-spin black holes <cit.>. The generic non-equatorial geodesics were obtained in Ref. <cit.>. In particular, real forms were obtained for each angular integral involved in geodesic motion. However, zero-measure sets of parameters were discarded. 
These zero-measure sets include in particular the separatrix between bounded and unbounded radial motion which plays a key role in EMRIs. In the following, we do not make any assumption on the geodesic parameters. We will treat both timelike and null geodesics, prograde or retrograde, and with any boundary conditions. Without loss of genericity, we will consider future-directed orbits. Past-directed geodesics can be obtained from future-directed geodesics using the ℤ_2 map: T → -T, Φ→ -Φ, E→ -E, ℓ→ -ℓ, which will play an important role in Sec. <ref>. We will denote it as the ↑↓-flip. §.§ NHEK Future orientation of the geodesic is equivalent to T/λ > 0 or E+ L_0 R > 0. Future-oriented geodesics with L_0 = 0 have E >0. For L_0 ≠ 0, we define the critical radius as in <cit.>: R_c = -E/ L_0 . Future-orientation of the orbit requires R < R_c for L_0 < 0, and R > R_c for L_0 > 0. §.§.§ Polar behavior The results derived in Sec. <ref> in the context of generic Kerr still hold in the near-horizon high-spin limit which is obtained by the scaling limit λ→ 0 taken in the near-horizon coordinates (<ref>). We anticipate that the results also hold in the distinct near-NHEK limit λ→ 0 taken in the near-horizon coordinates (<ref>). Due to the high-spin limit, the following substitution can be made: a ↦ M, Ê↦ℓ/2M, ϵ _0 ↦𝒞_∘≜ℓ^2-ℓ_∘^2/4, Θ(z) ↦ v_θ(z), Φ̂_θ-1/4T̂_θ↦Φ_θ. Notice that the dependence on Ê of ϵ _0 has been changed into a dependence in ℓ, the Kerr energy being the same at zeroth order on λ for all trajectories. Therefore, the quadratic term of the polar potential vanishes at the critical value ℓ_∘ of the angular momentum ℓ. One of the most striking features of the near-horizon polar motion is that Q is non-negative as a consequence of the reality of polar motion, as noticed in Ref. <cit.>: ∀ z∈[0,1] :v_θ(z)≥0⇒ Q≥0. Proof. This property is a consequence of the dependence on Q of 𝒞 defined in (<ref>). Indeed, using the fact that z=cos^2θ∈[0,1] one can write Q = 𝒞 + 3/4ℓ^2 -M^2 μ^2 ≥𝒞 + (1-Λ^-2)ℓ^2 -M^2 μ^2 ≥ v_θ (z) ≥ 0. A direct consequence is that the near-horizon polar motion cannot be vortical and is consequently either equatorial, pendular, polar or axial. We note that the condition ϵ _0≥ℓ^2 is never obeyed in the near-horizon case after using the definition (<ref>), a=M and Ê=ℓ/2M. The equator-attractive class is therefore discarded. The resulting polar classes are listed in Table <ref> and the phase space is represented in Fig.<ref>. §.§.§ Radial behavior The radial behavior for generic inclined orbits can be solved using the equatorial results <cit.> thanks to the following observation: The radial integrals T_R^(i)(R) (i=0,1,2) only depend upon the NHEK energy E and angular momentum ℓ while all the dependence upon the mass μ and Carter constant Q is through ℓ_* = 2/√(3)√(M^2 μ^2 +Q). This simple observation has far-reaching consequences. For any timelike geodesic with Q ≠ 0, one could directly reuse the classification established in Ref. <cit.>, modulo the substitution 2/√(3)M μ→ℓ_* in every expression encountered. Moreover, null geodesics with μ = 0 have Q ≥ 0 from Proposition <ref>. We can therefore reuse the classification established in Ref. <cit.> to classify null geodesics modulo the substitution M μ→ Q in every expression encountered. Overall, all radial integrals can be described in closed form for all cases by keeping the dependence upon ℓ_* or, equivalently, upon the Casimir invariant 𝒞. Since the equatorial taxonomy of Ref. 
<cit.> did not consider bounded orbits and only considered ℓ > 0, we will expand the taxonomy to the generic case. The generic classification can be achieved by studying the roots of v_R and the range of R where v_R ≥ 0. We only consider orbits outside the horizon, R>0. There are three broad categories depending on the angular momentum: the supercritical case |ℓ| > ℓ_* or equivalently 𝒞 < 0, the critical case |ℓ|= ℓ_* or equivalently 𝒞 = 0 and the subcritical case 0 ≤|ℓ| < ℓ_* or 𝒞 > 0. The relative position of the critical radius (<ref>) with respect to the roots of v_R may restrict the allowed classes of future-oriented orbits. As a result of (<ref>), subcritical ℓ^2 <ℓ^2_* orbits have either R_+ < R_c for ℓ < 0 or R_c < 0 for ℓ > 0, and all orbits are future oriented. Critical orbits ℓ^2 =ℓ^2_* have either R_c < 0 for ℓ = ℓ_* or R_c > R_0 for ℓ = -ℓ_*. This restricts the classes of orbits. Supercritical orbits ℓ^2 > ℓ^2_* with E,ℓ>0 are future directed. Supercritical orbits with E>0, ℓ<0 admit R_- < R_c < R_+, and only bounded orbits with R ≤ R_- are admissible. Finally, supercritical orbits with E<0 and ℓ > 0 obey R ≥ R_+ > R_c and are therefore deflecting. After a simple analysis, we reach the following taxonomy, displayed in Table <ref> and in Fig. <ref>. In comparison with Ref. <cit.>, the classes Outward(E,ℓ), Outward_*(E), Bounded_>(E, ℓ), Bounded^-_*(E), and Bounded_<(E,ℓ) are new, while all other classes with ℓ > 0 appeared in Ref. <cit.>. The class Osculating(E,ℓ) is now better called Deflecting(E,ℓ). The classes with ℓ = ±ℓ_* will be denoted with a subscript _*. The Spherical_* orbit with ℓ = ℓ_* is also the prograde ISSO. For ℓ≥ 0, the conformal diagrams corresponding to those orbits are depicted in Fig. <ref> and their explicit forms are given in Appendix <ref>. Past-oriented geodesics (not depicted) are obtained from a central symmetry around the origin E=ℓ=0 as a result of the ↑↓-flip (<ref>). a §.§ Near-NHEK The only difference between NHEK and near-NHEK geodesic solutions lies in the terms involving the radial coordinate. The proposition stating the equivalence relation between the equatorial and inclined radial parts of the geodesic motion takes the same form as in NHEK: For a given normalization κ, the radial integrals t^(i)_R;κ(R) (i=0,1,2) only depend upon the near-NHEK energy e and angular momentum ℓ while all the dependence upon the mass μ and Carter constant Q is through ℓ_* = 2/√(3)√(M^2 μ^2 +Q). As in NHEK, the radial taxonomy of Ref. <cit.> is easily extended to bounded, outward and/or retrograde orbits by studying the roots and the sign of v_R;κ(R). This leads to the classification displayed in Table <ref> and Figure <ref>. The future-orientation condition (<ref>) implies e> -κℓ for each orbit that reaches the horizon at R=κ. In the case ℓ > ℓ_* and e < 0, the condition e ≤ -κ√(-𝒞) implies e+κℓ≥ 0 and therefore the parabola does not intersect the line. Past-oriented geodesics (not depicted here) are obtained from a central symmetry around the origin e=ℓ=0 as a result of the ↑↓-flip (<ref>). The explicit expressions of all near-NHEK geodesics are listed in Appendix <ref>. Also notice that the energy range of the Deflecting(e,ℓ) class has been corrected in this thesis with respect to the original derivation <cit.>, accordingly to the observation of <cit.>. §.§ High-spin features of geodesic motion Let us now discuss a few generic and universal features of near-horizon geodesic motion holding in the high-spin case. 
§.§.§ Radial motion A first straightforward conclusion one can derive from the analysis of the near-horizon radial geodesic motion is that All radially unbounded NHEK or near-NHEK geodesics are prograde and either critical or supercritical; i.e., they satisfy ℓ≥ℓ_*. This feature of the near-horizon radial motion is directly visible in Figs. <ref> and <ref> and leads to remarkable consequences concerning the polar behavior of such trajectories that we will derive in the following section. The separatrix between bound and unbound motion is clearly visible in Figs. <ref> and <ref>. It consists of the geodesic classes Plunging_*(E) and Outward_*(E) for NHEK and the geodesic classes Plunging_*(e), Outward_*(e), and Bounded_*(e) for near-NHEK that each lie at the critical angular momentum line ℓ = ℓ_*. §.§.§ Polar motion The polar motion of both NHEK and near-NHEK trajectories is bounded in an interval around the equator, θ_min≤θ≤π - θ_min, where cosθ_min=√(z_+) or cosθ_min=√(z_0). The maximal polar angle is determined for ℓ^2 ≠ℓ^2_∘ = 4 M^2 μ^2 as z_+(ℓ ,Q ) =3ℓ^2+4(Q+M^2μ^2)-√(9ℓ^4+16(M^2μ^2-Q)^2+8ℓ^2(3M^2μ^2+5Q))/2(4M^2μ^2-ℓ^2) and for ℓ = ±ℓ_∘ as z_0(Q) = lim_ℓ→± 2 M μ z_+ = Q/Q+4 M^2 μ^2. Remember that Q ≥ 0 by consistency of polar motion. The asymptotic values are lim_[ Q→ 0; ℓ fixed ]z_+(ℓ,Q) =0, lim_[ Q→∞; ℓ fixed ]z_+(ℓ,Q)=1, lim_[ ℓ→ 0; Q fixed ]z_+(ℓ,Q) ={[ Q/M^2μ^2 if Q<M^2μ^2; 1 if Q≥ M^2μ^2 ]. , lim_[ ℓ→∞; Q fixed ]z_+(ℓ,Q) = 0. For fixed ℓ, z_+ is a monotonic function of Q, and reciprocally z_+ is monotonic in ℓ at fixed Q. The pendular oscillation around the equatorial plane will explore a larger range of θ when θ_min is smallest or z_+ closer to 1, which occurs either for small ℓ and Q ≥ M^2 μ^2 or large Q. Now, one can check that for critical or supercritical angular momentum ℓ^2 ≥ℓ^2_*(Q), one has z_+ < 2 √(3)-3 for ℓ≠ℓ_∘(Q) and z_0 < 2 √(3)-3 for ℓ^2 = ℓ^2_∘. The special angle θ_VLS≜arccos√(2√(3)-3)≈ 47^∘ is in fact the velocity-of-light surface in the NHEK geometry (<ref>) (or near-NHEK geometry) defined as the polar angle such that ∂_T is null. It obeys Λ(θ_VLS) = 1. The polar region closer to either the north or south poles admits a timelike Killing vector, namely ∂_T. On the contrary, the polar region around the equator θ∈ ]θ_VLS,π-θ_VLS[ does not admit a timelike Killing vector. The velocity-of-light surface separates these two polar regions. We have therefore proven the following property: All critical or supercritical orbits ℓ^2 ≥ℓ^2_* in (near-)NHEK geometry lie in the polar region θ∈ ]θ_VLS,π-θ_VLS[ where there is no timelike Killing vector. This applies in particular to all spherical orbits. The subcritical orbits ℓ^2 < ℓ^2_* can explore all polar regions of the (near)-NHEK geometry. As a consequence of Propositions <ref> and <ref>, we have All radially unbounded geodesics in (near-)NHEK geometry lie in the polar region θ∈ ]θ_VLS,π-θ_VLS[ bounded by the velocity-of-light surface. In particular, for null geodesics, this feature provides the “NHEKline” in the imaging of light sources around a nearly extreme Kerr black hole <cit.>. In <cit.>, Proposition <ref> was proven for null geodesics. Here, we show that it is a generic property of all timelike geodesics as well. § SPHERICAL GEODESICS The spherical (near-)NHEK geodesics take a distinguished role among all geodesics. 
First, a subclass of spherical geodesics in NHEK and near-NHEK constitute the innermost stable spherical orbits (ISSOs) and the innermost spherical bound orbits (ISBOs) in the high-spin limit, respectively. Our first motivation is to fully characterize the ISSO, in order to generalize the analysis of the inspiral/merger transition performed around the equatorial plane in the high-spin limit <cit.> to inclined orbits. Second, as noticed in Ref. <cit.>, the equatorial NHEK (resp. near-NHEK) orbits are the simplest representatives for each equivalence class of prograde incoming critical (respectively, supercritical) equatorial orbits under (2,ℝ) ×(1) ×ℤ_2 symmetry. We will show in Sec. <ref> that the spherical (near-)NHEK orbits are the simplest representatives for each equivalence class of arbitrary timelike (near-)NHEK geodesics under (2,ℝ) ×(1) × (ℤ_2)^3 symmetry without any restriction. These two reasons justify the comprehensive study of the spherical geodesics. §.§ Innermost stable spherical orbits The ISSOs are defined as the last stable spherical orbits of Kerr. They are defined from the solutions to R(r) =R'(r)= R”(r) = 0 where R is defined in (<ref>). They admit a constant radius r and a fixed Ê and ℓ, which can be obtained as solutions of polynomial equations which we will not give explicitly. There are two branches at positive Ê corresponding to prograde (ℓ≥ 0) and retrograde orbits (ℓ < 0). For the Schwarzschild black hole, the parameters on the two branches of the ISSO are r_ISSO=6M, Ê_ISSO/μ= 2√(2)/3, ℓ_ISSO/μ M = ±√(12 - Q/M^2μ^2), which implies the bound Q ≤ 12 M^2 μ^2. For arbitary spin, the innermost stable circular orbit (ISCO) is defined as the prograde ISSO equatorial orbit, i.e. restricted to Q=0 (θ = π/2). The parameters are <cit.> Ê_ISCO/M μ = 1-2 /r̃_ISCO-ã/r̃_ISCO^3/2/√(1-3/r̃_ISCO-2ã / r̃_ISCO^3/2), ℓ_ISCO/μ M = 2/√(3 r̃_ISCO) (3 √(r̃_ISCO) +2ã), where ã = a/M and r̃_ISCO ≜r_ISCO/M =3+Z_2-√((3-Z_1)(3+Z_1+2Z_2)), Z_1 ≜ 1+(1-ã^2)^1/3[(1+ã)^1/3+(1-ã)^1/3], Z_2 ≜√(3ã^2+(Z_1)^2). §.§.§ Minimal polar angle In the generic case ℓ≠ 0, the polar motion is pendular – i.e., oscillating around the equator in the interval [θ_min,π-θ_min]. The minimal angle as a function of the spin a and ISCO radius r_ISSO can simply be found by solving numerically the three equations (<ref>) that define the ISSO together with the condition that there is a polar turning point, Θ(cosθ_min) = 0 where Θ(cos^2θ) is defined in (<ref>). The resulting minimal angle is displayed in Fig. <ref> for a large range of spins including nearly extremal. This completes a similar plot drawn in Ref. <cit.> for spins far from extremality. We note that for high-spins, the radius asymptotes to r = M and the minimal angle reaches a critical value around 0.42 radians or 65^∘. When the motion reaches regions sufficiently far from the equatorial plane, the ISSO radius increases steeply and leaves the near-horizon region r ≃ M. Another graphical representation of this behavior is shown in Fig. <ref>. We will explain these features in the next section. §.§ The NHEK spherical orbit and the high-spin ISSOs In the high-spin limit a → M, the prograde ISSOs are characterized by the following Boyer-Lindquist energies and angular momentum: Ê_ISSO= 1/√(3)M√(M^2 μ^2 +Q), ℓ_ISSO = + 2M Ê_ISSO and the following Boyer-Lindquist radius: r_ISSO = M + M( Q+M^2 μ^2/-Q+M^2 μ^2/2)^1/3λ^2/3+𝒪(λ^4/3). 
Given the scaling in λ, for the range 0 ≤ Q ≤M^2 μ^2/2, the ISSOs belong to the NHEK geometry and admit the NHEK radius R = R_ISSO≜( Q+M^2 μ^2/-Q+M^2 μ^2/2)^1/3. In particular, the ISCO has the minimal radius R_ISCO=2^1/3. In terms of NHEK quantities, the orbits admit a critical angular momentum and a vanishing NHEK energy, ℓ = ℓ_* ≜2/√(3)√(Q+M^2 μ^2), E = 0. In the high-spin limit, the prograde ISSOs in the range (<ref>) are therefore exactly the Spherical_*(Q) orbits in the classification of Sec. <ref>. The prograde ISSOs outside the range (<ref>) and the retrograde ISSOs do not belong to the near-horizon geometry and will not be described here. In terms of polar behavior, Spherical_*(Q) orbits are instances of Pendular(Q,ℓ_*) motion (except for Q=0, where they are just equatorial orbits). In the range (<ref>), they admit an ϵ _0 as defined in (<ref>) given by ϵ _0 = Q-2 M^2 μ^2/3 < 0, and the angular momentum lies below the value ℓ_∘: ℓ_* ≤√(2) M μ < ℓ_∘. The main property of Pendular(Q,ℓ_*) motion is that the polar angle θ is bounded in an interval around the equator (see (<ref>) and (<ref>)) : θ∈[θ_min,π-θ_min] where cosθ_min=√(z_+)=√(Q/3/4ℓ^2_*+√(9/16ℓ_*^4-ℓ_*^2 Q/2+Q^2)). At fixed Mμ, θ_min(Q) is a monotonic function interpolating between the equator θ=90^∘ at Q=0 and θ_VLS≜arccos√(2√(3)-3)≈ 47^∘ for Q →∞. The special angle θ_VLS is the velocity-of-light surface in the NHEK geometry (<ref>) as described in Sec. <ref>. The ISSO therefore always lies in the region of NHEK spacetime around the equator, where there is no timelike Killing vector. This is depicted in Fig. <ref>. However, since the ISSO admits the range (<ref>) due to its relationship to the asymptotically flat Boyer-Lindquist radius (<ref>), the limiting angle is reached first for Q=M^2 μ^2/2 at arccos√(3-2√(2))≈ 65^∘. This explains the behavior depicted in Fig. <ref>. This result was discovered simultaneously in <cit.>. The limiting angle of the ISSO is given by arcsin√(2(√(2)-1)) = arccos√(3-2√(2))≈ 65^∘. §.§ The near-NHEK spherical orbits and the high-spin IBSOs The innermost bound spherical orbits (IBSOs) are determined by the equations R(r) = R'(r) = 0, Ê = μ. In the high-spin limit λ→ 0, the angular momentum and Boyer-Lindquist radius of the prograde IBSOs are given by ℓ = ℓ_∘(1+λ/√(2)√(1-Q/2M^2 μ^2)+𝒪(λ^2)), r = M (1+ √(2)λ/√(1-Q/2M^2 μ^2)+𝒪(λ^2)) where ℓ_∘≡ 2M μ. In particular, for Q=0 we recover the scaling of the innermost bound circular orbit (IBCO) <cit.>. Given the scaling ∼λ, the prograde IBCOs therefore lie in the near-NHEK region for all Q < 2M^2 μ^2. Using (<ref>)–(<ref>), the angular momentum, near-NHEK energy and near-NHEK radius are given in the high-spin limit by ℓ = ℓ_∘, e/κ = -√(2M^2 μ^2 -Q), r/κ = √(2)λ/√(1-Q/2M^2 μ^2). The prograde IBCOs in the range 0 ≤ Q < 2M^2 μ^2 are described by instances of Spherical(ℓ) orbits. In terms of polar motion, Q=0 are equatorial and Q>0 are pendular of class Pendular_∘(Q); see Table <ref>. The polar range is determined as θ_min≤θ≤π - θ_min where θ_min = arccos√(Q/Q+ℓ_∘^2). The maximal polar angle reachable within the near-NHEK region by IBSOs is obtained for the limiting value Q = 2M^2 μ^2 at θ_min = arccos√(1/3) = arcsin√(2/3)≈ 55^∘. This critical angle was also previously obtained in Refs. <cit.>. Finally, note that spherical photon orbits in the high-spin limit were also discussed in Refs. <cit.>. 
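The closed-form expressions collected in this section are straightforward to evaluate numerically. The following sketch works in geometrized units with M = μ = 1 and uses illustrative parameter values only; it evaluates the prograde ISCO radius from the formula quoted above and the minimal polar angle of the high-spin ISSO from z_+(ℓ_*, Q), recovering the limiting angles of roughly 65° at Q = M²μ²/2 and 47° (the velocity-of-light surface) as Q grows large.

```python
import numpy as np

def r_isco(a):
    """Prograde ISCO radius in units of M, from the closed-form expression quoted above."""
    Z1 = 1.0 + (1.0 - a**2)**(1.0/3.0) * ((1.0 + a)**(1.0/3.0) + (1.0 - a)**(1.0/3.0))
    Z2 = np.sqrt(3.0 * a**2 + Z1**2)
    return 3.0 + Z2 - np.sqrt((3.0 - Z1) * (3.0 + Z1 + 2.0 * Z2))

for a in [0.0, 0.9, 0.99, 0.9999, 1.0 - 1e-9]:
    print(f"a/M = {a:.10f}   r_ISCO/M = {r_isco(a):.6f}")
# 6.0 for Schwarzschild, approaching r = M in the extremal limit

def theta_min_isso(Q, M=1.0, mu=1.0):
    """Minimal polar angle (degrees) of the high-spin ISSO: cos(theta_min) = sqrt(z_+) at ell = ell_*."""
    ls2 = 4.0 / 3.0 * (M**2 * mu**2 + Q)                                # ell_*^2
    zp = Q / (0.75 * ls2 + np.sqrt(9.0/16.0 * ls2**2 - 0.5 * ls2 * Q + Q**2))
    return np.degrees(np.arccos(np.sqrt(zp)))

print(theta_min_isso(0.0))     # 90 deg: equatorial ISCO
print(theta_min_isso(0.5))     # ~65.5 deg: Q = M^2 mu^2 / 2, edge of the NHEK range of the ISSO
print(theta_min_isso(1.0e8))   # ~47.1 deg: Q -> infinity, the velocity-of-light surface
```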
§ CONFORMAL MAPPINGS BETWEEN RADIAL CLASSES The near-horizon region of near-extremal Kerr black holes admits four Killing vectors forming the group (2,ℝ) ×(1), hereafter denoted as the conformal group G. The geodesic equations are invariant under G and the geodesics therefore transform under the action of G. Moreover, a group generated by four ℤ_2 symmetries exists that preserve the geodesic equations. The subgroup preserving the domain R > 0 for NHEK (or r > 0 for near-NHEK) is generated by the ↑↓-flip (<ref>), which flips the geodesic orientation, and two additional ℤ_2 transformations that preserve the geodesic orientation: namely, the parity flip θ→π - θ, Φ→Φ + π, s_θ^i→ -s_θ^i, and the ⇄-flip T → -T, Φ→ -Φ, λ→ -λ, s_R^i → -s_R^i, s_θ^i → -s_θ^i . The last discrete transformation that we use as a basis is the [origin=c]-45⇄-flip R → -R, Φ→ -Φ, ℓ→ -ℓ, s^i_R → -s^i_R. The parity transformation defined in (<ref>) leaves each motion invariant and will not be considered further. The ⇄-flip changes the boundary conditions of the geodesics, which may affect their denomination. It maps bounded orbits to bounded orbits, and deflecting orbits to deflecting orbits, but plunging orbits to outward orbits, as illustrated in Fig. <ref>. For bounded orbits, the part before the turning point is mapped to the part after the turning point, and vice-versa. The [origin=c]-45⇄-flip can be used as follows: one first continues a geodesic defined in R > 0 beyond the horizon R = 0 and the resulting geodesic with R < 0 is then mapped to a geodesic in the R > 0 region using the [origin=c]-45⇄-flip. Together with the action of (<ref>), it allows us to map plunging orbits with ℓ > 0 to bounded orbits with ℓ < 0. This process is illustrated in Fig. <ref>. The equivalence classes of equatorial critical and supercritical prograde timelike geodesics under the action of (2,ℝ) ×(1) ×↑↓ symmetry were derived in Ref. <cit.> following earlier work <cit.>. In this section, we will perform the decomposition of arbitrary geodesics into equivalence classes under the action of (2,ℝ) ×(1) ×↑↓×⇄×[origin=c]-45⇄. The Casimir 𝒞 of (2,ℝ) cannot vary upon acting with G ≜(2,ℝ) ×(1) transformations. Moreover, the action of the group G acts trivially on the polar coordinate θ. These two properties imply that both Q and ℓ are invariant under the action of G. In particular, critical, supercritical or subcritical geodesics form distinct classes under G. On the contrary, the (near-)NHEK energy E (or e) can vary under conformal transformations. Conformal transformations can map NHEK to near-NHEK orbits, and vice-versa. As a result of Propositions <ref> and <ref>, null geodesics can be treated on the same footing as timelike geodesics. A conformal transformation belonging to (2,ℝ) ×(1) maps (near)-NHEK spacetime parametrized by (T,R,θ,Φ) to (near-)NHEK spacetime parametrized by (T̅,R̅,θ,Φ̅)[We denote here without distinction NHEK and near-NHEK coordinates with capital letters.] where T =T(T,R), R =R(T,R), Φ =Φ+δΦ̅(T,R). The geodesic equations in (near)-NHEK imply T=T(R). Therefore, the action of conformal symmetries reduces to an action on the radial motion, leaving the polar motion unchanged. More precisely, in the decomposition of Φ(λ) (<ref>)–(<ref>) in terms of a radial part and a polar part, the polar part will remain untouched by conformal transformations. It was shown in Ref. 
<cit.> that each equivalence class of equatorial prograde critical (respectively, supercritical) geodesics with incoming boundary conditions under G ×↑↓ admits a distinguished simple representative, namely the NHEK (respectively, near-NHEK) circular orbits. After analysis, we obtain that each geodesic equivalence class under G ×↑↓×⇄×[origin=c]-45⇄ admits a spherical orbit as the simplest representative as illustrated in Fig. <ref>. Past directed geodesics must be considered as intermediate steps in order to relate each future directed geodesic to spherical geodesics. Supercritical orbits (ℓ^2 > ℓ^2_*) admit the near-NHEK Spherical(ℓ) orbit as a representative and critical orbits (ℓ = ±ℓ_*) admit the NHEK Spherical_* orbit as a representative. No subcritical spherical geodesic exists. However, we introduce an analytically continued complex subcritical geodesic by continuing the radius R_0 ↦ i R_0 and show that it generates the subcritical class. The explicit formulas for the three categories of equivalence classes of orbits under G ×↑↓×⇄×[origin=c]-45⇄ are given in the following sections. We will denote the final coordinates and orbital parameters reached by the conformal mappings with bars. §.§ Critical 𝒞=0 Spherical_* ⇔ Plunging_*(E) (NHEK/NHEK). The conformal mapping is given by T̅ = - R^2T/R^2T^2-1, R̅ =R^2T^2-1/R, Φ̅ =Φ+logRT+1/RT-1-iπ. It maps a (future-directed) NHEK spherical trajectory of radius R_0 to a (future-directed) critical plunge of energy E̅=2ℓ_*/R_0 > 0. Spherical_* ⇔ Plunging_* (NHEK/near-NHEK). One performs the NHEK/near-NHEK diffeomorphism (T,R,θ,Φ)→(t̅,R̅,θ,ϕ̅), whose explicit form is T =-exp(-κt̅)R̅/√(R̅^2-κ^2), R =1/κexp(κt̅)√(R̅^2-κ^2), Φ =ϕ-1/2logR̅-κ/R̅+κ. Its inverse is t̅ = 1/κlogR/√(R^2T^2-1), R̅ =-κ RT, ϕ̅ = Φ+1/2logRT+1/RT-1 for R>0 and RT<-1. The orbital parameters are related as R_0=1/κexp(κ t_0), Φ_0=ϕ_0-3/4. Plunging_* ⇔ Outward_* (near-NHEK/near-NHEK). The orbits are related by the ⇄-flip (<ref>). Plunging_* ⇔ Plunging_*(e) (near-NHEK/near-NHEK). The two (future-directed) orbits are related via the diffeomorphism t̅ = 1/2κlog√(R^2-κ^2)coshκ t-R/√(R^2-κ^2)coshκ t+R-iπ/κ, R̅ = √(R^2-κ^2)sinhκ t, ϕ̅ = ϕ + 1/2logRsinhκ t+κcoshκ t/Rsinhκ t-κcoshκ t. The energy of the new trajectory is a function of the initial time t_0 of the former one: e̅=κ^2ℓ_*exp(-κ t_0) > 0. Plunging_*(e) ⇔ Outward_*(e) (near-NHEK/near-NHEK). The orbits are related by the ⇄-flip. Plunging_*(E) ⇔ Bounded_*^-(E) (NHEK/NHEK). The critical bounded orbit is obtained from the plunging orbit by a continuation of the trajectory beyond the horizon (R<0) combined with ℤ_2 flips. One must proceed in three steps: * Continue the plunge defined from the physical domain 0≤ R≤∞ to its whole domain of definition R_0≤ R≤∞ (i.e., up to the root of the radial potential R_0=-e/2ℓ_*) and consider now only the part of the trajectory located beyond the horizon R_0 ≤ R≤ 0. * Apply the [origin=c]-45⇄-flip to the latter part of the solution. This transformation restores the positivity of the radial coordinate. It preserves the time orientation of the geodesic but flips the sign of its angular momentum ℓ_*→-ℓ_*. The new domain of definition of the trajectory is consequently 0≤ R ≤E/2ℓ_*. * The procedure outlined above only leads to the part of the geodesic with R'(λ)>0, which is located before the turning point. As outlined in Appendix <ref>, the part of a bounded trajectory located after the turning point can be obtained from the one located before it by a ⇄-flip. This whole procedure is represented in Fig. 
<ref>. Plunging_*(e) ⇔ Bounded_*^-(e) (near-NHEK/near-NHEK). The mapping is similar to the one outlined above using the [origin=c]-45⇄-flip. One subtlety is that one should start with the Plunging_*(e) orbit with e > κℓ_* in order to obtain the future-directed Bounded_*^-(e) orbit. Plunging_*(e) ⇔ Bounded_*(-e) (near-NHEK/near-NHEK). We apply the [origin=c]-45⇄-flip as outlined in the previous paragraph, but now choosing 0< e < κℓ_*. This leads to a retrograde past-directed bounded orbit. The future-directed prograde geodesic is then reached using the ↑↓-flip. §.§ Supercritical 𝒞<0 Spherical(ℓ) ⇔ Marginal(ℓ) (near-NHEK/NHEK). One applies the NHEK/near-NHEK diffeomorphism T =-exp(-κt̅)R̅/√(R̅^2-κ^2), R =1/κexp(κt̅)√(R̅^2-κ^2), Φ =ϕ-1/2logR̅-κ/R̅+κ which maps the orbit Spherical(ℓ) on the past-directed Marginal(-ℓ) orbit. The future-directed Marginal(ℓ) orbit is recovered by composing this transformation with a ↑↓-flip. Marginal(ℓ) ⇔ Plunging(E,ℓ) or Deflecting(E,ℓ) (NHEK/NHEK). One performs the transformation (ζ≠ 0) T̅ =1/R̅2R^2Tcosζ-(1+R^2(1-T^2))sinζ/2 R, R̅ =R^2(1+T^2)-1+(1+R^2(1-T^2))cosζ+2R^2Tsinζ/2R, Φ̅ = Φ+logcosζ/2R+sinζ/2(RT+1)/cosζ/2R+sinζ/2(RT-1). As outlined in Ref. <cit.>, this mapping can be viewed as the action on Poincaré NHEK coordinates of a shift of the global NHEK time τ→τ-ζ. The energy of the final orbit is E̅=√(-𝒞)(sinζ+T_0(cosζ-1)). We directly see that any energy E≠ 0 can be reached by conveniently choosing the values of T_0 and ζ. Plunging(E,ℓ) ⇔ Outward(E,ℓ) (NHEK/NHEK). The orbits are related by the ⇄-flip. Plunging(E,ℓ) ⇔ Bounded_>(E,-ℓ) (NHEK/NHEK). The mapping consists in extending the radial range of the plunging orbit beyond the horizon, R<0, then using the [origin=c]-45⇄-flip, which leads to the Bounded_>(E,-ℓ) orbit. Spherical(ℓ) ⇔ Plunging(e,ℓ) or Deflecting(e,ℓ) (near-NHEK/ near-NHEK). One uses the diffeomorphism (χ≠± 1) t = 1/κlog√(R̅^2-κ^2)coshκt̅-R̅/√( R^2-κ^2), R =√(R̅^2-κ^2)(sinhκt̅+χcoshκt̅)-χR̅, ϕ = ϕ̅-1/2log[√(R̅^2-κ^2)-R̅coshκt̅+κsinhκt̅/√(R̅^2-κ^2)-R̅coshκt̅-κsinhκt̅R+κ/R-κ]. This mapping can be seen as a NHEK global time shift written in near-NHEK coordinates; see Refs. <cit.>. The explicit inversion formula can be found in Ref. <cit.>. The energy of the new trajectory reads as e̅=κ√(-𝒞) χ. For -ℓ/√(-𝒞)<χ<-1, the orbit reached is future-directed and deflecting. The trajectory becomes plunging for χ>-1. Note that for χ>1, t̅_0=-1/2κlog1+χ/1-χ is complex and one has to perform an additional shift on t̅ to make it real. Plunging(e,ℓ) ⇔ Outward(e,ℓ) (near-NHEK/near-NHEK). The orbits are related by the ⇄-flip. Plunging(e,ℓ) ⇔ Bounded_>(e,-ℓ) (near-NHEK/near-NHEK). The mapping consists in extending the radial range of the plunging orbit with e > κℓ beyond the horizon, r<0, then using the [origin=c]-45⇄-flip, which leads to the Bounded_>(e,-ℓ) orbit. §.§ Subcritical 𝒞>0 There is no near-NHEK spherical geodesic for 𝒞>0. We can nevertheless introduce the formal class of complex spherical trajectories t(λ) = -iℓ/R_0λ, R(λ) = i R_0, R_0≜κℓ/√(𝒞), ϕ(λ) = ϕ_0-3/4ℓλ+ℓΦ_θ(λ) which is a formal (but nonphysical) solution of the near-NHEK geodesic equations, of complex near-NHEK “energy” e=-iκ√(𝒞). We will denote this class of solutions as Spherical_ℂ(ℓ) and show that it can be used to generate all subcritical bounded trajectories by acting on it with properly chosen conformal transformations. The parametrized form of the orbit reads as R = i R_0, ϕ(t) = ϕ_0-3/4iR_0t+ℓΦ_θ(λ(t)). Spherical_ℂ(ℓ) ⇔ Bounded_<(E,ℓ). 
One has to proceed in two steps, mimicking the procedure used to obtain the NHEK Plunging(E,ℓ) class: * First, we apply the near-NHEK/NHEK diffeomorphism (<ref>) to a Spherical_ℂ(ℓ) orbit, leading to another complex NHEK geodesic of null energy parametrized by T(R) = -iℓ/√(𝒞)R, Φ(R) = Φ_0-3iℓ/8√(𝒞)log𝒞R^2/𝒞+ℓ^2 with the initial azimuthal angle Φ_0≜ϕ_0-3πℓ/8√(𝒞)-1/2log(1-2√(𝒞)/√(𝒞)+iℓ). We denote this class as Marginal_ℂ(ℓ). * Second, we apply to the trajectory found above the global time shift (<ref>), but upgraded with an imaginary parameter ζ→ iζ. This leads to the Bounded_<(E,ℓ) class with orbital parameters E̅ = √(𝒞)sinhζ, Φ̅_0 = ϕ_0-3πℓ/8 √(𝒞)-log(√(𝒞)-iℓ)+3iℓ/8√(𝒞)log[𝒞(𝒞+ℓ^2)(1+√(𝒞+E^2/𝒞))^2]  -3ℓ/8√(𝒞)log[E^2(𝒞+ℓ^2)]+arctan√(𝒞)/ℓ. Note that choosing ζ>0 is sufficient to reach the full range of energies allowed for such a geodesic (E>0). Any geodesic of orbital parameters (T_0,Φ̃_0) can finally be obtained by performing the transformation T→ T+T_0, Φ→Φ-Φ̅_0+Φ̃_0, which also removes the unphysical imaginary part of the azimuthal coordinate.

Spherical_ℂ(ℓ) ⇔ Bounded_<(e,ℓ). We apply to the Spherical_ℂ(ℓ) class the near-NHEK global time shift (<ref>) upgraded with an imaginary parameter χ→ iχ (χ≠±1), leading to a Bounded_<(e,ℓ) orbit of parameters e̅ = κ√(𝒞) χ, t̅_0 = t_0+i/κarctanκ√(𝒞)/e, ϕ̅_0 = ϕ̅_0(ϕ_0,e,ℓ,𝒞,κ). The explicit value of ϕ̅_0 is easily calculable, but too long to be reproduced here. To reach a manifestly real orbit of orbital parameters (t̃_0,ϕ̃_0), one has to perform the final shift t→ t-t̅_0+t̃_0, ϕ→ϕ-ϕ̅_0+ϕ̃_0.

PART: Test bodies in Curved Spacetime: Theoretical Foundations

Let us consider the motion of an object described by some smooth stress-energy tensor T_μν in a fixed background metric g_μν, thus neglecting self-force effects. Provided that the body has a finite spatial extension, its stress-energy tensor is supported on compact slices for any 3+1 decomposition of the spacetime. Such an object will be referred to as an extended test body. We have in mind the motion of a “small” astrophysical object (stellar-mass black hole or neutron star) around a supermassive black hole. In this situation, the former can be viewed as a perturbation of the spacetime geometry created by the latter. While the geodesic equations describe the motion of a structureless monopole test body in a fixed background spacetime, an important generalization is to allow the test body (while still having negligible mass and thus negligible influence on the gravitational field) to have a finite size and a nontrivial structure. All these effects – departing from a bare geodesic motion – are known as finite size effects. §.§.§ Worldline description of extended test bodies In the case where the extended test body is compact, that is, if its typical size ℓ is small compared to the radius of curvature r of the background (ℓ≪ r), there exist several equivalent approximation schemes for describing its motion in a somewhat simpler way than through its full stress-energy tensor. These approaches all end up characterizing the body by a centroid worldline γ=z^μ(λ), along which a tower of gravitational multipole moments I^μνα_1…α_n replacing the original stress-energy tensor is defined, see Figure <ref>. These moments can be understood as spatial integrals of T^μν, I^μνα_1…α_n≜∫_x^0=cst d^3x√(-g)T^μνδ x^α_1…δ x^α_n, where δ x^μ≜ x^μ-z^μ(λ). 
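To build some intuition for these definitions, the lowest moments can be evaluated explicitly for a simple smooth distribution. The sketch below is a made-up example (a rigidly and slowly rotating Gaussian dust blob in flat spacetime, so that √(-g) = 1, T^{00} ≃ ρ and T^{i0} ≃ ρ v^i, with c = 1 and all numbers illustrative): it computes the lowest moments of T^{μ0} about the center of the blob and shows that the antisymmetric part of the first moment is just its angular momentum.

```python
import numpy as np

# flat spacetime (sqrt(-g) = 1), c = 1, Cartesian grid; all numbers are illustrative
n, L = 64, 2.0
xs = np.linspace(-L, L, n)
dV = (xs[1] - xs[0])**3
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")

# rigidly, slowly rotating Gaussian dust blob: T^{00} ~ rho, T^{i0} ~ rho v^i, v = omega cross r
s, omega = 0.5, 0.05
rho = np.exp(-(X**2 + Y**2 + Z**2) / s)
T00 = rho
Tx0, Ty0 = -omega * Y * rho, omega * X * rho              # T^{z0} = 0 for rotation about z

mass   = np.sum(T00) * dV                                 # monopole  I^{00}
mom    = np.array([np.sum(Tx0), np.sum(Ty0), 0.0]) * dV   # I^{i0}: vanishes for pure rotation
dipole = np.array([np.sum(c * T00) for c in (X, Y, Z)]) * dV   # mass dipole I^{00 i} about z^mu
spin_z = np.sum(X * Ty0 - Y * Tx0) * dV                   # antisymmetric part of the first moment

print(mass, (np.pi * s)**1.5)                             # quadrature vs analytic mass-energy
print(mom, dipole)                                        # ~0: centroid chosen at the center of mass
print(spin_z, omega * s * (np.pi * s)**1.5)               # equals I_zz * omega, the blob's angular momentum
```

Choosing the reference worldline at the center of mass makes the mass dipole vanish; this is precisely the freedom that the spin supplementary conditions discussed below exploit.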
The first of these schemes is known as the gravitational skeletonization: the body is described by a distributional stress-energy tensor, which is non-vanishing only on a certain worldline, and contains the aforementioned tower of multipole moments. This tensor must be conserved in the background, ∇_μ T^μν=0, and one can show that this implies that the monopole p_μ and the dipole S_μν must evolve according to the Mathisson-Papapetrou-Dixon (MPD) equations <cit.> Dp^μ/τ =-1/2Rμναβv^ν S^αβ+…, DS^μν/τ=2 p^[μv^ν]+…, where the dots represent corrections due to the quadrupole and higher multipole moments. The monopole p^μ takes the interpretation of the linear momentum of the body, whereas the dipole S^μν can be seen as its skew-symmetric spin tensor, describing the relativistic angular momentum of the body. This last object will play a central role in our description, which is the reason why we will sometimes refer to extended test bodies as spinning test bodies. In terms of the original smooth stress-energy distribution, one can show that the first two moments are related to the original stress-energy tensor by p^μ ≜∫_x^0=constant[3]x√(-g) T^μ 0, S^μν ≜∫_x^0=constant[3]x√(-g) (δ x^μ T^ν0-δ x^νT^μ 0). This approach has been investigated since the late thirties. The leading-order EOMs were first derived in the seminal works of M. Mathisson <cit.> and A. Papapetrou <cit.>. They have been subsequently generalized to higher multipolar orders by W.G. Dixon <cit.>. Despite its elegance and rigour, this approach is rather lengthy and technically involved, which makes it ill-suited for a comprehensive exposition of the problem in an introductory text like the present one. We will instead follow the Lagrangian approach, whose generic formulation in curved spacetime is due to I. Bailey and W. Israel in 1975 <cit.>. Nevertheless, this method was pioneered for Special Relativity in earlier works, notably by A.J. Hanson and T. Regge (see e.g. <cit.>). It consists in formulating a generic action principle for the extended body modelled as a worldline, representing the motion of some “center” of its stress-energy distribution, and endowed with an orthonormal tetrad rigidly attached to it, whose evolution describes the orientation of the body. This approach also leads to the very same MPD equations. Having two equivalent descriptions of the same problem is extremely fruitful, since each of them turns out to be better suited for different purposes: as we will see, the skeletonization is powerful for providing us with physical insights about the interpretation of the multipole moments, while the Lagrangian approach will prove particularly useful when we turn to the Hamiltonian description of extended test bodies. Other routes yielding the same equations of motion have also been followed. A supersymmetric description of classical spinning particles was provided in 1993 by G.W. Gibbons, R.H. Rietdijk and J.W. van Holten <cit.>, and recently extended to include quadrupole effects <cit.>. Another recent (and rather elegant) formulation accounting for the description of finite size effects is due to A. Harte <cit.>, using the concept of “generalized Killing vector fields”. Its main interest is that it naturally allows for the inclusion of gravitational back-reaction effects. §.§.§ Spin supplementary conditions A technical subtlety arises when studying the motion of test bodies described by the MPD equations.
In order to obtain a closed system of equations, they have to be supplemented by an algebraic condition of the form 𝒱_μ S^μν=0, for some timelike vector field 𝒱^μ. Such conditions are known as spin supplementary conditions. Physically, enforcing this kind of condition amounts to fixing a choice of centroid worldline, setting to zero the mass-dipole moment in the proper frame whose timelike vector is aligned with 𝒱^μ. The discussion of what is really happening is quite subtle, the main reason being that the notion of center of mass is observer-dependent in relativity. §.§.§ Truncation of the multipole expansion Another point of concern is to understand whether the multipole expansion introduced above can be consistently truncated at some desired order, that is, whether there exists a small parameter such that the magnitude of the successive multipoles decreases when the order of the multipoles increases. As we will see in Chapter <ref>, for compact objects, this small parameter will be the ratio between the typical size of the object and the typical radius of the background curvature, which is small by assumption. As discussed in the introduction of the thesis, we will always truncate the expansion at the quadrupole order. This is the first order in the multipole expansion where the internal structure of the body begins to matter. At pole-dipole order, the motion of finite size bodies is universal, in the sense that it is independent of the nature of the object. Because we exclude the self-force in our description, our expansion will thus be valid at zeroth order in the mass ratio and at second order in the spin. The MPD equations promote the first two multipoles p^μ and S^μν to the rank of dynamical variables, but leave the higher order multipoles acting as sources. The latter must consequently be prescribed depending on the internal structure of the test body. In this thesis, we will only be concerned with multipole moments induced by the proper rotation of the object, also known as spin-induced multipoles. We therefore discard tidal and other types of contributions to the multipole structure. As we will see, this is the relevant description for modelling a binary black hole system evolving in vacuum, and this is the choice of multipole structure that will allow the existence of the largest number of conserved quantities along the motion. Actually, for compact test bodies, there are several equivalent ways of thinking about the truncation of the multipole expansion, which are consistent with one another, as we will check explicitly: (i) as a truncation of the number of multipoles that we use to describe the stress-energy tensor, which is the viewpoint of gravitational skeletonization that will be discussed in Chapter <ref>; (ii) as a truncation of the number of derivatives of the background Riemann tensor upon which the action of the Lagrangian formulation can depend, as will be described in Chapter <ref>; and finally (iii) as an expansion in integer powers of the magnitude of the spin dipole 𝒮. This latter viewpoint turns out to be consistent with the former two for spin-induced multipoles, as will be reviewed in Chapter <ref>. §.§.§ Plan of the text This part of the thesis is organized as follows: Chapter <ref> will describe the Lagrangian formulation for extended test bodies in full generality, up to quadrupole order.
In Chapter <ref>, we will discuss a particularly simple form of the gravitational skeletonization up to dipole order, which will enable to gain more intuition about the physical meaning of the monopole and the dipole moments. Our discussion will be however specialized to a specific choice of coordinates. Chapter <ref> will describe the problem of enforcing the aforementioned spin supplementary conditions, as well as their physical interpretation. Finally, Chapter <ref> will be devoted to spin-induced multipoles, and will focus on the explicit construction of the spin-induced quadrupole moment. CHAPTER: LAGRANGIAN FORMULATION This chapter discusses the Lagrangian formalism for spinning test bodies in General Relativity. In this text, we will always restrict our derivations up to quadrupole order. Nevertheless, higher orders can be reached, see e.g. <cit.>. This chapter mainly follows the excellent exposition of Marsat <cit.>. The core idea of the Lagrangian approach is to construct the most generic worldline Lagrangian action S=∫ L λ describing the motion of a spinning test body in curved spacetime. As we will see, the form of the allowed Lagrangian L can be highly constrained from very generic symmetry arguments. Like in any classical mechanics problem, the equations of motion can then be derived from the associated first order variational principle δ S=0 <cit.>. § ROTATIONAL DEGREES OF FREEDOM The two main questions one should ask for building an action are * What are the relevant degrees of freedom that should be introduced for describing a spinning body in curved spacetime ? * What are the symmetries under which the action should be invariant? This section aims to tackle the first of them. Let us denote z^μ(λ) the body's worldline. Here, λ is an arbitrary “time” parameter describing the evolution of the motion. We also define the four-velocity v^μ≜z^μλ. For any physical massive object, the four-velocity will be a timelike vector, v_μ v^μ<0. In the canonical language of Lagrangian mechanics, the four components v^μ will play the role of the velocities describing the position of the test body and associated to the coordinates z^μ. All along this discussion, the specific form of λ as well as the normalization of the four-velocity will be left arbitrary at the level of the action; they will only be constrained later at the level of the equations of motion, setting λ to be the proper time and consequently yielding the standard normalization v_μ v^μ=-1. We are left with the problem of choosing the degrees of freedom that will account for the rotational orientation of the test body. Following the proposition of Hanson and Regge for Special Relativity <cit.>, the spin (that is, the rotational) degrees of freedom of the body will be represented by an orthonormal tetrad eAμ(λ) rigidly attached to the body's worldline. Its orientation at any value of λ will be measured thanks to the introduction of another background orthonormal tetrad frame eAμ(x). At any point of the worldline, these two tetrads are related by a Lorentz transformation ΛAA(λ): eAμ(z(λ))=ΛAA(λ) eAμ(λ). Of course, we have the standard relations for Lorentz matrices ΛAA(λ)Λ_BA(λ)=η_AB, Λ_AA(λ)ΛAB(λ)=η_A B, and for tetrad frames eAμ(λ)e_Bμ(λ) =η_AB, eAμ(λ)e^Aν(λ)=g^μν(z(λ)), eAμ(x)e_Bμ(x) =η_A B, eAμ(x)e^Aν(x)=g^μν(x). The evolution of the body's tetrad will be described using the standard antisymmetric rotation coefficients Ω^μν (see e.g. <cit.>) DeAμ/λ≜-Ω^μν e_Aν ⇔ Ω^μν≜ e^AμD eAν/λ. 
Here and in the remaining of this text, we use the notation D/λ≜ v^α∇_α. The Lorentz matrices ΛAA(λ) encode all the informations regarding the orientation of the body's tetrad with respect to the background. As any homogeneous Lorentz transformation, they contain 6 degrees of freedom: three of them represent spatial rotations, and the three others relativistic boosts. Intuitively, one can see the three rotational degrees of freedom (DOFs) as being the spin ones, whereas the three boosts originate from the fact that one has not chosen yet the exact position of the worldline z^μ(λ) inside the body's worldtube. This ambiguity will be extensively discussed and resolved in Chapter <ref>, by enforcing a so-called spin supplementary condition (SSC). § CONSTRAINING THE ACTION It is now time to write down an action for our theory. It seems natural to require the following symmetry requirements to hold <cit.>: * Spacetime diffeomorphisms: as any GR scalar expression, the action should be invariant under any generic spacetime diffeomorphism x^μ→ x^μ'(x^μ). The Lagrangian should consequently be a tensorial scalar, in the sense that all the spacetime indices of the objects it is built from should be properly contracted between themselves; * Lorentz transformations: the action should be invariant under local Lorentz transformations, which transform the body and the background tetrad as eAμ→ΛABeBμ, eAμ→Λ̅ABeBμ. It amounts to require all the Lorentz indices of the tetrads to be properly contracted; * Reparametrization invariance: the time parameter λ being arbitrary, the action (<ref>) should be invariant under any reparametrization λ→λ'(λ) of the trajectory. In order to actually describe a spinning body, the Lagrangian should kinematically depend on the worldline velocity v^μ and on the rotation coefficients Ω^μν, but not on the “positions” (z^μ and eAμ) themselves, for the purpose of ensuring general covariance. Moreover, we forbid any dependence in the background structure eAμ, so that our description depends only on degrees of freedom intrinsic to the body. The prescribed Lagrangian should account for finite size effects, that is, dynamical effects originating from the coupling between the body's spin and the background's curvature. The later is accounted for by the Riemann tensor and its derivatives. Notice that the background metric g_μν is assumed to appear only for the purpose of contracting indices, thus allowing to construct scalars from the other tensorial objects in a natural way. We however forbid any dependence upon first derivatives of the metric (that is, upon Christoffel symbols). Derivatives of the metric are only allowed to enter in the action through the Riemann tensor and its derivatives. Excluding any coupling with other external fields and given the discussion above, the generic action for an extended test body is then assumed takes the form: S[z^μ,eAμ]=∫_γ L(v^μ,Ω^μν,g_μν(z),R_μνρσ(z),∇_λ R_μνρσ(z),…)λ. The subscript γ just refers to the fact that the integration over λ is actually an integration over the worldline γ. §.§.§ Homogeneity condition The next step will be to constrain the generic form of the action Eq. (<ref>). Actually, a very simple argument allows to provide a simple explicit – but still non-uniquely fixed – expression for the Lagrangian. As we have just mentioned, the action Eq. (<ref>) should be invariant under any reparametrization of the trajectory; in particular, it should be invariant under a scaling λ→Δλ (Δ≠ 0). 
This implies that the Lagrangian must be homogeneously linear in v^μ and Ω^μν, which both scale as Δ^-1 under this transformation. Euler's theorem on homogeneous functions[ Let f:ℝ^n→ℝ be a positively homogeneous function of degree k∈ℤ, i.e. ∀Δ>0:f(Δ x_1,…Δ x_n)=Δ^k f(x_1,…,x_n) which is continuously differentiable in some open subset 𝒰⊂ℝ^n. Then, k f(x_1,…,x_n)=∑_i=1^nx_ifx_i(x_1,…,x_n), ∀(x_1,…,x_n)∈𝒰. ] then implies that L(v^μ,Ω^μν,g_μν,R_μνρσ,∇_λ R_μνρσ,…)=Lv^μv^μ+LΩ^μνΩ^μν. Defining the conjugate momenta[The factor 2 in the definition of S_μν is present for consistency with the conventions used in the literature.] (respectively referred to as the linear momentum and the spin tensor) p_μ≜Lv^μ, S_μν≜ 2LΩ^μν, one can write L=p_μ v^μ+1/2S_μνΩ^μν. Be careful: we have not provided a unique expression for the Lagrangian of our theory. The momenta p_μ and S_μν remain here arbitrary functions of v^μ, Ω^μν, the Riemann tensor and its derivatives. They will be fixed when a spin supplementary condition will be enforced, which will provide us with an explicit relation between the linear momentum p_μ and the four-velocity v^μ. Nevertheless, the form of the Lagrangian (<ref>) is relevant to mention for later purposes (e.g. Hamiltonian description of the present problem); in particular, it is the same regardless to the finite size interactions allowed in the theory, i.e. regardless to the dependence of L in the Riemann tensor and its derivatives that is allowed. §.§.§ Quadrupole approximation We will now constrain our theory by restricting the functional dependency of the Lagrangian in the (derivatives of the) Riemann tensor. In the continuation of this text, we will always restrict ourselves to the so-called quadrupole approximation which consists into allowing L to depends on the Riemann tensor, but not on its derivatives: L=L(v^μ,Ω^μν,g_μν,R_μνρσ). More generically, allowing the Lagrangian to depend in the Riemann tensor up to its n^th derivative will lead to the appearance of 2^n+2-pole moments terms in the equations of motion. For an action of the form (<ref>), the multipole moments are only sourced by the spin of the body. For spin-induced moments, one can show (see Chapter <ref>) that the 2^n-pole moment scales as 𝒪(𝒮^n), where the spin magnitude 𝒮 is defined as 𝒮^2≜1/2S_μνS^μν. The aforementioned approximation makes sense, because (i) the spin magnitude will turn out to be a constant of motion, regardless to the multipole order we are working with and because (ii) 𝒮 can be assumed to be small in astrophysically realistic situations. It can consequently be used as an expansion parameter for setting up a perturbative treatment of the generic problem. From a physical viewpoint, the quadrupole approximation amounts to consider deformations induced by the proper rotation of the object in our description up to quadrupole order, while neglecting higher order corrections. Finally, let us notice that only the (mass) monopole moment p_μ and the (spin) dipole moment S_μν are dynamical variables, because they are the only multipole moments present in the explicit form of the Lagrangian (<ref>). The higher moments are non-dynamical and entirely written in terms of these two first moments. They will act as sources in the equations of motion. § EQUATIONS OF MOTION Before deriving the equations of motion, it is useful to explicit some results concerning the first-order variations of the Lagrangian. 
First, in the quadrupole approximation, a generic variation of L takes the form δ L=p_μδ v^μ+1/2S_μνδΩ^μν+Lg_μνδ g_μν-1/6J^μνρσδ R_μνρσ. Here, we have defined – up to a proportionality factor – the quadrupole moment J^μνρσ as the conjugate moment to the Riemann tensor: J^μνρσ≜ -6LR_μνρσ. Notice that the variation (<ref>) is independent of the explicit form (<ref>) of the Lagrangian. In particular, the action should be invariant under an infinitesimal change of coordinates z^μ→ z^μ+ϵξ^μ (ϵ≪ 1). Particularizing the variation (<ref>) to this case and working in a locally inertial frame yields δ_ξ L =ϵ(p^ν v^μ+SνλΩ^μλ-2Lg_μν+2/3J^ναβγRμαβγ)∂_μξ_ν. This variation must vanish regardless to the value of ξ^μ; therefore, the following constraint must hold: p_ν v_μ+S_νλΩμλ-2Lg_μν+2/3JναβγR_μαβγ=0. Taking the antisymmetric part of this expression allows to write S^λ[μΩν]λ=p^[μv^ν]+2/3R[μαβγJ^ν]αβγ, which is valid in any frame. This last relation will become extremely useful in the following derivations. §.§.§ Evolution equation for the spin tensor The evolution equation for the spin tensor, also known as the precession equation, is obtained by varying the action with respect to the body's tetrad eAμ. In the background frame, the variation of the rotation coefficients takes the form δΩ^A B=eAμeBνD δθ^μν/λ+ΩACδθ^C B-ΩBCδθ^C A. For convenience, the variation of the tetrad has been entirely expressed in terms of the object δθ^A B≜Λ^AAδΛAB. Plugging this result either in the explicit expression of the Lagrangian (<ref>) or in the generic variation (<ref>), one obtains δ_θ L=1/2(-D S^μν/λ+S^νρΩμρ-S^μρΩνρ)δθ_μν. Requiring this variation to be vanishing yields the evolution equation D S^μν/λ=S^νρΩμρ-S^μρΩνρ. Before going further on, let us stress some points useful for the continuation of this work. * A direct computations shows that the identity SρμΩ^μνS_νρ=0 holds. It implies that the spin magnitude is conserved, 𝒮λ=0. This conservation equation actually holds at any multipole order and is independent of the spin supplementary condition (see e.g. <cit.> and references therein). * In the object's frame, the evolution equation becomes D S^AB/λ=0. The components of the spin tensor in the object's frame S^AB are thus constant. This provides a posteriori a justification to the statement that the tetrad eAμ is “rigidly attached” to the compact object. * Using Eq. (<ref>), one can eliminate the dependence of the precession equation in the rotation coefficients and make explicit its dependence in the Riemann tensor. A straightforward computation yields D S^μν/λ=2 p^[μv^ν]+ℒ^μν, ℒ^μν≜4/3R[μαβγJ^ν]αβγ. MPD equation for the spin tensor This is the standard form of the precession equation that can be found in the literature. * Finally, contracting Eq. (<ref>) with v_ν, we remark that the momentum and the four velocity are not aligned anymore when spin is present, by contrast to the geodesic case: -v^2p^μ=𝔪 v^μ+p_⊥^μ, p_⊥^μ≜(-D S^μν/λ+ℒ^μν)v_ν. Here, 𝔪≜-v_α p^α denotes the body's mass in the frame attached to the worldline, also known as the kinetic mass. The norm of the four-velocity can be set to -1 if the time evolution parameter is chosen to be the body proper time. The orthogonal component of the momentum p^μ_⊥ can be expressed as a function of x^μ, v^μ and S^μν solely when a spin supplementary condition has been enforced, see Chapter <ref>. §.§.§ Evolution equation for the linear momentum The method for finding the evolution equation for the linear momentum is to vary the action with respect to the worldline. 
The procedure is the very same as the one used to derive the geodesic deviation equation <cit.>: let us consider an infinitesimal change of the worldline, parametrized by a displacement vector ξ^μ(λ) which is Lie-dragged along the worldline: ℒ_vξ^μ=0 ⇔ ξ^λ∇_λ v^μ=v^λ∇_λξ^μ. In this case, the variation of the action takes the form δ_ξ S=∫_γδ_ξ L λ=∫_γξ^λ∂_λ L λ=∫_γξ^λ∇_λ L λ. It is useful to notice that the following identities hold <cit.>: ξ^λ∇_λ v^μ =D ξ^μ/λ, ξ^λ∇_λΩ^μν =-D/λ(e^Aμδ_ξ eAν)+e^AμD δ_ξ eAν/λ-e^AνD δ_ξ eAμ/λ-ξ^α v^β Rμναβ. Here, we have defined δ_ξ eAμ≜ξ^λ∇_λ eAμ. The value of this quantity is actually arbitrary, since we are left with the freedom of choosing the way the tetrad is transported from the original worldline z^μ(λ) to the perturbed one z^μ(λ)+ξ^μ(λ). Choosing the tetrad to be parallelly transported between the two worldlines allows us to set δ_ξ eAμ=0. Gathering all the previous pieces, the evolution equation for the linear momentum can then be derived – after integration by parts – from the variational problem δ_ξ S=0, yielding D p^μ/λ=-1/2Rμναβv^νS^αβ+ℱ^μ, ℱ^μ≜-1/6J^αβγδ∇^μR_αβγδ. MPD equation for the linear momentum This is the standard evolution equation of the linear momentum at quadrupole order. Together with Eq. (<ref>), these equations are the Mathisson-Papapetrou-Dixon equations, restricted to quadrupole order. At higher orders, the structure of the equations remains the same, the contribution of the higher order multipoles being only contained in the force ℱ^μ and torque ℒ^μν terms <cit.>. § STRESS-ENERGY TENSOR An interesting computation to be performed at this stage of the discussion is to write out the stress-energy tensor of the theory. It is advantageously computed from the variation of the action with respect to the body's tetrad frame: first expressing the metric in terms of the tetrad in the action thanks to Eq. (<ref>), the stress-energy tensor can be computed from T_μν≜1/√(-g)e_a(μδ S/δ eaν). The stress-energy tensor can be split as a sum over all the multipole orders involved: T_μν=T^pole_μν+T^dipole_μν+T^quad_μν. As usual when computing stress-energy tensors for point-like objects moving along worldlines, the action should be written as an integral over the spacetime by introducing a Dirac delta: S=∫[4]x√(-g)∫_γ L(v^μ,Ω^μν,e_AμeAν,R_μνρσ)δ_4(x,z)λ. Here, the symbol δ_4(x,z) stands for the diffeomorphism invariant Dirac distribution δ_4(x,z)≜δ^(4)(x-z)/√(-g), where δ^(4)(x-z) is the standard four-dimensional Dirac distribution <cit.>. After computation, the contributions of the right hand side are found to be given by T^pole_μν =∫_γ p_(μv_ν)δ_4(x,z)λ, T^dipole_μν =-∇_λ∫_γ Sλ(μv_ν)δ_4(x,z)λ, T^quad_μν =1/3∫_γ R(μαβγJ_ν)αβγδ_4(x,z)λ -2/3∇_λ∇_ρ∫_γ Jλ(μν)ρδ_4(x,z)λ. As expected, the stress-energy tensor is not a function, but a distribution which is only non-vanishing on the body's worldline. CHAPTER: SKELETONIZATION OF THE STRESS-ENERGY TENSOR Until now, we have obtained the Mathisson-Papapetrou-Dixon equations governing the motion of spinning test bodies in curved spacetime at quadrupole order, which are given by Eqs. (<ref>) and (<ref>). This was performed by writing down an action principle for our theory and deriving the corresponding equations of motion from the associated variational principle. In this chapter, we will see that a totally different method allows one to recover the very same equations of motion.
It consists in replacing the smooth stress-energy tensor of the physical extended body by a distributional one, which is only supported on a single worldline encompassed in the body's worldtube. The equations of motion then follow from the stress-energy conservation equation. Rigorously justifying this approximation from first principles in GR is however very involved and technical. We refer the interested reader to the references mentioned in the introduction of Part <ref> for more details (especially Dixon's). In the present text, we will give some insights about the coherence of this approximation by comparing the (involved) GR situation to the (simpler) Newtonian one. At quadrupole order, the computations associated with gravitational skeletonization turn out to be very cumbersome. Since this chapter aims to provide a pedagogical introduction, we will restrict ourselves to dipole order. Explicit computations for the quadrupole may be found in <cit.>. This chapter is organized as follows: Section <ref> reviews the gravitational skeletonization in Newtonian theory, which is then generalized to General Relativity in Section <ref>. As we will see, there are several decompositions that can be chosen for performing the skeletonization. In Section <ref>, we use the Ellis decomposition to recover the MPD equations at dipole order. The computations are carried out in a specific coordinate system, the adapted coordinates, which dramatically reduces the length and the technicality of the derivation. § INVITATION: GRAVITATIONAL SKELETON IN NEWTONIAN GRAVITY This section is mainly based on <cit.>. In order to acquire some feeling about the form of the ansatz for the gravitational skeleton of the stress-energy tensor in GR, let us have a look at the equivalent problem in Newtonian gravity. The gravitational potential U(t,𝐱) created by an object of mass density ρ(t,𝐱) enclosed in a volume 𝒱⊂ℝ^3 is a solution of the Poisson equation Δ U(t,𝐱)=-4πρ(t,𝐱). This equation can be solved analytically, and its solution reads (up to a trivial additive constant) U(t,𝐱)=∫_𝒱[3]x' ρ(t,𝐱')/|𝐱-𝐱'|. This potential admits a convenient rewriting as a multipole decomposition about an arbitrary point 𝐱_0∈𝒱. For any 𝐱∉𝒱, one can write U(t,𝐱)=∑_l=0^+∞(-)^l/l!I^i_1… i_l(t,𝐱_0)∂_i_1…∂_i_l|𝐱-𝐱_0|^-1 where we have defined the multipole moments I^i_1… i_l(t,𝐱_0)≜∫_𝒱[3]x (x-x_0)^i_1…(x-x_0)^i_lρ(t,𝐱). The proof of Eq. (<ref>) is easily carried out by performing a Taylor expansion of |𝐱-𝐱'|^-1 with respect to 𝐱' about the point 𝐱_0∈𝒱. The gravitational skeletonization consists here in replacing the smooth mass density distribution ρ(t,𝐱) (which is supported on a finite-size region of space, supp ρ⊆𝒱) by a singular mass density distribution – say ρ_skel – which is supported on a single point of space, supp ρ_skel={𝐱_0}⊂𝒱. The key result allowing such a skeletonization to be performed can be stated as follows: Let 𝐱_0 be a point of 𝒱. For any point 𝐱∉𝒱 outside of the object, the distributional mass density ρ_skel(t,𝐱)≜∑_l=0^+∞(-)^l/l!I^i_1… i_l(t,𝐱_0)∂_i_1…∂_i_lδ^(3)(𝐱-𝐱_0) generates the same potential U(t,𝐱) as the smooth mass density ρ(t,𝐱). It is enough to recall that the identity Δ|𝐱-𝐱_0|^-1=-4πδ^(3)(𝐱-𝐱_0) holds in the sense of distributions <cit.>. Of course, the explicit expression of ρ_skel and all the related equations have to be understood in the sense of distributions (see e.g. <cit.> for a clear reminder of the meaning of this assertion).
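As a concrete illustration of this statement, the short numerical sketch below (purely illustrative, not part of the derivation; it assumes a toy "body" made of two point masses, so that the moment integrals reduce to sums, and uses arbitrary numerical values with G=1) compares the exact Newtonian potential with the skeleton expansion truncated at quadrupole order; the two agree up to a relative error of order (ℓ/r)^3, as expected.

import numpy as np

# Toy Newtonian skeleton check: two point masses inside a small region V,
# potential U(x) = sum_a m_a / |x - x_a| evaluated at a field point far outside V.
masses = np.array([1.0, 2.0])
positions = np.array([[0.10, 0.00, 0.00],
                      [-0.05, 0.08, 0.00]])
x0 = np.zeros(3)                     # expansion point x_0 inside V
x = np.array([10.0, 4.0, -3.0])      # observation point outside V

# Exact potential
U_exact = sum(m / np.linalg.norm(x - p) for m, p in zip(masses, positions))

# Multipole moments I^{i_1...i_l}(x_0) = sum_a m_a (x_a - x_0)^{i_1} ... (x_a - x_0)^{i_l}
dxa = positions - x0
I0 = masses.sum()                               # monopole (total mass)
I1 = np.einsum('a,ai->i', masses, dxa)          # dipole
I2 = np.einsum('a,ai,aj->ij', masses, dxa, dxa) # quadrupole

# Derivatives of 1/|x - x_0|
r = x - x0
d = np.linalg.norm(r)
d1 = -r / d**3                                          # partial_i (1/d)
d2 = (3.0 * np.outer(r, r) - d**2 * np.eye(3)) / d**5   # partial_i partial_j (1/d)

# Skeleton potential  U = sum_l (-1)^l / l!  I^{i_1...i_l} partial_{i_1}...partial_{i_l} (1/d)
U_skel = I0 / d - I1 @ d1 + 0.5 * np.einsum('ij,ij->', I2, d2)

print(U_exact, U_skel, abs(U_exact - U_skel) / U_exact)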
In other words, when observed from outside, any localized gravitating object can be replaced by a particle located at a single point of spacetime and possessing an infinite tower of multipole moments. This replacement holds in the sense that the gravitational potentials generated by these two systems are identical as long as we remain outside of the object. § GENERAL RELATIVISTIC SKELETONS We now consider an extended body within the framework of General Relativity. This object is assumed to be described by a smooth stress-energy tensor supported on some worldtube 𝒯. In the same spirit as above, we replace its smooth stress-energy tensor T^μν(x) by a distributional stress-energy tensor T^μν_skel(x) supported on a single timelike worldline γ⊂𝒯. By analogy with Eq. (<ref>), we assume this stress-energy tensor to take the form T^μν_skel(x)=∑_l=0^+∞1/l!∫_γλ I^μνα_1…α_l(z)𝒟^(l)_α_1…α_lδ_4(x,z). Gravitational skeleton for the stress-energy tensor Here, 𝒟^(l)_α_1…α_l is some differential operator which contains at most l derivatives. z^μ(λ) are coordinates parametrizing the worldline γ with respect to an affine time parameter λ. We denote the tangent vector to the worldline by v^μ=z^μλ. For l=0, we use the conventions 𝒟^(0)=Id and I^μνα_1…α_l=I^μν. At this level, the multipoles I^μνα_1…α_l are still arbitrary functions. §.§.§ Perturbative treatment One can show that it makes sense to treat the expansion (<ref>) perturbatively, and consequently to truncate it at any desired order. By analogy with the non-relativistic case, we expect the multipoles to scale as I^μνα_1…α_l∼μℓ^l, with μ and ℓ being respectively the typical mass and size of the extended body under consideration. Moreover, we expect the l-fold covariant derivative to scale as ∇_α_1…α_l∼ r^-l, with r the typical curvature radius of the background metric. All in all, the l^th term of the expansion (<ref>) scales as μ(ℓ/r)^l. Then, if the object is assumed to be compact in the sense mentioned in the introduction, we have r≫ℓ and consequently ℓ/r≪ 1. The expansion (<ref>) can thus be truncated in a perturbative sense. A truncation at l=0 will correspond to the monopole approximation, l=1 to the pole-dipole (or simply dipole) one, l=2 to the quadrupole, etc. §.§.§ Dixon and Ellis representations To go further, we shall choose an explicit form for the operator 𝒟^(l)_α_1…α_l. Two equivalent choices have been used in the literature <cit.>. The most common is the Dixon representation <cit.> 𝒟^(k)_α_1…α_k ≜∇_α_1…∇_α_k. In this text, we will instead make another choice, the Ellis representation, defined by <cit.> 𝒟^(l)_α_1…α_l ={[ 0 if l<l_max; ∂_α_1…∂_α_l_max if l=l_max ]., with l_max the order in l at which the multipole expansion Eq. (<ref>) is chosen to be truncated. Both of these representations have advantages and drawbacks, which are nicely reviewed in <cit.>. For our purposes, it will be more convenient to work in the Ellis representation, since the computations involved at dipole order turn out to be less technical, and thus more suited for an introductory exposition. §.§.§ Reduction of the stress-energy tensor The above skeletonization amounts to replacing the smooth object by a collection of multipole moments supported on a single worldline contained in the object's worldtube.
Actually proving the validity of the decomposition (<ref>) would require to show that there exists some expressions of the multipole moments I^μνα_1…α_l such that both the smooth and the distributional stress-energy tensors generate the same spacetime curvature though the Einstein field equations G_μν=8π T_μν. This task being extremely involved, we will not attempt to tackle it in the present text, but rather consider (<ref>) for granted. The interested reader would fruitfully refer to <cit.> for a more formal exposition of the subject. In this text, we will instead discuss the so-called reduction of the stress-energy tensor: as it is always the case in GR, our stress-energy tensor must be conserved and consequently obeys <cit.> ∇_μ T^μν=0. As we will show, this conservation equation highly constrains the form of the stress-energy tensor. At quadrupole order, one can show that it implies that there must exist a vector p^μ, and antisymmetric tensor S^μν and a rank four tensor J^μνρσ exhibiting the same symmetries than the Riemann tensor such that <cit.> T^μν(x) =∫_γλ [v^(μ p^ν)+1/3R(μαβγJ^ν)αβγ]δ_4(x,z)+∇_α∫_γλ v^(μS^ν)αδ_4(x,z) -2/3∇_α∇_β∫_γλ J^α(μν)βδ_4(x,z). Moreover, during the reduction process, we find additional constraints taking the form of differential equations for the quantities p^μ, S^μν and J^μνρσ that turn out to be precisely the MPD equations. Notice that Eq. (<ref>) agrees exactly with the form of the stress-energy tensor which has been found through the Lagrangian formulation, given in Eq. (<ref>). §.§ Some formalism: normal form and Tulczyjew's two theorems In order to derive the implications of the conservation equation (<ref>), it is useful to introduce first some formalism. In what follow, we will always work with tensors fields admitting the following type of decomposition: T^K(x)=∑_L=0^N∇_L∫_γ I^KL(y)δ_4(x,y)τ. Here, N∈ℕ, K and L stand for multi-indices and I^KL_L=0,…,N is a collection of multipole moments defined on a timelike worldline γ. We begin by defining the so-called normal form of such a tensor: [Normal form] A tensor T^K of the type (<ref>) is said to be in normal form if there exists a collection of multipole moments ℐ^KL_L=0,…,N such that T^K can be written as T^K(x)=∑_L=0^N∇_L∫_γℐ^KL(y)δ_4(x,y)τ and which are * symmetric with respect to the permutation of any two indices of the multi-index L; * orthogonal to v^α=y^ατ with respect to any index of the multi-index L. We refer to the moments ℐ^KL_L=0,…,N as the normal multipole moments of T^K. Being equipped with that definition, we can now state the two central results of this section, due to Tulczyjew <cit.>. [Tulczyjew first theorem] Let T^K (K∈ℕ) be a tensor of the type (<ref>). There exists a unique collection of multipole moments ℐ^KL_L=0,…,N allowing to put T^K in its normal form. Moreover, the ℐ^KL_L=0,…,N can be completely written in terms of the original multipoles I^KL_L=0,…,N. [Tulczyjew second theorem] Let T^K (K∈ℕ) be a tensor of the type (<ref>) and ℐ^KL_L=0,…,N its normal multipole moments. Then, for any K,N∈ℕ T^K(x)=0 ⇔ ℐ^KL(x)=0, ∀L=0,…,N. We will not carry out the explicit proofs of these theorems, since they are rather technical. The interested reader can consult <cit.> and references therein for more details. Notice that a generalization of Theorem <ref> was demonstrated in the recent paper <cit.>. Let us also mention that the uniqueness of the normal form stated in Theorem <ref> is a simple consequence of Theorem <ref>. 
The existence of the normal form can be either motivated in a general way or explicitly constructed for any particular values of K,N. In the next section, we will explicitly work out the normal form of the pole-dipole stress-energy tensor (K=(μν), N=1 case). §.§ Explicit reduction at pole-dipole order For performing the reduction of the pole-dipole stress-energy tensor, we start with the decomposition T^μν=∫_γ I^μνδ_4τ+∇_α∫_γ I^μναδ_4τ. Already in this simple approximation, we see that the stress-energy tensor contains 10+40=50 degrees of freedom, which is much more than the 4+10=14 degrees of freedom encompassed by the dynamical variables p^μ and S^μν of the MPD equations. As we will see, the additional degrees of freedom will be eliminated when requiring the conservation equation (<ref>) to hold. This equation takes here the form ∇_μ∫_γ I^μνδ_4λ+∇_μα∫_γ I^μναδ_4λ=0. The actual strategy we will follow to derive the consequences of this constraint will consists in two steps: (i) put all the contributions to the LHS of Eq. (<ref>) in normal form, which will enable us to (ii) use Tulczyjew second theorem <ref> to set all the normal multipole moments obtained to zero. *Normal form of the stress-energy tensor. As a warm-up exercise, we will construct the normal form of the stress-energy tensor (<ref>). Putting any tensor in its normal form will be possible thanks to a small number of “irreducible” operations that should be performed in a well-suited order that will be described below. We will treat the various terms of the tensors one by one. For convenience, we will underbrace with a subscript “NF” the terms being already in normal form. For instance, the first term of the RHS of Eq. (<ref>) is already in normal form: T^μν=∫_γ I^μνδ_4τ_NF+∇_α∫_γ I^μναδ_4τ. We will now consider the second term. The only task to be performed is here to make I^μνα orthogonal to the tangent vector with respect to the index α. In order to achieve this aim, we should perform the two following operations: * [OP 1] Introduce an orthogonal decomposition with respect to v^μ on the relevant indices. This is done thanks to the introduction of the projector ρ^α_β≜δ^α_β+v^α v_β. One has I^μνα=I^μνβρ_β^α-I^μνβv_β v^α≜ I^μνα̂-v^α I^μν v, with the notations I^μνα̂≜ρ^α_β I^μνβ and I^μν v≜ I^μναv_α. By construction, the first term of this decomposition is orthogonal to the tangent vector and thus in normal form: ∇_α∫_γ I^μναδ_4τ=∇_α∫_γ I^μνα̂δ_4τ_NF-∇_α∫_γ v^α I^μν vδ_4τ. * To deal with the second term, we shall [OP 2] use the following property: For any tensor T^K, ∇_α∫_γ T^Kv^αδ_4τ=∫_γṪ^Kδ_4τ, where we have introduced the notation Ṫ^K≜D T^K/τ≜ v^λ∇_λ T^K. This identity is rather easy to prove, see e.g. the clear discussion in <cit.>. It allows to write ∇_α∫_γ v^α I^μν vδ_4τ=∫_γİ^μν vδ_4τ._NF Gathering all the pieces together, we obtain T^μν =∫_γ(I^μν-İ^μν v)δ_4τ+∇_α∫_γ I^μνα̂δ_4τ._NF In other words, the normal multipole moments of the pole-dipole stress-energy tensor are given by ℐ^μν =I^μν-İ^μν v, ℐ^μνα =I^μνα̂. This procedure can be generalized to higher orders in the multipole expansion, see e.g. <cit.> for explicit computations at quadrupole level. *Normal form of the conservation equation. We now repeat the very same game for the LHS of the conservation equation (<ref>), which will then enable us to make use of Theorem <ref>. We also remind the reader that, because the stress-energy tensor is symmetric, the moments I^μν and I^μνα are also symmetric with respect to the indices μ and ν. The first term of Eq. 
(<ref>) can be put in normal form using [OP 1] and [OP 2], leading to ∇_μ∫_γ I^μνδ_4τ = -∫_γİ^vνδ_4τ+∇_μ∫_γ I^μ̂νδ_4τ._NF Regarding the second term, we first use two times [OP 1] and making use of the symmetric character of the stress-energy tensor, which allows to write ∇_μα∫_γ I^μναδ_4τ =∇_μα∫_γ(I^νμ̂α̂-v^μ I^ν vα̂-v^α I^νμ̂v+v^μ v^α I^ν vv)δ_4τ. The four terms of the RHS of this last equation will be treated independently: * To make the first term symmetric in the indices μ̂ and α̂, we will [OP 3] use the the decomposition ∇_μα=∇_(μα)+∇_[μα] and [OP 4] invoke the Ricci identities <cit.> to simplify the antisymmetric contribution. In this case, it yields ∇_μα∫_γ I^νμ̂α̂δ_4τ =1/2∫_γ Rνλμα I^λμ̂α̂δ_4τ+∇_μα∫_γ I^ν(μ̂α̂)δ_4τ. _NF We have made use of the Ricci identities under the form (which is valid in Ricci-flat spacetime): ∇_[μα]I^νμ̂α̂ =1/2∇_μ∇_αI^νμ̂α̂=1/2RνλμαI^νμ̂α̂. * Regarding the second term, we first [OP 5] commute the two covariant derivatives in order to be able to apply [OP 2]: -∇_μα∫_γ v^μ I^ν vα̂δ_4τ =-∇_αμ∫_γ v^μ I^ν vα̂δ_4τ-∇_μ∇_α∫_γ v^μ I^ν vα̂δ_4τ = -∫_γ Rνλμαv^μ I^λ vα̂δ_4τ_NF -∇_α∫_γİ^ν vα̂δ_4τ To reach this last line, we have made use of [OP 2] and [OP 4]. To put the second term in normal form, we [OP 6] make use of the identity For any tensor T^Kα, ∇_α∫_γṪ^Kαδ_4τ = ∫_γD/τ(T^Kβv̇_β)δ_4τ+∇_α∫_γ(Ṫ^Kβρ^α_β+v̇^α T^K v)δ_4τ. _NF The RHS is in normal form, since differentiating v_α v^α=-1 implies the orthogonality relation v_αv̇^α=0. The proof is performed by expliciting the projector ρ^α_β: ∇_α∫_γṪ^Kα̂δ_4τ =∇_α∫_γ(Ṫ^Kβρ^α_β+T^Kβρ̇^α_β)δ_4τ =∇_α∫_γ[Ṫ^Kβρ^α_β+T^Kβ(v̇^α v_β+ v^αv̇_β)]δ_4τ. □ We finally obtain -∇_μα∫_γ v^μ I^ν vα̂δ_4τ =-∫_γ[D/τ(I^ν vβv̂_β)+Rνλμαv^μ I^λ vα̂]δ_4τ_NF -∇_α∫_γ(İ^ν vβρ^α_β+v̂^α I^ν vv)δ_4τ. _NF * Regarding the third term, we again use subsequently [OP 2] and [OP 6], leading to -∇_μα∫_γ v^α I^νμ̂vδ_4τ =-∫_γD/τ(I^νβ vv̇_β)δ_4τ_NF -∇_μ∫_γ(İ^νβ vρ^μ_β+v̇^μ I^ν vv)δ_4τ._NF * Finally, for the fourth term, we use consecutively two times [OP 2], yielding ∇_μα∫_γ v^μ v^α I^ν vvδ_4τ=∫_γÏ^ν vvδ_4τ+∇_μ∫_γv̇^μ I^ν vvδ_4τ. _NF Gathering all the pieces together, the normal form of the conservation equation (<ref>) finally reads ∫_γ[Rνλμα(1/2I^λμ̂α̂-v^μ I^λ vα̂)-2D/τ(I^ν(vβ)v̇_β)+Ï^ν vv-İ^ν v]δ_4τ -∫_γ(2İ^ν(vβ)ρ^α_β+v̇^α I^ν vv-I^να̂)δ_4τ+∇_αβ∫_γ I^ν(α̂β̂)δ_4τ=0. *Independent constraints. The conservation equation being in normal form, one can apply Theorem <ref> to read out the independent constraints. We are left with Ï^ν vv-İ^ν v+Rνλμα(1/2I^λμ̂α̂-v^μ I^λ vα̂)-2D/τ(I^ν(vβ)v̇_β)=0, 2ρ^α_βİ^ν(vβ)+v̇^α I^ν vv-I^να̂=0, I^ν(α̂β̂)=0. We will now analyse these constraints, starting from the last one. Applying the orthogonal decomposition [OP 1] to Eq. (<ref>) implies I^ν̂(α̂β̂)-v^ν I^v(α̂β̂)=0. Contracting this equation with v_ν and reinserting the result in the original relation yields the two independent constraints I^μ̂(α̂β̂)=0 and I^v(α̂β̂)=0. Using the symmetry of I^μνα in its two first indices allows to reduce the first constraint to I^μ̂α̂β̂=0. We now turn to the second constraint, Eq. (<ref>). We apply the orthogonal decomposition [OP 1] to the index ν, which leads to, after relabelling the indices: I^μ̂ν̂-v^ν(I^μ̂v+2v̇_σ I^σ(μ̂v)-2ρ^μ_ρİ^v(ρ v))-2ρ^μ_ρρ^ν_σ I^σ(ρ v)=0. Contracting this equation with v_ν yields I^μ̂v+2v̇_σ I^σ(μ̂v)-2ρ^μ_ρİ^v(ρ v)=0. For later purposes, it is useful to rewrite this equation as I^μ̂v=v_σρ^μ_ρD/τ(I^σρ v+I^σ vρ̂-v^ρ I^σ vv). 
Inserting this result in the previous equation yields the independent constraint I^μ̂ν̂=2ρ^μ_ρρ^ν_σD/τI^σ(ρ v)=ρ^μ_ρρ^ν_σD/τ(I^σρ v+I^σ vρ̂-v^ρ I^σ vv). Finally, antisymmetrizing the free indices of this equation leave us with the following differential constraint ρ_ρ^μρ_σ^νİ^v[ρσ]=0. Introducing the spin tensor as S^μν≜ 2 I^v[μν]=2(I^v[μν̂]-v^ν I^v[μ v]), we can rewrite this equation as ρ^μ_ρρ^ν_σṠ^ρσ=0. As we will see later on, this is precisely the pole-dipole MPD equation describing the evolution of the spin tensor. Before going further on, let us remark that, using Eq. (<ref>) and the definition of the spin tensor, a few algebra allows to write the decomposed moments I^μ vv and I^vμ̂ν̂ solely in terms of the spin tensor: I^μ vv =S^μνv_ν, I^vμ̂ν̂ =1/2S^μν-v^[μS^ν]αv_α. The proof of these identities is not complicated and is left as an exercise for the interested reader. Finally, let us look at the first constraint, Eq. (<ref>). § ELLIS SKELETON IN ADAPTED COORDINATES: TO DIPOLE ORDER This section mainly follows the exposition of <cit.>. Performing the explicit reduction of the stress-energy tensor at quadrupole order is computationally quite involved, and can be found in <cit.>. As a proof of principle, we will instead perform the reduction at pole-dipole level and show that it leads to the MPD equations at the corresponding order. In the following, we will work with Ellis representation of multipoles. Explicitly, Ellis representation truncated at order N takes the form T^μν_skel(x) =1/N!∫_γλ I^μνα_1…α_N(z)∂_α_1…∂_α_Nδ_4(x,z) with z^μ=z^μ(λ) the coordinates of the worldline. Since T_skel^μν is a distributional stress-energy tensor, Eq. (<ref>) shall be formally understood in the sense of distributions: for any symmetric test function ϕ_μν, the quantity ∫_𝒯[4]x√(-g) T_skel^μν(x)ϕ_μν(x) is a real number. Integrating distributional quantities against test functions will be the main tool we will use to show that the conservation of the stress-energy tensor leads to the MPD equations. Notice that since T^μν is symmetric and since the partial derivatives commute, the moment I^μνα_1…α_N must obey the algebraic symmetries I^μνα_1…α_k=I^(μν)α_1…α_k=I^μν(α_1…α_k). Finally, let us comment on two drawbacks of Ellis representation: (i) from their very definitions, the moments I^μνα_1…α_N are not tensors, since they are contracted on their last N indices with partial derivatives, and since the total stress-energy tensor must be itself a tensorial object. Moreover, (ii) for a given order of truncation N, the full multipolar structure of the test object is represented by a single moment I^μνα_1…α_N. There is thus no a priori decomposition of this moment between a hierarchy of moments (monopole, dipole…). As we will see in the following, such a split can be obtained by introducing a specific system of coordinates, the so-called adapted coordinates. §.§.§ Ellis skeleton in adapted coordinates We now turn to coordinates adapted to the worldline, x^μ→ X^μ=(X^0,𝐗), with 𝐗≜(X^1,X^2,X^3). They are required to satisfy X^0_γ =λ, X^i_γ=0 ⇒ v^0_γ=X^0λ_γ=1, v^i_γ=X^iλ_γ=0. An example of explicit construction of this type of coordinates are Fermi normal coordinates <cit.>. In what follows, we will systematically assume that such coordinates can be constructed over the worldtube 𝒯 of the body. 
Using these coordinates, we can write ∫_𝒯[4]X√(-g) T_skel^μνϕ_μν =(-1)^N/N!∫_γλ I^μνα_1…α_N∂_α_1…∂_α_Nϕ_μν_γ =∑_k=0^N(-1)^N/N!N!/k!(N-k)!∫_γλ I^μν i_1… i_k0…0∂_i_1…∂_i_k∂^N-k_0ϕ_μν_γ =∑_k=0^N(-1)^k/k!(N-k)!∫_γλ ∂^N-k_0 I^μν i_1… i_k0…0∂_i_1…∂_i_kϕ_μν_γ. It is therefore useful to define a new collection of moments γ_(N)^i_1… i_k≜1/(N-k)!∂^N-k_0 I^μν i_1… i_k0…0, k≤ N. They still satisfy the algebraic symmetries γ_(N)^μν i_1… i_k=γ_(N)^(μν) i_1… i_k=γ_(N)^μν (i_1… i_k). In terms of the new moments (and still in adapted coordinates), Ellis decomposition is equivalently given by T^μν_skel=1/√(-g)𝒯^μν, 𝒯^μν≜∑_k=0^N1/k!γ_(N)^μνi_1…i_k(λ)∂_i_1…∂_i_kδ^(3)(𝐗). Ellis decomposition in adapted coordinates Notice that since T^μν_skel is a tensor, 𝒯^μν is a tensor density of weight -1. The proof of Eq. (<ref>) consists into integrating this expression against an arbitrary, symmetric test function ϕ_μν: ∫_𝒯[4]X√(-g) T_skel^μνϕ_μν =∑_k=0^N1/k!∫_γλ∫[3]X γ_(N)^μν i_1… i_k∂_i_1…∂_i_kδ^(3)(𝐗)ϕ_μν =∑_k=0^N(-1)^k/k!∫_γλ γ_(N)^μν i_1… i_k∂_i_1…∂_i_kϕ_μν. We therefore recover the expression obtained in Eq. (<ref>) using the definition Eq. (<ref>). Before turning to the derivation of the MPD equations using the conservation of the stress-energy tensor, let us make a couple of remarks. As we will check explicitly at the dipole level, the multipoles γ are fully determined by the distribution of stress-energy, whereas the Is are not, thus leading to the appearance of a gauge freedom in their definition. This can be seen from Eq. (<ref>), since the construction of the Is from the γs require to integrate with respect to time, thus leading to the appearance of arbitrary integration constants. Moreover, as we will see when discussing the relation between distributional and smooth stress-energy tensors, the split of I^μνα_1…α_N in k moments γ^μν i_1… i_k actually corresponds to a physical split between a monopole, a dipole, etc. Notice that the price to pay for obtaining simple computations in Ellis representation is that we are forced to work in a specific coordinate system, the adapted coordinates. Moreover, the multipole decomposition tuned to this system is not explicitly covariant. Nevertheless, this framework allows to recover the results obtained with more involved approaches, e.g. Dixon representation. Finally, notice that a coordinate-free approach to multipoles can be formulated, see <cit.> for more details. §.§.§ Conservation equation for stress-energy tensor density In adapted coordinates, since the decomposition Eq. (<ref>) is valid, it is more convenient to write the conservation of the stress-energy tensor Eq. (<ref>) in terms of the tensor density 𝒯^μν introduced in Eq. (<ref>). Since T^μν_skel is a tensor, the covariant derivative appearing in Eq. (<ref>) can be expanded in terms of partial derivatives and Christoffel symbols. We get ∇_μ T^μν_skel =∂_μ T^μν_skel+2Γ^(μ_νρT^ν)ρ_skel =∂_μ(1/√(-g))𝒯^μν+1/√(-g)(∂_μ𝒯^μν+2Γ^(μ_νρ𝒯^ν)ρ) =1/√(-g)(∂_μ𝒯^μν+Γ^ν_μρ𝒯^μρ). The last equality uses the identity ∂_μ(√(-g))=√(-g)Γ^α_μα. At the end of the day, the conservation equation for the stress-energy tensor Eq. (<ref>) is equivalent to the following equation for the stress-energy tensor density ∂_μ𝒯^μν+Γ^ν_μρ𝒯^μρ=0. §.§.§ MPD equations in adapted coordinates Before investigating the consequences of this conservation equation, it is useful to write down the form that the MPD equations take in adapted coordinates. 
Since we will derive only the pole-dipole equations in next section, we set the force and torque terms to zero in all the equations below. Recalling that v^μ=δ^μ_0 on the worldline, the evolution equation for the spin (<ref>) takes the form ∇_0 S^μν=2p^[μv^ν]. Separating the spatial coordinates from the temporal one, it is equivalent to the two following equations ∇_0 S^0i =-p^i, ∇_0 S^ij=0. Using the definition of the Riemann tensor, the evolution equation for the momentum (<ref>) takes the form ∇_0 p^μ =-1/2Rμ0αβS^αβ =-(∂_αΓ^μ_0β+Γ^μ_αλΓ^λ_0β)S^αβ =-Γ̇^0_0μS^0μ-∂_iΓ^0_0μS^iμ-Γ^μ_αλΓ^λ_0βS^αβ, where we use the notation ≜∂_0. §.§.§ Recovering MPD equations from stress-energy conservation We will now truncate the multipole expansion to dipole order, and study the constraints enforced by the conservation equation (<ref>). At dipole order, Ellis decomposition takes the form 𝒯^μν=γ^μνδ^(3)(𝐗)+γ^μν i∂_iδ^(3)(𝐗) To lighten the notations, we drop the subscript “(N)” from the multipoles γ in the continuation of this section. For such a choice of stress-energy density, the conservation equation (<ref>) is a distributional identity. It will be satisfied provided that ∫[3]X (∂_μ𝒯^μν+Γ^ν_μρ𝒯^μρ)ϕ_ν=0 for any arbitrary test function ϕ_ν. Inserting the decomposition Eq. (<ref>) in this equation and integrating by parts yields (γ̇^0ν+Γ^ν_μργ^μρ-∂_iΓ^ν_μργ^μρ i)ϕ_ν -(γ̇^0ν i+γ^iν+Γ^ν_μργ^μρ i)∂_iϕ_ν+γ^jν i∂_i∂_jϕ_ν=0 Because this identity shall be valid for any choice of test function ϕ_ν, it is equivalent to the set of three equations γ^ν (ij)=0, γ̇^0ν i+γ^iν+Γ^ν_μργ^μρ i=0, γ̇^0ν+Γ^ν_μργ^μρ-∂_iΓ^ν_μργ^μρ i=0. We will now prove that this set of constraints is consistent with setting γ^μν =p^(μv^ν)+v^ρΓ^(μ_ρσS^ν)σ +∂_0(v^(μS^ν)0 ), γ^μν i =v^(μS^ν) i in adapted coordinates, where p^μ is a vector and S^μν an antisymmetric tensor. Notice that these relations can be inverted as S^0i =γ^00i, S^ij =2γ^0ij, p^μ =γ^μ0-Γ^μ_0iγ^00i. The two first identities are straightforward to prove, whereas the latter is a little bit more involved. First, remark that one can write Ṡ^μ0=p^μ-p^0δ^μ_0+Γ^μ_0ρ S^0ρ+Γ^0_0ρS^ρμ. We then obtain γ^μ0 =1/2(p^μ+p^0δ^μ_0+Γ^μ_0ρS^0ρ+Γ^0_0ρS^μρ+Ṡ^μ0) =p^μ+Γ^μ_0ρS^0ρ =p^μ+Γ^μ_0iγ^00i, which completes the proof. We will now plug the decomposition Eq. (<ref>) into the constraint equations. Eq. (<ref>) is automatically satisfied, since S^μν is an antisymmetric tensor. Eq. (<ref>) reduces to δ^(ν_0[Ṡ^0)i+Ṡ^i)0+p^i)]+Γ^(i_0ρS^0)ρ+Γ^ν_0ρS^ρ i=0. Setting ν=j, we are left with Ṡ^ij+Γ^i_μ 0S^μ j+Γ^j_μ 0S^iμ=0 ⇔ ∇_0S^ij=0. For ν=0, we get Ṡ^0i+Γ^0_μ 0S^μ i+Γ^i_μ 0S^0μ+p^i=0 ⇔ ∇_0 S^0i=-p^i. These two equations are precisely the spin evolution equation in adapted coordinates, given by Eq. (<ref>). The last constraint Eq. (<ref>) is the most involved to deal with. It reads ṗ^(0δ^ν)_0+∂_0(Γ^(0_0σS^ν)σ)+∂_0(δ^(0_0Ṡ^ν)0)+Γ^ν_μ0(p^μ+Ṡ^μ0) +Γ^ν_μαΓ^μ_0βS^αβ-∂_iΓ^ν_0ρ S^ρ i=0. For ν=0, direct algebra leads to ∇_0 p^0 =-Γ̇^0_0μS^0μ-∂_iΓ^0_0μS^iμ-Γ^0_μαΓ^μ_0βS^αβ, which precisely reproduce the component μ=0 of Eq. (<ref>). For ν=i, one shall use Eq. (<ref>) to replace the term 1/2∂_0Ṡ^i0. After some algebra, we find ∇_0 p^i =-Γ̇^i_0μS^0μ-∂_jΓ^i_0μS^jμ-Γ^j_μαΓ^μ_0βS^αβ, which again agrees with Eq. (<ref>). §.§.§ Comparison with stress-energy tensor from Lagrangian formulation Let us summarize our findings. We have proven that, in adapted coordinates, the stress-energy conservation equation (<ref>) enforces the pole-dipole stress energy tensor in Ellis formulation to be given by the decomposition Eq. 
(<ref>), where p^μ and S^μν satisfy the pole-dipole MPD equations, and can thus be identified to the linear momentum and the spin dipole of the body. In last chapter, we also found an explicit form for the stress-energy tensor in terms of the momentum and the spin, given by Eq. (<ref>). It is however not obvious that the two formulation agree one with another. To prove this, one can integrate the Lagrangian stress-energy tensor Eq. (<ref>) truncated at dipole order against a test function, still working in adapted coordinates: ∫[4]X√(-g)∫_γλ [p^(μv^ν)δ_4(X,Z)-∇_λ(S^λ(μv^ν)δ_4(X,Z))]ϕ_μν =∫_γλ p^(μv^ν)ϕ_μν+∫_γλ S^λ(μv^ν)∇_λϕ_μν =∫_γλ [p^(μv^ν)-S^λ(μΓ^ν)_λρv^ρ-∂_0(S^0(μv^ν))]ϕ_μν+∫_γλ S^i(μv^ν)∂_iϕ_μν =∫_γλ γ^μνϕ_μν-∫_γλ γ^μν i∂_iϕ_μν, where all the quantities are understood to be evaluated on the worldline when integration over spacetime has been dropped. The last line precisely agrees with Ellis decomposition in adapted coordinates given in Eq. (<ref>), which proves the equivalence of the two frameworks. §.§.§ Link with smooth stress-energy distribution A last nice feature of Ellis representation in adapted coordinates is that it allows to easily relate the distributional stress-energy tensor T^μν_skel to the smooth, physical stress-energy tensor T^μν it represents. Since we are interested in compact objects, we can always assume that the stress-energy tensor has compact support on the spatial slices defined in adapted coordinates. We can then define a one parameter family of regular “squeezed stress-energy tensors”, given by T_ϵ^μν(λ,𝐗)≜1/ϵ^3T^μν(λ,𝐗/ϵ). For ϵ≪ 1, one has T^μν_ϵ(λ,𝐗)=γ̃^μν(λ)δ^(3)(𝐗)+ϵγ̃^μν i(λ)∂_iδ^(3)(𝐗)+𝒪(ϵ^2), with γ̃^μν(λ)=∫[3]X√(-g) T^μν(λ,𝐗), γ̃^μν i(λ)=∫[3]X√(-g) X^i T^μν(λ,𝐗) The proof proceeds by a simple change of variables W^i=X^i/ϵ and a Taylor expansion around 𝐗=0: ∫[4]X√(-g) T^μν_ϵ(λ,𝐗)ϕ_μν(λ,𝐗) =∫_γλ∫[3]W√(-g) T^μν(λ,𝐖)ϕ_μν(λ,ϵ𝐖) =∫_γλ∫[3]W√(-g) T^μν(λ,𝐖)ϕ_μν(λ,0) +ϵ∫_γλ∫[3]W√(-g) T^μν(λ,𝐖)W^i∂_iϕ_μν(λ,0)+𝒪 (ϵ^2), which naturally leads to the result announced. This result is easily extended to higher orders. The moments γ̃ have now the real physical interpretation of a mono-pole, a dipole, etc. Notice that if we consider stress-energy distributions representing compact objects, there is no need to introduce an expansion parameter, since the moment γ̃^μν i_1… i_k will naturally scales as μ(ℓ/r)^k, with ℓ/r≪ 1, as discussed in the previous section. We can thus formally set ϵ=1 in the development above, the consistency of the truncation of the Taylor series being granted by the existence of this new small parameter. In this case, the squeezed stress-energy tensor is equal to the physical one, and γ̃^μν i_1… i_k=γ_(N)^μν i_1… i_k. For compact objects, there is thus a natural relationship between the distributional moments and the smooth, physical stress-energy tensor. More concretely, we can provide expressions for the “physical” moments using Eq. (<ref>). For the spin tensor, one has S^0i =γ^00i =∫[3]X√(-g) X^iT^00 S^ij =2γ^0ij =2∫[3]X√(-g) X^iT^j0 =∫[3]X√(-g)(X^i T^j0-X^j T^i0). Gathering these two results, we find back the formula mentioned in the introduction, S^μν=∫_X^0=cst[3]X√(-g)(δX^μT^ν0-δX^νT^μ0), Spin tensor from the stress-energy tensor since, in adapted coordinates, δ X^0=0 and δ X^i=X^i. The same game can be played for the momentum p^μ. We obtain p^μ =∫[3]X√(-g) T^μ0-Γ^μ_0νS^0ν. Linear momentum from the stress-energy tensor In flat spacetime, this equation reduces to the standard result p^μ=∫_X^0=cst[3]X√(-g)T^μ 0. We end here this discussion. 
The main point to highlight is that Ellis skeletonization in adapted coordinates has allowed us to obtain a more concrete viewpoint on the physical significance of the linear momentum and the spin tensor, which can now be understood as “physical” multipole moments of the smooth stress-energy tensor. CHAPTER: SPIN SUPPLEMENTARY CONDITIONS Let us look back at what has been achieved so far. In Chapter <ref>, we have derived the explicit form of the equations of motion for extended test bodies through the Lagrangian formulation, whereas Chapter <ref> has allowed us to gain more insight into the physical meaning of the dynamical variables p^μ, S^μν… We now have at our disposal ten differential equations describing the motion of test bodies in curved spacetime, namely the MPD equations (<ref>) and (<ref>). However, the motion of a test body is described by fourteen dynamical quantities v^μ, p^μ and S^μν=S^[μν]. The system of equations is consequently not closed, and we are left with four extra dynamical quantities. One extra condition can be enforced by setting the time parameter to the proper time of the worldline, λ=τ, yielding v_μ v^μ=-1. Nevertheless, we are still left with three missing constraints. In both descriptions studied so far, these missing constraints carry a clear physical interpretation: * From the Lagrangian viewpoint, they arise from the fact that the orientation of the body is entirely described by means of the rotational degrees of freedom of the Lorentz matrix appearing in Eq. (<ref>). As we shall see in Section <ref>, the boost degrees of freedom turn out to be redundant with the linear momentum p^μ, which is expected since this latter quantity already describes the evolution of the position of the “center” of the body along the worldline. Therefore, allowing for degrees of freedom boosting the center position does not add any new physical degree of freedom to the system. These boost degrees of freedom are gauge degrees of freedom, and they can be fixed without affecting the physical state of the system. * From the skeletonization perspective, we have not yet specified along which worldline (belonging to the body's worldtube) the multipole moments are defined. This fact accounts for the degrees of freedom that remain to be specified. A natural choice is to set the worldline to describe the evolution of the position of the body's center of mass (COM) at each instant of the time evolution. However, as we will review in Section <ref>, the relativistic notion of center of mass is observer-dependent, and we shall therefore specify with respect to which observer our choice has been made. In both cases, we have three unnecessary degrees of freedom, which can be fixed by enforcing an algebraic relation of the form 𝒱_μ S^μν=0, where 𝒱^μ is some normalized timelike vector. Such a constraint is known as a covariant spin supplementary condition (SSC). This chapter will be devoted to the study of such constraints, and to the understanding of their emergence from the physical prescriptions discussed above. Remark that Eq. (<ref>) actually contains only three independent constraints, since 𝒱_μ𝒱_ν S^μν=0 follows automatically from the antisymmetry of the spin tensor. This chapter is structured as follows: Section <ref> will review the problem of the center of mass in Special Relativity. Sections <ref> and <ref> will respectively discuss spin supplementary conditions from the skeleton and the Lagrangian perspectives.
In Section <ref>, we will relate the presence of SSCs to an orthogonal split of the spin tensor with respect to the timelike direction 𝒱^μ. In particular, this will enable us to show that, when a SSC has been enforced, one can always express the content of the spin tensor in terms of a single spin vector. Section <ref> will discuss the main SSCs that have been studied in the literature, that is the various choices of 𝒱^μ that can be made. § INVITATION: CENTER OF MASS IN SPECIAL RELATIVITY The notion of center of mass becomes an observer dependent notion in relativity. This will be crucial for understanding the status of SSCs from the perspective of gravitational skeletonization (fixation of the worldline). We will illustrate this fact in the simpler framework of Special Relativity (SR). Consider a spinning body in SR described by a center position x^μ and an intrinsic (spin) angular momentum S^μν. Its linear momentum is denoted p^μ. The total angular momentum of the system is <cit.> J^μν=x^μ p^ν- x^ν p^μ+S^μν. Because of the Poincaré invariance of the system, J^μν and p^μ should be conserved with respect to Poincaré transformations. However, the choice of the center x^i is a priori arbitrary. Therefore, for a change of center position x^i→ x^i+δ x^i, one shall have the transformation rules S^ij→ S^ij+δ x^ip^j-δ x^j p^i, S^0i→ S^0i+δ x^i E with E≜ p^0 the energy of the body. From the transformation rule Eq. (<ref>), we can make two observations: (i) the components S^0i of the spin tensor take the interpretation of the mass dipole of the object, relative to the center x^i. (ii) Unlike in classical mechanics, the center of mass (that is, the choice of center for which the mass dipole vanishes) becomes an observer dependent notion, as graphically depicted in Figure <ref>. However, because of the transformation rule Eq. (<ref>), one can always choose the center x^i such that the mass dipole vanishes for one specific timelike observer of velocity 𝒱^μ. These observer-dependent COMs are sometimes referred to as centroids in the literature. In its proper frame, the four-velocity of an observer becomes 𝒱^μ̂=(1,0). Therefore, the condition S^0̂î=0 can be covariantly written 𝒱_μ S^μν=0, which correspond to the generic canonical form of the generic covariant SSC Eq. (<ref>). § SSCS IN THE SKELETON FORMULATION: FIXATION OF THE WORLDLINE Having gained some intuition from SR, let us now turn back to our original problem. In the skeleton formulation of test bodies, we want to fix the representative worldline of the body. The most physically meaningful way of proceeding is to choose the worldline tight to a given timelike centroid. As we will see, this still corresponds to choose S^0̂μ̂ to be vanishing in the proper frame of the timelike observer of four-velocity 𝒱^μ characterizing the centroid. Actually, from the multipole decomposition of the spin tensor Eq. (<ref>), we can write S^0μ =-∫_x^0=cst[3]x√(-g) (x^μ-x^μ_c) T^00 =x^μ_c∫_x^0=cst[3]x√(-g) T^00-∫_x^0=cst[3]x√(-g) x^μ T^00 =x^μ_c(p^0+Γ^0_0νS^0ν)-∫_x^0=cst[3]x√(-g) x^μ T^00, where the last equality follows from Eq. (<ref>). Now, asking that S^0̂μ̂=0 in the proper frame of a given observer amounts to set x^μ̂_c=1/p^0̂∫_x^0̂=cst[3]x̂√(-g) x^μ̂T^0̂0̂ in that frame, i.e. to set the worldline x_c^μ̂ to be the center of mass of the body with respect to that observer. For the very same reason as in SR, the condition S^0̂μ̂=0 can be written covariantly under the form Eq. (<ref>). 
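This observer dependence is easy to make concrete. The sketch below applies the transformation rules above to shift the centre so that the mass dipole vanishes for the lab observer 𝒱^μ=(1,0,0,0); all numbers are illustrative assumptions, and a different observer would require a different shift.

```python
# Centre-of-mass shift in special relativity (mostly-plus Minkowski metric).
# All numerical inputs are illustrative assumptions.
import numpy as np

E = 2.0                                   # energy p^0
p = np.array([0.4, -0.1, 0.2])            # spatial momentum p^i
S0i = np.array([0.3, 0.5, -0.2])          # mass dipole S^{0i} about the initial centre
Sij = np.array([[0.0,  1.0, -0.4],
                [-1.0, 0.0,  0.7],
                [0.4, -0.7,  0.0]])       # spatial spin S^{ij} about the initial centre

# Shift of the centre x^i -> x^i + dx^i:
#   S^{0i} -> S^{0i} + dx^i E ,   S^{ij} -> S^{ij} + dx^i p^j - dx^j p^i .
# Choosing dx^i = -S^{0i}/E kills the mass dipole seen by the lab observer
# V^mu = (1,0,0,0), i.e. it enforces V_mu S^{mu nu} = 0 in this frame.
dx = -S0i / E
S0i_new = S0i + dx * E
Sij_new = Sij + np.outer(dx, p) - np.outer(p, dx)

print("new mass dipole S^{0i}:", S0i_new)      # zero vector
print("new spatial spin S^{ij}:\n", Sij_new)
# A different observer (different V^mu) requires a different dx^i: this is the
# observer dependence of the relativistic centre of mass.
```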
We have therefore reached the following conclusion: enforcing a given covariant SSC Eq. (<ref>) amounts to set the representative worldline of the body to be its center of mass with respect to a timelike observer of four-velocity 𝒱^μ. § SSCS IN THE LAGRANGIAN FORMULATION: SPIN GAUGE SYMMETRY We now turn to the analysis of spin supplementary conditions from the perspective of Lagrangian formulation. This discussion will be of prime relevance for turning to the Hamiltonian formulation, as will be undertaken in Part <ref> of this thesis. As already discussed in Chapter <ref> for the geodesic case, turning to the Hamiltonian description first requires to check whether all the momenta are independent or not, that is if the system is subjected to constraints. The mass-shell constraint will be discussed in Chapter <ref>, we will here focus on the spin degrees of freedom contained in the Lorentz matrix[From now on in this thesis, we drop the underlined indices for the sake of lisibility of the present discussion.] ΛAB. One can show that the degrees of freedom implemented in the Lorentz matrix are not all independent from the linear momentum p^μ. Since we have the freedom to perform local Lorentz transformations without affecting the physical state of the system, let us consider the specific transformation LAB≜δ^A_B-2p̂^AΛ_0B-w^A w_B/p̂_C w^C, where p̂^A≜ p^A/μ and w^A≜p̂^A+ΛA0. Under this transformation, the ΛA0 components of the matrix transform as ΛA0 → LABΛB0=ΛA0+2p̂^A-w^A=p̂^A. The other components transform as ΛAi→ΛAi-ΛBip_B w^A/p_Cw^C. Therefore, using the freedom of performing local Lorentz transformations, we were able to bring the ΛA0 components of the Lorentz matrix to p̂^A=eAμp̂^μ. They are consequently redundant degrees of freedom and yield to the presence of constraints when turning to Hamiltonian formulation. As we will discuss in more details after having established the symplectic structure on phase space for spinning test bodies (see Chapter <ref>), the corresponding constraints will read ϕ^μ≜ w_α S^αμ=(p̂_α+Λ_0α)S^αμ≈ 0. This is very similar to the spin supplementary condition Eq. (<ref>). Different SSCs will correspond to different choices of Λ_0α. Moreover, these constraints turn out to be first class, and the redundant degrees of freedom that we want to get rid of by enforcing a specific SSC are genuine gauge degrees of freedom of the system <cit.>. Working under a specific SSC will correspond to work within a specific choice of gauge fixation, or, from the Hamiltonian perspective, on a given constraint surface lying in the phase space. As already mentioned, more details will be provided in Chapter <ref>. § SSCS AND ORTHOGONAL DECOMPOSITION OF THE SPIN TENSOR §.§.§ Orthogonal decomposition One can further explore the structure obtained when a covariant SSC has been enforced as follows. Let 𝒱^μ be a timelike unit vector. In full generality, we can perform an orthogonal decomposition of the spin tensor with respect to that direction, and write <cit.> S^μν =-ϵ^μνρσ𝒱_ρ S_σ+2 D^[μ𝒱^ν]. This relation inverts as S^μ=1/2ϵ^μνρσ𝒱_ν S_ρσ, D^μ=𝒱_α S^αμ. Notice that 𝒱^μ is orthogonal to both S^μ and D^μ. Now, identifying the vector 𝒱^μ with the one of Eq. (<ref>) shows that enforcing a covariant spin supplementary condition amounts to set D^μ=0 in the decomposition Eq. (<ref>). Therefore, when a covariant spin supplementary condition is enforced, the spin tensor is solely described in terms of a spin vector S^μ. 
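Coming back to the spin gauge transformation used in the Lagrangian formulation above, its two claimed properties, namely that L is itself a Lorentz matrix and that it maps Λ^A_0 onto p̂^A, can be checked numerically. The sketch below reads the (typographically ambiguous) definition as L^A_B = δ^A_B - 2p̂^AΛ_0B - w^Aw_B/(p̂_Cw^C) with a mostly-plus tetrad metric; both this index grouping and the random inputs are assumptions of the illustration.

```python
# Check of the "spin gauge" transformation bringing Lambda^A_0 onto phat^A.
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
rng = np.random.default_rng(0)

def expm(A, terms=80):
    # simple series matrix exponential (enough for the moderate generators used here)
    out, term = np.eye(4), np.eye(4)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# random local Lorentz matrix Lambda = exp(eta^{-1} omega), omega antisymmetric
omega = rng.normal(size=(4, 4)) * 0.3
omega = omega - omega.T
Lam = expm(np.linalg.inv(eta) @ omega)
assert np.allclose(Lam.T @ eta @ Lam, eta)            # Lambda is Lorentz

# random unit timelike phat^A
u = rng.normal(size=3) * 0.3
phat = np.array([np.sqrt(1 + u @ u), *u])             # phat.phat = -1

Lam0 = Lam[:, 0]                                      # Lambda^A_0
w = phat + Lam0                                       # w^A = phat^A + Lambda^A_0
w_dn, Lam0_dn = eta @ w, eta @ Lam0                   # lowered indices
L = np.eye(4) - 2 * np.outer(phat, Lam0_dn) - np.outer(w, w_dn) / (phat @ eta @ w)

assert np.allclose(L.T @ eta @ L, eta)                # L is itself a Lorentz matrix
assert np.allclose((L @ Lam)[:, 0], phat)             # boost d.o.f. mapped onto phat^A
print("(L Lam)^A_0 =", (L @ Lam)[:, 0])
```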
Moreover, the vector D^μ takes the natural interpretation of the mass dipole of the system. One can pass from one description to the other thanks to S^μ=1/2ϵ^μνρσ𝒱_ν S_ρσ ⇔ S^μν=-ϵ^μνρσ𝒱_ρ S_σ. §.§.§ SSC in adapted tetrad frame Any covariant spin supplementary condition takes a very simple form when the spin tensor components are expressed in a well-chosen background tetrad. Consider a background tetrad eAμ whose timelike leg is given by e0μ=𝒱^μ. This does not fix the tetrad uniquely, since its spatial legs can still be defined up to an arbitrary 𝖲𝖮(3) transformation. In this tetrad, the components of the spin tensor are defined as S^AB=eAμeBνS^μν. A direct consequence of the SSC Eq. (<ref>) is that S^0A=𝒱_μeAνS^μν=0. Therefore, when a covariant SSC is enforced, the only non-vanishing “adapted tetrad” components of the spin tensor are the purely spatial ones, S^IJ (with I,J=1,2,3). The tetrad components of the vectors D^μ and S^μ read D^A =(0,𝐃), 𝐃≜(S^10,S^20,S^30), S^A =(0,𝐒), 𝐒≜(S^23,S^31,S^12). Therefore, enforcing a covariant SSC amounts to setting 𝐃=0 in this formulation. Notice that the spacetime norms of the vectors S^μ and D^μ coincide with the (Euclidean) norms of the three-vectors 𝐒 and 𝐃, S^μ S^ν g_μν= S^I S^Jδ_IJ≜𝐒^2, D^μ D^ν g_μν= D^I D^Jδ_IJ≜𝐃^2. §.§.§ The two Casimir invariants As will be found when studying the Hamiltonian formulation of test bodies, there exist two Casimir invariants 𝒮^2≜1/2S_μνS^μν, 𝒮_*^2≜1/8ϵ_μνρσS^μνS^ρσ, whose Poisson brackets with any other dynamical variable vanish. Notice that 𝒮^2 is precisely the spin magnitude introduced before. Their explicit expressions become particularly enlightening when expressed in terms of the tetrad components of the vectors S^A and D^A. Straightforward algebra allows one to show that 𝒮^2 = S_0AS^0A+1/2S_IJS^IJ=𝐒^2-𝐃^2, 𝒮_*^2 =𝐒·𝐃. Here, the dot stands for the usual Euclidean scalar product. When a covariant SSC is enforced, the Casimir invariants reduce to 𝒮^2=𝐒^2=S_μ S^μ, 𝒮^2_*=0. Therefore, 𝒮^2 takes the simple interpretation of being the norm of the spin vector, whereas 𝒮_*^2 is set to zero when an SSC is enforced. § REVIEW OF THE VARIOUS SSCS In this final section, we review the main covariant SSCs studied in the literature, i.e. the possible choices of vector 𝒱^μ in Eq. (<ref>). It is interesting to ask how many independent degrees of freedom we are left with. This is a priori not clear, since the vector 𝒱^μ is unspecified and can include additional degrees of freedom. This problem should be discussed separately for each specific spin supplementary condition. A brief review of each SSC is provided below; more details may be found in the references mentioned in the text or in the more recent papers <cit.>. A pictorial overview is provided in Table <ref>. §.§.§ Corinaldesi-Papapetrou (CP) and Newton-Wigner (NW) These two SSCs can be written together as (ξ_μ+αp̂_μ)S^μν=0, Corinaldesi-Papapetrou/Newton-Wigner spin supplementary conditions where α=0 for the CP condition <cit.> and α=1 for the NW one <cit.>. Here, ξ^μ is an external timelike vector field that shall be specified. This kind of SSC amounts to choosing some “laboratory frame” characterized by ξ^μ for expressing the multipole moments of the object. We are left with two independent degrees of freedom in the spin tensor. The main advantage of this SSC is that it allows the motion to be formulated as a coordinate-time Hamiltonian system whose spin vector satisfies the canonical 𝖲𝖮(3) algebra <cit.>.
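The orthogonal split and the two Casimir relations introduced above can be verified numerically before we review the individual SSCs. The sketch below works with mostly-plus Minkowski components and the convention ϵ_0123=+1, which are assumptions of the illustration (flipping the ϵ convention flips the sign of S^μ but leaves both identities intact); the spin tensor and the observer 𝒱^μ are random.

```python
import numpy as np
from itertools import permutations

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def perm_sign(p):
    s, q = 1, list(p)
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            if q[i] > q[j]:
                s = -s
    return s

eps_dn = np.zeros((4, 4, 4, 4))                       # eps_{0123} = +1 (convention)
for p in permutations(range(4)):
    eps_dn[p] = perm_sign(p)
eps_up = np.einsum('ae,bf,cg,dh,efgh->abcd', eta, eta, eta, eta, eps_dn)

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)); S_up = A - A.T           # generic antisymmetric S^{mu nu}
u = rng.normal(size=3) * 0.4
V_up = np.array([np.sqrt(1 + u @ u), *u]); V_dn = eta @ V_up   # unit timelike V^mu

S_dn = eta @ S_up @ eta
S_vec = 0.5 * np.einsum('abcd,b,cd->a', eps_up, V_dn, S_dn)    # S^mu
D_vec = np.einsum('a,ab->b', V_dn, S_up)                       # D^mu = V_a S^{a mu}

# reconstruction  S^{mu nu} = -eps^{mu nu rho sig} V_rho S_sig + 2 D^[mu V^nu]
S_rec = (-np.einsum('abcd,c,d->ab', eps_up, V_dn, eta @ S_vec)
         + np.outer(D_vec, V_up) - np.outer(V_up, D_vec))
assert np.allclose(S_rec, S_up)

S2  = 0.5 * np.einsum('ab,ab->', S_dn, S_up)                   # S^2
S2s = 0.125 * np.einsum('abcd,ab,cd->', eps_dn, S_up, S_up)    # S*^2
assert np.isclose(S2,  S_vec @ eta @ S_vec - D_vec @ eta @ D_vec)   # = S.S - D.D
assert np.isclose(S2s, S_vec @ eta @ D_vec)                          # = S.D
print("orthogonal split and Casimir identities verified")
```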
Its main drawback – which will make it useless for our purposes – is that it is not compatible with a covariant Hamiltonian formulation, since it is not preserved by the covariant Poisson bracket structure that we will derive in the next part of the thesis. §.§.§ Mathisson-Pirani (MP) The Mathisson-Pirani <cit.> SSC reads v_μS^μν=0. Mathisson-Pirani spin supplementary condition Unlike the CP/NW conditions, it does not depend upon exterior structures. It corresponds to computing the moments in a frame 𝒱^μ∝ v^μ with respect to which the COM of the body is at rest. However, the spin tensor now contains four independent degrees of freedom, since the MP condition can be shown to hold provided that S^μν is degenerate in some timelike direction. Moreover, using this SSC, many initial data can give rise to the same physical situation, see <cit.> for a detailed discussion. §.§.§ Kyrian-Semerák (KS) The KS condition provides an SSC for which the linear momentum is proportional to the four-velocity, p^μ=μ v^μ. It reads w_μS^μν=0 Kyrian-Semerák spin supplementary condition where w^μ is an arbitrary unit timelike vector which is parallel-transported along the worldline, Dw^μ/dλ=0. Physically, it corresponds to first choosing an arbitrary frame for computing the moments, and then parallel-transporting this frame along the worldline. The spin tensor is left with four independent degrees of freedom. The arbitrariness of the frame can be a problem, see <cit.> for details. §.§.§ Tulczyjew-Dixon (TD) The TD condition reads <cit.> p_μS^μν=0. Tulczyjew-Dixon spin supplementary condition Using this SSC, the components of the spin tensor and the linear momentum are no longer all linearly independent, thus adding fewer dimensions to the spin sector of the phase space. This is the spin supplementary condition that we will use in the continuation of this thesis, since it is the one that provides the greatest number of advantages for our purposes, see Table <ref>. The main benefits of using the TD SSC were nicely summarized by P. Ramond <cit.>. We briefly summarize the main ones here: * The TD SSC is compatible with a covariant Hamiltonian formulation (like the MP and KS ones, but unlike the CP/NW one), which will be of prime importance for studying the associated Hamilton-Jacobi equation in Chapter <ref>; * It does not require any background or external structure in its formulation, but only depends upon the intrinsic properties of the body; * It reduces the number of degrees of freedom by the largest amount. We are left with only two DOFs for parametrizing the spin tensor, as for the CP/NW SSCs; * The TD condition is the only SSC that has been proven to define a unique center of mass worldline for the test body, see <cit.>; * It provides us with a dynamical mass μ which is conserved at pole-dipole order; * Finally, it is the spin supplementary condition which naturally arises when deriving the MPD equations from the perspective of Harte's generalized Killing vectors. This originates from the uniqueness of the worldline that the TD SSC defines. We end our short guided tour of the SSC zoo here. We shall consider SSCs again from the Hamiltonian perspective in the last part of the thesis. Chapter <ref> will derive more identities and properties that hold when the TD SSC is obeyed.
CHAPTER: BEYOND THE POLE-DIPOLE APPROXIMATION: SPIN-INDUCED MULTIPOLES We now have in our possession a closed system of equations for describing the motion of extended test bodies in a generic curved spacetime, up to quadrupole order included. However, as already mentioned in Chapter <ref>, unlike the linear momentum and the spin dipole, which both play the role of dynamical variables, the quadrupole moment J^μνρσ must be prescribed. The choice of this prescription encodes the various physical effects leading to the appearance of a quadrupole moment, and depends on the internal structure of the body. Because our main concern is the description of a binary black hole system, we will only consider here the so-called spin-induced quadrupole, that is, the quadrupole moment induced by the rotation of the test body. Other types of effects, such as tidal deformations <cit.>, will be discarded here, since Kerr black holes can be shown to exhibit zero tidal deformability, see e.g. <cit.>. This choice is also consistent with our underlying idea of understanding the motion of spinning test bodies as a perturbative expansion in 𝒮 built upon the geodesic motion, since the presence of tidal-type quadrupole terms would prevent us from recovering the geodesic equations by taking the 𝒮→ 0 limit of the MPD equations. As has been developed and applied in a number of works using various techniques (see e.g. <cit.>), the form of J^μνρσ appropriate to describe a spin-induced quadrupole moment is given (in a reparametrization invariant form) by J^μνρσ=3κ p·v/(p^2)^2 p^[μS^ν]λS^[ρ_λp^σ]+𝒪(𝒮^4). Spin-induced quadrupole moment At 𝒪(𝒮^2), it is therefore unique up to a response coefficient κ which controls the magnitude of the quadrupolar deformation, proportional to the square of the spin. Typical values of κ for a neutron star are in the range 4 to 8 <cit.>, while for a Kerr black hole κ is exactly equal to one, κ_BH=1. A broader introduction to the history of quadrupole and higher moments as well as their relations with the PN expansion can be found in <cit.>. Actually, as implicitly stated in Eq. (<ref>), this quadrupole moment is not exact, but admits corrections that scale as 𝒪(𝒮^4), which are consequently discarded in the present text. As will be shown in the discussion, two of our main perspectives on the approximation scheme used to truncate the multipole expansion (either considering it as a truncation in the number of derivatives of the Riemann tensor allowed in the Lagrangian, or as a truncation of the expansion in the spin magnitude) will prove to be consistent with one another when only spin-induced multipole moments are taken into account. In this short chapter, we aim to provide a pedagogical, self-contained derivation of the spin-induced quadrupole, expanding on the procedure described in <cit.>. We will always assume that the Tulczyjew-Dixon spin supplementary condition holds, and consider the background metric to be Ricci-flat. Notice that, choosing the evolution parameter to be the proper time of the body, Eq. (<ref>) reduces to the simpler expression J^μνρσ= -3κ/μ v^[μΘ^ν][ρv^σ]+𝒪(𝒮^4), where Θ^αβ≜ S^αλS^β_λ. § SPIN-INDUCED MULTIPOLES FROM DIMENSIONAL ANALYSIS We aim to derive the most generic form of the spin-induced quadrupole term appearing in the MPD equations (<ref>) and (<ref>). Recall that the quadrupole tensor is defined in the Lagrangian formulation as J^αβγδ≜ -6 ∂L/∂R_αβγδ.
This can be straightforwardly extended to the 2^l+2-pole moment, which will be defined as J^αβγδμ_1…μ_l≜ C_l ∂L/∂(∇_μ_1…∇_μ_lR_αβγδ), where C_l is an arbitrary normalization constant (for the quadrupole, C_0=-6). Without loss of generality, we can assume that it originates from a term present in the Lagrangian (<ref>) taking the form L∋1/C_l∇_μ_1…∇_μ_lR_αβγδJ^αβγδμ_1…μ_l. Notice that, from its very definition, it is clear that the tensor J^αβγδμ_1…μ_l should possess the same algebraic symmetries as the Riemann tensor in its first four indices αβγδ. Thanks to dimensional analysis, we will constrain the form of such multipole tensors. We will always consider spin-induced multipoles, in the sense that they can only depend on p^μ, v^μ, S^μν, g_μν and R_αβγδ, and that they should vanish when the spin itself vanishes. Under the Tulczyjew-Dixon spin supplementary condition, the relation between four-velocity and linear momentum allows us to replace the dependence on the latter by a dependence on the mass μ. We will subsequently show that the form of this spin-induced quadrupole tensor is necessarily given by Eq. (<ref>), that is, it is unique at 𝒪(𝒮^2) up to an overall coefficient depending on the body's nature. §.§.§ Dimensional analysis To further constrain the form of the spin-induced quadrupole, we will use dimensional analysis. Restoring the dimensional character of the gravitational constant but still setting the speed of light to one (G≠1, c=1), units of length are still equal to units of time, but not to units of energy (which are the same as units of mass), i.e. L=T≠ E=M. The dimensions of the relevant quantities for what follows are given by <cit.> [g_μν]=[v^μ]=1, [μ]=[p^μ]=[L]=M, [∇_μ]=L^-1, [Rμνρσ]=L^-2, [S^μν]=ML, [J^αβγδμ_1…μ_l]=ML^l+2. As stated above, spin-induced multipole moments should only depend on the mass, the four-velocity, the spin tensor, the metric and the Riemann tensor. In this text, we will by assumption exclude any non-trivial dependence on the Levi-Civita tensor, since this would break parity <cit.>. Schematically (i.e. without writing explicitly the various contractions between indices), we can assume[Actually, this only amounts to assuming that the corresponding term of the Lagrangian is smooth. It consequently admits an expansion in an integer power series, which is nothing but a polynomial containing infinitely many terms. The only important point is that these terms should take the form given in Eq. (<ref>).] that Eq. (<ref>) will take the form of a polynomial whose terms look like μ^N_1[(∇_λ)^N_2(Rμνρσ)^N_3] (S^αβ)^N_4(v^κ)^N_5, with N_1∈ℤ, N_2,N_3,N_4,N_5∈ℕ and N_4>0. These choices for the coefficients N_i can be understood as follows: * We allow for negative powers of the mass μ to appear, in order to control the dimension of the expression; * Since we are considering spin-induced multipoles, they shall consistently tend to zero when 𝒮→ 0, yielding N_4>0. Moreover, the metric does not appear in Eq. (<ref>) since it is viewed as a dimensionless quantity that is only used to lower and raise indices. Performing a dimensional analysis of the expression Eq. (<ref>) with the dimensions given in Eq. (<ref>), it is easy to show that the following relations between the power coefficients hold: N_1=1-N_4, N_3=(N_4-N_2)/2, with N_2, N_4 and N_5 left arbitrary. The parameter N_2 controls the number of covariant derivatives of the Riemann tensor appearing in the expression, that is, the order of the approximation from the Lagrangian perspective.
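These relations are easy to scan systematically. The short sketch below is a purely combinatorial illustration of the counting: the symbols D, R and S stand for ∇, the Riemann tensor and the spin tensor, only terms that actually couple to the curvature (N_3 ≥ 1) are kept, and N_5 is left free, as discussed in the next paragraph.

```python
# Enumerate the schematic terms mu^N1 (D^N2 R^N3) S^N4 v^N5 allowed by the
# dimensional analysis, using N1 = 1 - N4 and N3 = (N4 - N2)/2 with N4 > 0.
for N2 in range(0, 5):              # number of covariant derivatives
    for N4 in range(1, 7):          # number of spin tensors
        if N4 < N2 or (N4 - N2) % 2:
            continue                # N3 must be a non-negative integer
        N3, N1 = (N4 - N2) // 2, 1 - N4
        if N3 == 0:
            continue                # keep only terms coupling to the curvature
        term = f"mu^{N1} " + "D" * N2 + (" " if N2 else "") + "R" * N3 + " " + "S" * N4 + " v^N5"
        print(f"N1={N1:3d}  N2={N2}  N3={N3}  N4={N4}:  {term}")
# N2=0, N4=2 reproduces the R S S v^N5 / mu term quoted in the text,
# and N2=0, N4=4 the next one, R R S S S S v^N5 / mu^3.
```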
The number of copies of the four-velocity appearing in the expression is always left unconstrained from the dimensional analysis viewpoint, since it is a dimensionless quantity. As we will show in the next section, this N_5 will actually be constrained by the number of non-equivalent, non-trivial contractions that can be made between the indices of Eq. (<ref>). The first non-trivial solutions to our dimensional analysis are schematically depicted in Table <ref>. We directly notice that, in the quadrupole approximation (that is, for N_2=0), the non-trivial term containing the smallest number of copies of the spin tensor corresponds to N_3=1, N_1=-1 and N_4=2, which thus scales as RSSv^N_5/μ. All the other consistent terms will include at least four copies of the spin tensor (the next one being RRSSSSv^N_5/μ^3), and they can consequently be discarded in our analysis, since we are only interested in (at most) quadratic terms in the spin. § EXPLICIT FORM OF THE SPIN-INDUCED QUADRUPOLE We will now focus on the 𝒪(𝒮^2) contribution to the quadrupole, which reads schematically 1/μRμνρσS^αβS^γδv^α_1… v^α_N_5. Finding the most generic expression for this spin-induced quadrupole amounts to listing all the independent, non-trivial contractions that can be made between the free indices of Eq. (<ref>). Since we only consider Ricci-flat spacetimes, the Riemann tensor reduces to its trace-free part, namely the Weyl tensor <cit.>. Consequently, contracting its indices together yields a vanishing contribution. Moreover, it is not possible to contract the four-velocity with the spin tensor, since the TD SSC enforces the relation v_μ S^μν=𝒪(𝒮^3) to hold. Finally, contracting the four-velocity with itself leads to trivial contributions, since v_μ v^μ=-1 when the evolution is parametrized by the proper time. At the end of the day, we are only left with three (seemingly independent) combinations of the desired form that produce a well-defined scalar that can contribute to the Lagrangian: L_quad=C_1/μR_αβγδv^αΘ^βγ v^δ+ C_2/μR_αβγδS^αβS^γδ+C_3/μR_αβγδS^αγS^βδ+𝒪(𝒮^4), with C_1,C_2,C_3∈ℝ. We will now prove that these three terms are not independent and that there is only one relevant combination that can be written out of them. First, from the algebraic Ricci identity R_α[βγδ]=0, one has R_αβγδS^αβS^γδ=2R_αβγδS^αγS^βδ. The last two terms of Eq. (<ref>) are consequently equivalent. Moreover, the Weyl tensor C_μνρσ can be shown to obey the identity <cit.> C_αβγδ=4g_ρ[αC_β]μν[γδ_δ]^ρ v^μ v^ν+2C_αβμ[γv_δ]v^μ+2C_γδμ[αv_β]v^μ. Therefore, one can show that C_αβγδS^αγS^βδ=-2C_αβγδv^αΘ^βγv^δ. The quadrupole term of the Lagrangian then reduces to L∋ L_quad=[C_1-2(2C_2+C_3)]/μ R_αβγδv^αΘ^βγ v^δ+𝒪(𝒮^4), which gives rise to the spin-induced quadrupole tensor announced in Eq. (<ref>), provided we rename the overall constant C_1-2(2C_2+C_3)≜ 3κ in order to agree with the conventions widely used in the literature. One can check explicitly that this spin-induced quadrupole tensor obeys all the algebraic symmetries of the Riemann tensor. We have therefore demonstrated that the form of the spin-induced quadrupole is unique, up to an overall coupling coefficient κ whose numerical value depends on the nature of the test body. Discussing the value taken by κ goes beyond the scope of this thesis, and is actually quite involved, see e.g. <cit.> and references therein.
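Both algebraic facts used above, the contraction identity following from R_α[βγδ]=0 and the Riemann-type symmetries of the resulting quadrupole tensor, can be checked numerically. In the sketch below, the random tensor with Riemann symmetries is built as a sum of products of symmetric matrices (an assumption made purely to generate a valid test tensor, not a physical curvature), and the flat-metric setting and κ=1 are illustrative choices.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
rng = np.random.default_rng(3)

# random tensor with all the algebraic symmetries of the Riemann tensor
R = np.zeros((4, 4, 4, 4))
for _ in range(2):
    M = rng.normal(size=(4, 4)); M = M + M.T
    R += np.einsum('ac,bd->abcd', M, M) - np.einsum('ad,bc->abcd', M, M)
assert np.allclose(R + R.transpose(0, 2, 3, 1) + R.transpose(0, 3, 1, 2), 0)  # R_{a[bcd]} = 0

# spin tensor obeying the (leading-order) TD SSC p_mu S^{mu nu} = 0, with p = mu v
mu, kappa = 1.0, 1.0
u = rng.normal(size=3) * 0.3
v = np.array([np.sqrt(1 + u @ u), *u])
Pi = np.eye(4) + np.outer(v, eta @ v)
A = rng.normal(size=(4, 4)); S = Pi @ (A - A.T) @ Pi.T

# contraction identity  R_{abcd} S^{ab} S^{cd} = 2 R_{abcd} S^{ac} S^{bd}
lhs = np.einsum('abcd,ab,cd->', R, S, S)
rhs = 2 * np.einsum('abcd,ac,bd->', R, S, S)
assert np.isclose(lhs, rhs)

# spin-induced quadrupole  J^{abcd} = -(3 kappa/mu) v^[a Theta^b][c v^d]
Theta = np.einsum('al,bm,lm->ab', S, S, eta)
J = np.einsum('a,bc,d->abcd', v, Theta, v)
J = 0.5 * (J - J.transpose(1, 0, 2, 3))        # antisymmetrize [a b]
J = 0.5 * (J - J.transpose(0, 1, 3, 2))        # antisymmetrize [c d]
J = -3 * kappa / mu * J

assert np.allclose(J, J.transpose(2, 3, 0, 1))                                # pair symmetry
assert np.allclose(J + J.transpose(0, 2, 3, 1) + J.transpose(0, 3, 1, 2), 0)  # cyclic identity
print("contraction identity and Riemann-type symmetries of J verified")
```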
PART: Test Bodies in Kerr Spacetime: Conserved Quantities We now have in our possession a system of equations for describing the motion of spinning test bodies in any curved spacetime, the MPD equations. In the continuation of this text, and for the reasons discussed in Chapter <ref>, we will always supplement them by the Tulczyjew-Dixon condition. The total system of equations obtained will be referred to as the MPTD equations. The MPTD equations take the form of a set of first order differential equations for the linear momentum p^μ and the spin S^μ. However, the very goal of anyone wanting to solve the MPTD equations is to obtain the position of the test body as a function of the proper time, z^μ(τ). With respect to the positions z^μ, the MPTD equations are a set of second order differential equations. As is always the case when studying such a dynamical system, much information can be obtained if one is able to build first integrals of motion (or invariants, or conserved quantities), i.e. functions of the dynamical variables 𝒬(p^α,S^α) that are constant along the motion: 𝒬̇(p^α,S^α)=0, where the overdot stands for the derivative with respect to the proper time τ. As always in General Relativity, the existence of first integrals of motion will be strongly related to the presence of symmetries of the background spacetime (i.e. the existence of Killing vectors or Killing(-Yano) tensors). Building such conserved quantities will be the main concern of this part of the thesis. Because what we have in mind is the description of the dynamics of extreme mass-ratio inspirals, we will treat the problem perturbatively, order by order in the spin magnitude 𝒮. The conservation equation will thus be required to hold only up to some given order in 𝒮, giving rise to quantities which are only “quasi-conserved”. Nevertheless, we will often refer to them as conserved quantities. The reader should keep this remark in mind for what follows. We will push our analysis up to second order in 𝒮. This is the first order at which the universal character of the MPTD dynamics is broken, since the explicit form of the quadrupole moment will depend upon the internal structure of the test body. Moreover, we will take the force and torque terms to originate only from the spin-induced quadrupole moment introduced in Chapter <ref>. This choice provides an exact description when the secondary object is a black hole, since in that case it does not exhibit tidal deformability, see e.g. <cit.> and references therein. However, when the secondary is another type of compact object (e.g. a neutron star), tidal-type contributions will also arise at quadrupole order, see e.g. <cit.>. We expect these contributions to break the conservation of the so-called “hidden” constants of motion discussed below. As we will see, this expectation will turn out to be consistent with our findings, since such constants of motion will be shown to exist only for the spin-induced quadrupole coupling κ taking the value κ=1, which is precisely the one expected for a black hole. The various conserved quantities that can be built are summarized in Figure <ref>, and briefly discussed below. Most of them will take the form of (perturbative) deformations of the quantities conserved along geodesic motion. §.§.§ “Explicit” constants of motion As has been proven in Chapter <ref>, the spin magnitude 𝒮 is exactly conserved for any choice of spin supplementary condition. Moreover, at linear order in 𝒮, the dynamical mass μ^2=-p_α p^α is also invariant.
At second order in 𝒮, μ is no longer conserved, but we can still define a mass-like quantity μ̃ which is quasi-conserved, see Section <ref>. As shown by Dixon <cit.>, and as was central to his construction of the multipole moments, if the background has a Killing vector ξ^μ, then the quantity 𝒞_ξ=p_μξ^μ+1/2S^μν∇_μξ_ν is exactly conserved along any worldline when p^μ and S^μν are evolved by the MPD equations, to all orders in the multipole expansion, for arbitrary quadrupole and higher moments. (See also, e.g., the earlier derivation by Souriau for the pole-dipole system <cit.>, and the insightful exposition by Harte <cit.>.) Specializing to a background Kerr spacetime, the existence of its two Killing vector fields leads to the appearance of two conserved quantities, denoted ℰ and ℒ. They naturally take the interpretation of the energy and the (projection onto the black hole axis of the) angular momentum of the test body. One is then naturally led to wonder whether the hidden symmetry of Kerr spacetime leads to conserved quantities for the MPD dynamics, including a generalization of the Carter constant to the case of spinning extended test bodies. §.§.§ “Hidden” constants: to linear order in 𝒮 This question was answered for the case of the pole-dipole MPTD equations by R. Rüdiger <cit.>. First, he showed that the quantity 𝒬_Y=Y^*_μνS^μν built from the Killing-Yano tensor Y_μν of Kerr is conserved, up to remainders quadratic in the spin tensor and quadrupolar corrections. Here Y^*_μν≜1/2ϵ_μναβY^αβ is the dual of the Kerr Killing-Yano tensor Y^μν. He further showed that there is indeed a generalized Carter constant of the form 𝒬_R=YμλY_νλp^μ p^ν+4 ξ^λϵ_λμσ[ρYσν]S^μνp^ρ which is conserved along the MPTD equations, up to remainders quadratic in the spin tensor and quadrupolar corrections that were not determined. A generalization of this result to the Kerr-Newman (charged spinning black hole) spacetime was independently discovered by Gibbons et al. <cit.> using a supersymmetric description of spinning particle dynamics. The existence of these constants of motion has been shown by Witzany to allow the separation of a Hamilton-Jacobi equation for the pole-dipole system in Kerr, leading to analytic expressions for the fundamental frequencies of the motion <cit.>, using a Hamiltonian formalism for spinning test bodies <cit.>. The unanswered question of the existence of other independent quasi-conserved quantities for the MPTD equations left undetermined the status of integrability and, by opposition, chaos, in the dynamics of spinning test bodies around Kerr. While chaos has been established to appear at second order in the spin <cit.>, numerical simulations suggest that no chaos occurs at linear order in the spin <cit.>. From the solution of the Hamilton-Jacobi equations at linear order in the spin, one can infer that chaotic motion at linear order is negligible <cit.>. In <cit.>, we have related the existence of new quasi-conserved quantities homogeneously linear in the spin to the existence of a new tensorial structure on the background, which we will refer to as a mixed-symmetry Killing tensor. This result applies to the Kerr background and more generally to Ricci-flat spacetimes admitting a Killing-Yano tensor.
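Dixon's quantity 𝒞_ξ lends itself to a quick sanity check in flat spacetime, where the pole-dipole MPD equations reduce to dp^μ/dτ=0 and dS^μν/dτ=2p^[μv^ν] and the Lorentz generators ξ^μ=ω^μ_νx^ν are Killing vectors. In the sketch below, the worldline velocity is an arbitrary toy choice (the conservation argument only uses the two evolution equations), and all numerical inputs are illustrative assumptions.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
rng = np.random.default_rng(4)

w_dn = rng.normal(size=(4, 4)); w_dn = w_dn - w_dn.T    # omega_{mu nu}, antisymmetric
w_ud = np.linalg.inv(eta) @ w_dn                        # omega^mu_nu
p  = np.array([1.2, 0.1, -0.3, 0.2])                    # constant linear momentum
x0 = rng.normal(size=4)
S0 = rng.normal(size=(4, 4)); S0 = S0 - S0.T            # initial spin tensor
a  = np.array([0.0, 0.05, -0.02, 0.03])                 # small wobble of the velocity (toy)

def state(tau):
    # v(tau) = p + a cos(tau) integrates in closed form
    x = x0 + p * tau + a * np.sin(tau)
    S = S0 + np.outer(p, x - x0) - np.outer(x - x0, p)  # S0 + int 2 p^[mu v^nu] dtau'
    return x, S

def C_xi(tau):
    x, S = state(tau)
    xi = w_ud @ x                                       # xi^mu = omega^mu_nu x^nu
    grad_xi = w_dn.T                                    # grad_mu xi_nu = omega_{nu mu}
    return (eta @ p) @ xi + 0.5 * np.einsum('mn,mn->', S, grad_xi)

ref = C_xi(0.0)
print(max(abs(C_xi(t) - ref) for t in (0.7, 3.1, 12.5, 40.0)))   # ~1e-14: conserved
```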
We have demonstrated that, under the assumption of stationarity and axisymmetry, no such non-trivial structure exists on the Schwarzschild background, and no non-trivial mixed-symmetry Killing tensor on Kerr can be constructed from deformations of trivial mixed-symmetry Killing tensors on Schwarzschild. §.§.§ “Hidden constants”: quadratic order in 𝒮 In <cit.>, we have explored whether such “hidden constants” exist for test bodies with spin-induced quadrupole moments moving in a Kerr background. As the central results of that paper, we established that two quantities, 𝒬_Y and 𝒬^(2)_BH, are conserved up to cubic-in-spin or octupolar corrections, d𝒬_Y/dτ=𝒪(𝒮^3), d𝒬^(2)_BH/dτ=𝒪(𝒮^3), along the motion of a “quadrupolar test black hole”, governed by the MPTD equations endowed with the spin-induced quadrupole term (<ref>), with κ=1, in a background Kerr spacetime, for arbitrary orbital and spin orientations. The first quantity 𝒬_Y is Rüdiger's linear-in-spin constant, unmodified. The second quantity 𝒬^(2)_BH is quadratic in p^μ and S^μν and generalizes Rüdiger's constant 𝒬_R to the quadrupolar order for a test black hole; it is given explicitly by 𝒬^(2)_BH = YμλY_νλp^μ p^ν+4 ξ^λϵ_λμσ[ρYσν]S^μνp^ρ +[ g_μρ(ξ_νξ_σ-1/2g_νσξ^2)-1/2Y_μ^λ(Y_ρ^κ R_λνκσ+1/2Y_λ^κ R_κνρσ) ]S^μνS^ρσ. §.§.§ Plan of the text This part of the text is structured as follows: Chapter <ref> will review the MPTD equations endowed with the spin-induced quadrupole term, and derive a number of useful identities. Rüdiger's procedure for building conserved quantities by explicitly solving the conservation equations will then be discussed, and illustrated on two simple examples. Subsequently, Chapters <ref> and <ref> will apply this procedure at first and second order in the spin magnitude 𝒮, respectively. We warn the reader that these last two chapters consist of long and very technical derivations, which lead only to the results exposed in this introduction. CHAPTER: BUILDING CONSERVED QUANTITIES This chapter aims to introduce, in a somewhat pedagogical way, the general scheme introduced by R. Rüdiger in the early eighties <cit.> for finding quantities that are conserved along the motion of a test body driven by the MPD equations. Even if it is conceptually simple, the actual computations required by this scheme turn out to be very cumbersome. This is the reason why this chapter introduces the procedure conceptually, before applying it to the two simplest examples one can find: generic conserved quantities for geodesic motion on a generic curved background, and linear (in the sense defined in Section <ref>) conserved quantities for the linearized MPD equations. The “real” computations will be deferred to the two following chapters. § REVIEW: MPD EQUATIONS WITH SPIN-INDUCED QUADRUPOLE UNDER THE TULCZYJEW-DIXON SSC Before tackling this program, let us recall that the MPD equations only form a closed system of equations when a spin supplementary condition is enforced, see Chapter <ref>. In this part of the thesis, we will always consider the MPD equations endowed with the spin-induced quadrupole term and supplemented by the Tulczyjew-Dixon SSC. The full system of equations will be referred to as the MPTD equations. This section aims to review these equations and their properties, introduce some useful notations, and derive relations that will become of prime importance for the continuation of this work.
This section is somehow redundant with Part <ref> of the text, but aims to gather all the results relevant for the forthcoming computations. §.§ Mathisson-Papapetrou-Dixon equations As it has been extensively discussed in Part <ref> of the thesis, the motion of an extended test body over a curved background is described – in General Relativity – by the Mathisson-Papapetrou-Dixon (MPD) equations Dp^μ/τ =-1/2Rμναβv^ν S^αβ+ℱ^μ, DS^μν/τ =2 p^[μv^ν]+ℒ^μν. where we defined the tangent vector to the worldline as v^μ≜x^μλ (λ being any affine parameter) and where λ≜ v^μ∇_μ is the covariant derivative along the worldline. ℱ^μ and ℒ^μν are force and torque terms that shall be specified, depending upon the multipole moments the body is described with. We also introduce the notations 𝔪 ≜ -p^μ v_μ, μ^2 ≜-p^μ p_μ, 𝒮^2 ≜1/2S^μνS_μν. Here, μ^2 is the dynamical rest mass of the object, i.e. the mass of the object measured by an observer in a frame where the spatial components of the linear momentum p^i do vanish; 𝔪 will be referred to as the kinetic mass and 𝒮 as the spin magnitude. At this point, let us emphasize that the linear momentum is no aligned with the velocity, and thus not tangent to the worldline, since contracting Eq. (<ref>) with v^ν yields p^μ=1/v^2(v_αS^μαλ-𝔪 v^μ). The dynamical and kinetic masses are in general not constants of motion: in fact, one can show that 𝔪λ =-1/v^2v_αλ v_βS^αβλ, μλ =-1/μ 𝔪p_αp_βλS^αβλ. As discussed in Chapter <ref>, the spin magnitude is exactly conserved for any choice of multipoles: (𝒮^2)λ=0. §.§ The spin supplementary condition We supplement the MPD equations by enforcing the covariant Tulczyjew-Dixon spin supplementary condition S^μνp_ν=0. The TD SSC forms a set of three additional constraints since a contraction with p_μ leads to a trivial identity. In what follows, we will choose the affine parameter driving the evolution as the body's proper time, λ=τ. This enforces the four-velocity to be normalized, v^μ v_μ=-1, which thereby guarantees its timelike nature along the evolution of the system. These conditions consequently close our system of equations which is then call the MPTD equations. These conditions fix uniquely the worldline and allow to invert Eq. (<ref>) in order to express the four-velocity as a function of the linear momentum. §.§ Spin-induced quadrupole approximation We will consider only spin-induced multipole moments and work in the quadrupole approximation, i.e. neglecting octupole and higher moments. This is the relevant approximation for addressing spin-squared interactions: considering only spin-indu-ced multipole terms, the 2^n-pole scales as 𝒪(𝒮^n), with 𝒮^2≜1/2S_μνS^μν. At the level of the equations of motion, this corresponds to choose the force and torque given by ℱ^μ =-1/6J^αβγδ∇^μ R_αβγδ, ℒ^μν=4/3R[μαβγJ^ν]αβγ. The quadrupole tensor J^μνρσ possesses the same algebraic symmetries as the Riemann tensor. We further particularize our setup by considering only a quadrupole moment that is induced by the spin of the body, discarding the possible presence of some intrinsic quadrupole moment. This spin-induced quadrupole was shown to take the form given in Eq. (<ref>), see e.g. <cit.>. Specialized to the case v_α v^α=-1 and at leading order it reduces to J^μνρσ=3κ/μv^[μS^ν]λSλ[ρv^σ] = -3κ/μ v^[μΘ^ν][ρv^σ], where Θ^αβ≜ S^αλSβλ. Here κ is a free coupling parameter that equals 1 for a Kerr black hole and takes another value if the test-body is another compact object, e.g. a neutron star. 
§.§ The spin vector Having imposed the Tulczyjew-Dixon condition, all the information contained in the spin-dipole tensor can be recast into a spin vector S^μ. Indeed, if one defines[The convention chosen here differs from the one of <cit.> by a global `-' sign, but agrees with our previous publications <cit.>.] S^α≜1/2ϵ^αβγδp̂_β S_γδ where p̂^μ≜p^μ/μ (yielding p̂^2=-1), one can invert the previous relation in order to rewrite S^αβ in terms of S^α: S^αβ=-ϵ^αβγδp̂_γ S_δ. This can be easily checked thanks to the identity <cit.> ϵ^α_1…α_jα_j+1…α_nϵ_α_1…α_jβ_j+1…β_n=-(n-j)! j! δ^[α_j+1…α_n]_β_j+1…β_n which is valid in any n-dimensional Lorentzian manifold. We make use of the shortcut notation δ^μ_1…μ_N_ν_1…ν_N≜δ^μ_1_ν_1…δ^μ_N_ν_N. By definition, the spin vector is automatically orthogonal to the linear momentum: p_μ S^μ=0. Finally, also notice that the spin parameter is simply the squared norm of the spin vector, 𝒮^2=S^α S_α . §.§ Conservation of the spin, mass; relation between four-velocity and linear momentum Differentiating the SSC Eq. (<ref>) yields μ^2 v^μ-𝔪 p^μ=1/2S^μνR_νλρσv^λ S^ρσ-ℒ^μνp_ν- S^μνℱ_ν. Contracting this equation with v_μ provides us with μ^2=𝔪^2+𝒪(𝒮^3). The expression of the linear momentum in terms of the four-velocity reads p^μ=μ v^μ-1/2μS^μνR_νλρσv^λ S^ρσ+ℒ^μνv_ν+𝒪(𝒮^3). In the quadrupole approximation, the dynamical mass μ is no longer conserved at 𝒪(𝒮^2), since μτ=-v_μℱ^μ+𝒪(𝒮^3). However, notice that, provided we assume[This condition is automatically satisfied for the spin-induced quadrupole.] D/τJ^αβγδ=𝒪(𝒮^3), one can still define a mass-like quantity, given by μ̃≜μ-1/6J^αβγδR_αβγδ, which is quasi-conserved independently of the spin, namely μ̃τ=𝒪(𝒮^3). Moreover, one can perturbatively invert (<ref>) to obtain an expression of the four-velocity in terms of the linear momentum and the spin: v^μ = p̂^μ +(D^μν-1/μℒ^μν)p̂_ν+𝒪(𝒮^3), with p̂^α≜p^α/μ=p^α/μ̃+𝒪(𝒮^2), Dμν ≜1/2μ^2S^μλR_λνρσS^ρσ. Eq. (<ref>) will play a central role when we will work out the conservation equations in the following chapters. §.§ Projector and Hodge dualities: useful relations §.§.§ Projector on the hypersurface orthogonal to p^μ We introduce Π^μ_ν≜δ^μ_ν+p̂^μp̂_ν, the projector onto the hypersurface orthogonal to p^μ. It can be directly checked from the definition that Π^μ_ν satisfies the properties: (I) Π^μ_ν Π^ν_ρ=Π^μ_ρ, (II) Π^μ_ν p^ν=0, (III) Π^μ_α S^αν=S^μν. §.§.§ Hodge duality Given any tensor 𝐀, one can define the left and the right Hodge duals as ^*A_μνα_1…α_p ≜1/2ϵμνρσA_ρσα_1…α_p, A^*_α_1…α_pμν ≜1/2ϵμνρσA_α_1…α_pρσ. They correspond, respectively, to the Hodge dualization on the two first, resp. the two last, indices of 𝐀. The definition of the bidual ^* 𝐀^* follows directly from the definitions above. Finally, given the product of two antisymmetrized vectors, one can similarly define l^[μm^ν]*≜1/2ϵ^μνρσl_ρ m_σ. The dual tensors obey the following properties: (I) A^*_μν=^*A_μν, (II) A^**_μν=^**A_μν=-A_μν, (III) ^*A_[μν]B^[μν]=A_[μν]^*B^[μν]. Moreover, for any geometry we have ^*Rαμαν=1/2ϵαββγR_[βγα]ν=0. Notice also that Bianchi's identities R_αβ [μν ; σ]=0 can be equivalently written as R*αβγσ;σ= 0. §.§.§ Identities for the MPTD equations Let us turn to the more specific context of the MPTD theory. Using all the previous definitions, it is not complicated to check that the following properties hold: (I) S^αβ=2S^[αp̂^β]*, (II) Π^α[βS^γ]*=S^α[βp̂^γ]. One can show that the first identity implies the following decomposition for Θ^αβ: Θ^αβ =Π^αβ𝒮^2-S^α S^β. 
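The dictionary between S^μν and S^α under the TD SSC, as well as the decomposition of Θ^αβ just stated, can be checked numerically. The sketch below uses mostly-plus Minkowski components and the convention ϵ_0123=+1, which are assumptions of the illustration; a different ϵ convention flips the sign of S^α but not the mutual consistency of the two formulas.

```python
import numpy as np
from itertools import permutations

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def perm_sign(p):
    s, q = 1, list(p)
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            if q[i] > q[j]:
                s = -s
    return s

eps_dn = np.zeros((4, 4, 4, 4))                      # eps_{0123} = +1 (convention)
for p in permutations(range(4)):
    eps_dn[p] = perm_sign(p)
eps_up = np.einsum('ae,bf,cg,dh,efgh->abcd', eta, eta, eta, eta, eps_dn)

rng = np.random.default_rng(5)
mu = 1.3
u = rng.normal(size=3) * 0.4
phat_up = np.array([np.sqrt(1 + u @ u), *u])         # unit timelike p-hat^mu
phat_dn = eta @ phat_up
p_up = mu * phat_up

# spin tensor obeying the TD SSC, built by projecting a random antisymmetric tensor
Pi = np.eye(4) + np.outer(phat_up, phat_dn)          # Pi^mu_nu = delta + phat^mu phat_nu
A = rng.normal(size=(4, 4)); S_up = Pi @ (A - A.T) @ Pi.T
assert np.allclose(eta @ p_up @ S_up, 0)             # p_mu S^{mu nu} = 0

S_dn = eta @ S_up @ eta
S_vec = 0.5 * np.einsum('abcd,b,cd->a', eps_up, phat_dn, S_dn)     # S^alpha
S_back = -np.einsum('abcd,c,d->ab', eps_up, phat_dn, eta @ S_vec)  # -eps phat S
assert np.allclose(S_back, S_up)                     # round trip recovers S^{mu nu}
assert np.isclose(S_vec @ eta @ S_vec, 0.5 * np.einsum('ab,ab->', S_dn, S_up))  # S^2

Theta = np.einsum('al,bm,lm->ab', S_up, S_up, eta)   # Theta^{ab} = S^{a l} S^b_l
Pi_upup = np.linalg.inv(eta) + np.outer(phat_up, phat_up)           # Pi^{ab}
S2 = S_vec @ eta @ S_vec
assert np.allclose(Theta, Pi_upup * S2 - np.outer(S_vec, S_vec))
print("spin vector round trip and Theta decomposition verified")
```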
Moreover, one has the identity Dμαp^α =1/2μS^μ[νp̂^α]R_ναρσS^ρσ =1/μΠ^μ[νS^α]*R_ναρσS^[ρp̂^σ]* =1/μΠ^μα^*R^*_αβγδS^β S^γp̂^δ, as well as ℱ^μ =κ/2μp̂^αΘ^βγp̂^δ∇^μ R_αβγδ+𝒪(𝒮^3) , ℒ^μν =2κ/μRναβγv^[μΘ^α]βv^γ-(μ↔ν), ℒ^μνv_ν =κ/μ(p̂^μp̂^ν R_ναβγ+Rμαβγ)Θ^αβp̂^γ+𝒪(𝒮^4) = κ/μΠ^μν R_ναβγΘ^αβp̂^γ +𝒪(𝒮^4). §.§ Independent dynamical variables Let us now summarize the independent dynamical variables of the MPTD system. Under the SSC (<ref>), one can write μ =μ(p^α)=√(-p_α p^α), 𝔪 = 𝔪 (p^α,S^α), S^μν =S^μν(p^α,S^α)=-ϵ^μναβp̂_α S_β, v^μ =v^μ(p^α, S^α). The explicit expression for v^μ has been worked out in Eq. (<ref>). Consequently, the system can be fully described in terms of the dynamical variables x^μ, p^μ and S^μ. However, the four components of the spin vector S^μ are not independent, since they are subjected to the orthogonality condition p^μ S_μ=0. This fact leads to complications when we will seek to build invariants, as will be detailed in Section <ref>. To overcome this difficulty, we introduce the relaxed spin vector s^α from S^α≜Π^α_β s^β where the part of 𝐬 aligned with 𝐩 is left arbitrary, but is assumed (without loss of generality) to be of the same order of magnitude. It ensures the relation 𝒪(s)=𝒪(𝒮) to hold (where s^2≜ s_α s^α). While working out the conservation constraints, one will often encounter the spin vector antisymmetrized with 𝐩, in expressions of the type p^[μS^ν]. In that case, we can directly write p^[μS^ν]=p^[μs^ν], thereby replacing the (constrained) S^α by the (independent) variables s^α. §.§ MPTD equations at linear order in the spin As discussed in the introduction, the complexity of the constraint equations will motivate us to proceed perturbatively: one will first seek for solutions in the linearized theory, thus neglecting all 𝒪(𝒮^2) terms. The solutions valid a second order in 𝒮 will then be build as perturbations on the top of the previously found linear solutions. It is therefore relevant to understand how MPTD equations simplify at linear order in 𝒮. Neglecting all 𝒪(𝒮^2) terms, Eq. (<ref>) leads to the usual relation between the linear momentum and the four-velocity, p^μ=μ v^μ. Once linearized in the spin, the MPD equations (<ref>) reduce to D p^μ/τ =f^(1)μ_S≜-1/2μRμναβp^ν S^αβ, D S^μν/τ =0 ⇔ D S^μ/τ=0, which are, respectively, the forced geodesic equation with force f^(1)μ_S=𝒪(𝒮^1) and the parallel transport equation of the spin tensor/vector studied e.g. in <cit.>. § RÜDIGER'S PROCEDURE In two papers published in the early 80s, R. Rüdiger described a scheme for constructing quantities conserved along the motion driven by the MPTD equations <cit.>. The basic guideline followed in his scheme was to enforce directly the conservation equation on a generic Ansatz for the conserved quantity, and to subsequently solve the constraints obtained. We summarize here this procedure in an abstract way, before applying it to simple concrete examples in the next section. * Step 1: postulate an Ansatz for the conserved quantity. The conserved quantity should be a function of the dynamical variables p^μ and S^μ. It is therefore a function 𝒬(x^μ,S^μ,p^μ). Assuming its analyticity, it can be expanded as 𝒬(x^μ,S^α,p^μ)=∑_s,p≥0 s+p>0𝒬^[s,p](x^μ,S^α,p^μ) with 𝒬^[s,p](x^μ,S^α,p^μ) ≜𝒬^[s,p]_α_1…α_sμ_1…μ_p(x^μ)S^α_1… S^α_sp^μ_1… p^μ_p. Expressions like this one – that is, tensorial quantities fully contracted with occurrences of the linear momentum and the spin – will often appear in the following computations. 
It is useful to enable a distinction between them by introducing a grading allowing the counting of the number of occurrences of both the spin vector S^μ and the linear momentum p^μ, which is provided by the notation [s,p]. More generally, we define: A fully-contracted expression of the type T_α_1…α_sμ_1…μ_pℓ_s^α_1…ℓ_s^α_sℓ_p^μ_1…ℓ_p^μ_p where ℓ_s^α=S^α,s^α (the relaxed spin vector s^μ will be defined below) and ℓ_p^μ=p^μ,p̂^μ is said to be of grading [s,p]. Equivalently, s (resp. p) will be referred to as the spin (resp. momentum) grading of this expression. Since we have only included the quadrupole term in the equations of motion but neglected all the 𝒪(𝒮^3) terms, it is not self-consistent to look at quantities which are conserved beyond second order in the spin magnitude. We therefore restrict our analysis to Ansätze that contain terms of of spin grading at most equal to two. Historically, Rüdiger didn't consider the full set of possible Ansätze originating from this discussion, but only the two restricted cases 𝒬^(1) ≜∑_p=1Q^[s,p]≜ X_μ p^μ+W_μνS^μν, 𝒬^(2) ≜∑_p=2Q^[s,p]≜ K_μνp^μ p^ν+L_μνρS^μνp^ρ+M_μνρσS^μνS^ρσ. We will refer to them as respectively the linear and the quadratic invariants in p^μ. They are homogeneous in the number of occurrences of p^μ and S^μν they contain. As long as we consider the MPD equations at linear order in the spin magnitude or at quadratic order with the quadrupole coupling of the test body being the one of a black hole (κ=1), it turns out that considering only these two types of ansatzes will be enough to derive a complete set of conserved quantities. However, a more general ansatz will be necessary to consider arbitrary quadrupole couplings (κ≠ 1), as discussed in Section <ref>. * Step 2: write down the conservation equation. Because we always consider the MPTD equations truncated up to some order in 𝒮, it is consistent to require our quantities to be only quasi-conserved. For the MPDT equations including terms up to 𝒪(𝒮^n) included, we only require the conservation to hold up at order n+1 in the spin magnitude: 𝒬̇≜ v^λ∇_λ𝒬!=𝒪(𝒮^n+1). * Step 3: expand the conservation equation using the equations of motion. The next step is to plug the explicit form of the Ansatz chosen in the conservation equation, and to use the MPD equations (<ref>) to replace the covariant derivatives of the linear momentum and of the spin tensor. The occurrences of the four-velocity are replaced by the means of Eq. (<ref>). * Step 4: express the conservation equation in terms of independent variables. As already discussed above, the presence of the SSC make the variables p^μ, S^αβ not independent among themselves. We turn to an independent set of variables in two steps: (i) we use the relation S^αβ=2S^[αp̂^β]* to replace all the spin tensors S^μν by the spin vectors S^μ and (ii) we replace the occurrences of the spin vector by the relaxed spin vector s^α defined through S^α=Π^α_β s^β. It allows to relax the residual constraint S_μ p^μ=0 by considering a spin vector possessing a non-vanishing component along the direction of the linear momentum. Physical quantities will be independent of this component. It is introduced in order to decouple the conservation equation. For convenience, we scale the unphysical component of the relaxed spin vector such that s_α s^α∼ S_α S^α=𝒮^2. Notice that we have the useful identity S^[αp^β]=s^[αp^β] ⇒ S^αβ=2s^[αp̂^β]*. * Step 5: infer the independent constraints. 
The conservation equation takes now the form of a sum of fully-contracted expressions of the type (<ref>), involving only the independent dynamical variables p^μ and s^α: 𝒬̇=∑_s,p≥ 0 s+p>0T^[s,p]_α_1…α_sμ_1…μ_ps^α_1… s^α_s p^μ_1… p^μ_p!=𝒪(𝒮^n+1). The conservation equation is then equivalent to the requirement that all the terms of different gradings [s,p] vanish independently: T^[s,p]_α_1…α_sμ_1…μ_ps^α_1… s^α_s p^μ_1… p^μ_p!=𝒪(𝒮^n+1). s^α and p^μ being arbitrary, this is equivalent to the constraint equations T^[s,p]_(α_1…α_s)(μ_1…μ_p)!=𝒪(𝒮^n+1). * Step 6: find a solution and prove uniqueness. This final step is non-systematic. For the simplest cases (linear invariant with black hole quadrupole coupling, quadratic invariant at first order in the spin magnitude), it will be sufficient to work only with the tensorial constraints (<ref>). However, for more involved cases (e.g. quadratic invariant at second order in 𝒮), the tensorial relations will become so cumbersome that turning to another formulation of the problem will appear to be fruitful. This will be the purpose of the covariant building blocks for Kerr introduced in Section <ref>. § THE SIMPLEST EXAMPLES In this final section, we apply Rüdiger's procedure to the two simplest examples: generic polynomial invariants for geodesic motion and linear invariants for linearized MPTD equation. §.§ Generic polynomial invariants for geodesic motion In the geodesic case, the linear momentum is tangent to the worldline, p^μ=μ v^μ and geodesic equations are simply the 𝒪(𝒮^0) MPTD equations: D p^μ/τ=0, D/τ≜ v^α∇_α. In most GR textbooks and lectures, the problem is tackled from the perspective “symmetry implies conservation”: one first introduces the notion of Killing vector fields (∇_(αξ_β)=0) and then prove the well-known property stating that, given any geodesic of linear momentum p^μ and a Killing vector field ξ^μ of the background spacetime, the quantity C_ξ≜ξ_α p^α is constant along the geodesic. One also shows that this property generalizes in the presence of a Killing tensor, and that the invariant mass μ^2=-p_α p^α is also constant along the geodesic trajectory. Along a geodesic, the conservation equation for a quantity C(p^α) takes the form Ċ(p^α)=0 ⇔ p^μ∇_μ C(p^α)=0. In applying Rüdiger algorithm, we will tackle the problem in the opposite way (“conservation requires symmetry”): given an arbitrary geodesic, is it possible to construct invariants of motion that are polynomial quantities of the linear momentum, i.e. that are composed of monomials of the form C_𝐊^(n)≜ K_α_1…α_np^α_1… p^α_n where, at this point, 𝐊 is an arbitrary, by definition totally symmetric tensor of rank n? We will show that requiring the conservation of C_𝐊^(n) will require either 𝐊 to be a Killing vector/tensor or either that C_𝐊^(2) is the invariant mass. In order to work out the most general constraint on 𝐊, we plug the definition of C_𝐊 (<ref>) into the conservation equation (<ref>). Using the geodesic equation (<ref>) and relabelling the indices, one gets p^μ∇_μ K_α_1…α_n p^α_1… p^α_n=0. The crucial point is that the dynamical variables p^α are independent among themselves. The above relation must hold for any values of the independent p^α, yielding the general constraint ∇_(μK_α_1…α_n)=0. 
The only possible cases for solving this constraint are the following: * for n=1, K_μ must be a Killing vector, ∇_(μK_ν)=0; * for n=2, either K_μν must be a rank-2 Killing tensor (∇_(μK_νρ)=0), or one takes K_μν=g_μν, which leads to the conservation of the invariant mass, C_𝐠^(2)=-μ^2; * for any n≥3, 𝐊 must be a rank-n Killing tensor. Before turning to the spinning body case, let us make a couple of remarks: * As stated above, the viewpoint adopted here is reversed with respect to the `traditional' one: we have proven that the existence of conserved quantities along geodesic trajectories that are polynomial in the linear momentum requires the existence of symmetries of the background spacetime (except for the invariant mass μ, which is always conserved). * Any linear combination of the invariants defined above remains of course invariant. Nevertheless, the conservation can be checked separately at each order in 𝐩, because the application of the conservation condition (<ref>) does not change the order in 𝐩 of the terms contained in the resulting expression. * The invariant related to a Killing tensor is relevant only if the latter is irreducible, i.e. if it cannot be written as the product of Killing vectors. Otherwise, the invariant at order n in 𝐩 is just a product of invariants of lower order. §.§ Linear invariants for the linearized MPTD equations We now turn to an example which exhibits a non-trivial dependence on the spin. We will seek a quantity conserved for the linearized MPTD equations (<ref>) of the form given in Eq. (<ref>), that is 𝒬^(1)≜ X_μ p^μ + W_μνS^μν. Notice that, by construction, W_μν shall be an antisymmetric tensor. Using Eqs. (<ref>), the conservation equation 𝒬̇^(1)=𝒪(𝒮^2) becomes v^λ∇_λ X_μ p^μ+v^λ∇_λ W_μνS^μν-1/2X_μRμναβv^ν S^αβ+2W_μνp^μ v^ν=𝒪(𝒮^2). Expressing this equation in terms of the independent variables p̂^μ and s^α yields μ∇_μ X_νp̂^μp̂^ν-(2∇_μ W^*_να-X_λ R*λμνα)s^αp̂^μp̂^ν=𝒪(𝒮^2). It is equivalent to the set of two independent constraints [0,2]: ∇_(μX_ν)=𝒪(𝒮^2), [1,2]: 2∇_(μW^*_ν)α-X^λ R_*λ(μν)α=𝒪(𝒮^2). The development above can be summarized as follows: For any pair (X_μ,W_μν) satisfying the constraint equations (<ref>) and assuming the linearized MPTD equations (<ref>) are obeyed, the quantity 𝒬^(1) given in Eq. (<ref>) will be conserved up to first order in the spin parameter, i.e. 𝒬̇^(1)=𝒪(𝒮^2). The set of equations (<ref>) admits two independent solutions: * For X^μ≠ 0, Eq. (<ref>) is satisfied provided that X^μ is a Killing vector. Recall that for any Killing vector X^μ, one has the Kostant formula <cit.> ∇_μ∇_ν X_α=X_λ Rλμνα and the second constraint Eq. (<ref>) will admit a solution if the stronger constraint 2∇_μ(W_να-1/2∇_νX_α)=𝒪(𝒮^2) holds. It is naturally solved by W_να=1/2∇_ν X_α. At the end of the day, we have shown that the quantity 𝒞_X=X_μp^μ+1/2∇_μX_νS^μν Conserved quantity for Killing vectors is conserved up to 𝒪(𝒮^2) corrections along the motion generated by the linearized MPTD equations (<ref>). This is the generalization to the case of spinning objects of the quantity conserved for a Killing vector along geodesics. Actually, one can show that this quantity is exactly conserved at any order of the multipole expansion, regardless of the nature of the multipoles considered <cit.>. * The constraint equations (<ref>) admit another independent solution, which corresponds to X^μ=0. In that case, the second equation (<ref>) reduces to ∇_(μW^*_ν)α=𝒪(𝒮^2). It is satisfied provided that W^*_μν is a Killing-Yano tensor.
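The Killing-vector condition, which appears both for geodesics and in the [0,2] constraint above, can be checked explicitly on a concrete background. The following sympy sketch verifies ∇_(μξ_ν)=0 for the two obvious Killing vectors of Schwarzschild spacetime; the metric, coordinates and signature are standard choices assumed for this illustration.

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                                         - sp.diff(g[b, c], x[d])) for d in range(4)) / 2)
           for c in range(4)] for b in range(4)] for a in range(4)]

def killing_expr(xi_up):
    """Symmetrised covariant derivative grad_(mu xi_nu) for a vector field xi^mu."""
    xi_dn = [sum(g[m, n] * xi_up[n] for n in range(4)) for m in range(4)]
    nabla = [[sp.diff(xi_dn[n], x[m]) - sum(Gamma[l][m][n] * xi_dn[l] for l in range(4))
              for n in range(4)] for m in range(4)]
    return sp.Matrix(4, 4, lambda m, n: sp.simplify(nabla[m][n] + nabla[n][m]))

print(killing_expr([1, 0, 0, 0]))   # stationarity:  xi = d/dt   -> zero matrix
print(killing_expr([0, 0, 0, 1]))   # axisymmetry:   xi = d/dphi -> zero matrix
```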
Denoting Y_μν=W^*_μν, the corresponding conserved quantity is 𝒬_Y=Y^*_μνS^μν. Rüdiger linear invariant This is Rüdiger linear invariant, which was first unravelled by R. Rüdiger in 1981 <cit.>. CHAPTER: SOLVING THE CONSTRAINTS: FIRST ORDER IN THE SPIN First order in the spin In this chapter, we will be concerned with the construction of quadratic invariants for the linearized MPTD equations (<ref>). It is thus consistent to consider a quadratic invariant that is at most linear in 𝒮: 𝒬^(2)≜ K_μνp^μ p^ν+L_μνρS^μνp^ρ. The tensors K_μν and L_μνρ, which are by definition independent of p^μ and S^μν, satisfy the algebraic symmetries K_μν=K_(μν) and L_μνρ=L_[μν]ρ. In Ricci-flat spacetimes that admit a Killing-Yano tensor on their background, we will show that the conservation equation Eq. (<ref>) admits a solution which is a generalization of the usual Carter constant, already found by R. Rüdiger in 1983 <cit.>. The uniqueness of this construction will be proven in Kerr spacetime. This chapter is organized as follows. In Section <ref>, we explicit the set of constraints that must be fulfilled for such an invariant at most linear in the spin to exist. Conservation at linear order requires to solve only two constraints. We simplify the second, most difficult, constraint in Section <ref>. We subsequently particularize our setup in Section <ref> to spacetimes admitting a Killing-Yano (KY) tensor. After deriving some general properties of KY tensors, we will prove a cornerstone result for the continuation of our work, which we will refer to as the central identity. Building on all previous sections, we will solve the aforementioned constraint for Ricci-flat (vacuum) spacetimes possessing a KY tensor in Section <ref>. This will enable us to study in full generality the quasi-invariants for the MPTD equations that are quadratic in the combination of spin and momentum. On the one hand, we recover Rüdiger's results <cit.>. On the other hand, we prove that the existence of any further quasi-invariant, which is then necessarily homogeneously linear in the spin, reduces to the existence of a non-trivial mixed-symmetry Killing tensor on the background. The significance of this result is examined for spinning test bodies in Kerr spacetime in the final Section <ref>. We show that a stationary and axisymmetric non-trivial mixed-symmetry Killing tensor does not exist on the Kerr geometry. Consequently, an additional independent quasi-constant of motion for the linearized MPTD equations does not exist. As will be detailed later on in Chapter <ref>, the linearized MPTD integrals of motion are not in involution, which implies that the system is not integrable in the sense of Liouville. § CONSTRAINTS FOR A QUADRATIC INVARIANT LINEAR IN : POLE-DIPOLE MPTD EQUATIONS, TO QUADRATIC ORDER IN THE SPIN Despite we will hereafter focus on the linearized MPTD equations, we will derive the generic constraints equation for the quantity Eq. (<ref>) to be conserved for the full pole-dipole MPTD equations, up to second order in the spin. Even the pole-dipole equations (that is, the MPTD equations with the force and torque terms being set to zero) only make sense at linear order in the spin 𝒮, it is computationally relevant to derive here the constraint they generate at 2. This will allow us (in the next chapter) to disentangle the terms originating from the pole-dipole sector of the equations from the ones originating from the quadrupole sector. 
The main advantage will be to significantly reduce the computational load encountered there. §.§.§ Useful identity for the four-velocity Let us start by deriving a useful identity. It can be shown <cit.> that the following relation holds exactly: v^μ=𝔪/μ^2(p^μ+Dμαp^α/1-d/2), d ≜ Dαα. Moreover, one can show that d=-2/μ^2^*R^*_αβγδ S^αp̂^β S^γp̂^δ. Putting all the pieces together, one can rewrite Eq. (<ref>) as μ^2/𝔪(1-d/2)v^μ =(1-d/2)p^μ+Dμαp^α =p^μ-1/μ(p̂^μp̂^α-Π^μα)^*R^*_αβγδS^β S^γp̂^δ =p^μ+1/μg^μα^*R^*_αβγδS^β S^γp̂^δ =p^μ+1/μ^*R^*μβγδS^β S^γp̂^δ. This relation will be a fundamental building block of the forthcoming computations. §.§.§ Reduction of the conservation equation Using the MPTD equations (<ref>) and the expression (<ref>) for the four-velocity, the conservation equation for the quantity (<ref>) can be written 𝒬̇^(2) =Ξ(p^λ+1/μλκθσS^κ S^θp̂^σ) [(∇_λ K_μν-2L_λμν)p^μ p^ν + (∇_λ L_αβμ-K_μρRρλαβ)S^αβp^μ-1/2L_αβρRρλγδS^αβS^γδ]!=𝒪(𝒮^3). Here, the coefficient Ξ≜𝔪^2/μ^2(1-d/2) is non-vanishing. Let us introduce the tensors U_αβγ ≜∇_γ K_αβ-2L_γ(αβ), V_αβγδ ≜∇_δ L_αβγ-K_λγRλδαβ+2/3K_λρRλδ[α^ρ g_β]γ, W_αβγδϵ ≜-1/2L_αβλRλγδϵ. By construction, they obey the algebraic symmetries U_αβγ=U_(αβ)γ, V_αβγδ=V_[αβ]γδ, W_αβγδϵ=W_[αβ]γδϵ. Notice that the orthogonality condition S^αβp_β=0 implies 2/3K_λρRλδ[α^ρ g_β]γS^αβp^γ=0. With these notations, the conservation equation simplifies to 𝒬̇^(2) =Ξ(p^λ+1/μλκθσS^κ S^θp̂^σ) ×[U_μνλp^μ p^ν+V_αβμλS^αβp^μ+W_αβλγδS^αβS^γδ]!=𝒪(𝒮^3). We will now go through a number of steps in order to express this condition in terms of the independent variables s_α and p̂^μ. First, let us expand all terms and express the spin-related quantities in terms of the independent variables s^α. For this purpose, we will make use of the aforementioned identities p^[αS^β] =p^[αs^β], S^αβ=2S^[αp̂^β]*=2s^[αp̂^β]*, S^α=Π^α_β s^β. The conservation equation becomes 𝒬̇^(2) =Ξ/μ[μ^4 U_μνρp̂^μp̂^νp̂^ρ +2 μ^3_αμνρs^αp̂^μp̂^νp̂^ρ +μ^2(4_αμνβρ+λκαρU_μνλΠ^κ_β) s^α s^βp̂^μp̂^νp̂^ρ + 2μλκαρ_γμνλΠ^κ_β s^α s^β s^γp̂^μp̂^νp̂^ρ +4 λκαρ_δμλγνΠ^κ_β s^α s^β s^γ s^δp̂^μp̂^νp̂^ρ]!=𝒪(𝒮^3). Second, we will remove the projectors. One has the identity λκαρΠ_β^κ s^α s^β=- Iλαβρσκs_α s_βp̂^σp̂^κ where we have defined Iλαβρσκ≜λκρ^αδ_σ^β+λαβρg_σκ. The proof is easily carried out, using the fact that p̂_μp̂^μ=-1: λκαρΠ_β^κ s^α s^β = λκαρ(δ^κ_β+p̂^κp̂_β) s^α s^β =[(-p̂^σp̂^κ g_σκ) λβαρ+λκαρp̂^κδ_β^σp̂_σ]s^α s^β =-(λκρ^αδ^β_σ+λαβρg_σκ)s_α s_βp̂^σp̂^κ. Using this identity, the conservation equation finally reads 𝒬̇^(2) =Ξ/μ[μ^4 U_μνρp̂^μp̂^νp̂^ρ+2μ^3 αμνρs_αp̂^μp̂^νp̂^ρ -μ^2(IλαβρσκU_μνλ+4αμνβρ g_σκ)s_α s_βp̂^μp̂^νp̂^ρp̂^σp̂^κ -2μ Iλαβρσκγμνλs_α s_β s_γp̂^μp̂^νp̂^ρp̂^σp̂^κ -4 Iλαβρσκγμλδν s_α s_β s_γ s_δp̂^μp̂^νp̂^ρp̂^σp̂^κ] !=𝒪(𝒮^3). Because the variables p̂^μ and s_α are independent, Eq. (<ref>) is equivalent to the following set of three constraints, each of them arising at a different order in the spin parameter: [0,3]: U_(μνρ)=𝒪(𝒮^3), [1,3]: α(μνρ)=𝒪(𝒮^3), [2,5]: Iλ(αβ)(μνρU_σκ)λ+4(α(μνβ)ρ g_σκ)=𝒪(𝒮^3). Notice that the [0,3] constraint (<ref>) simply reduces to ∇_(αK_βγ)=0, i.e. K_μν must be a Killing tensor of the background spacetime. The [1,3] constraint (<ref>) is more difficult to work out. In Section <ref>, we will proceed to a clever rewriting of this constraint, which will then be particularized to spacetimes admitting a Killing-Yano tensor in Section <ref>. Section <ref> will aim to solve it generally. Finally, all these results will be particularized to a Kerr background in Section <ref>. The [2,5] constraint Eq. 
(<ref>) will be a fundamental building block for computations of Chapter <ref>. § CONSERVATION EQUATION AT LINEAR ORDER IN THE SPIN We will now proceed to the aforementioned rewriting of the constraint (<ref>) by introducing a new set of variables. The three first parts of this section are devoted to the derivation of preliminary results, that will be crucial for working out the main result. §.§ Dual form of 𝐕 We want to compute the dual form of the tensor V_αβγδ ≜∇_δ L_αβγ-K_λγRλδαβ+2/3K_λρRλδ[α^ρ g_β]γ with respect to its two first indices. One has _αβγδ=∇_δ_αβγ-K_λγR^*λδαβ+2/3K_λρRλδ[αρg_β]*γ. The last term of this equality can be written as 2/3K_λρRλδ[αρg_β]*γ =1/3K^λρϵαβμνR_λδμρg_νγ =1/3K^λρϵαβμγR_λδμρ =1/3K^λ[μϵαβν]γR^**_λδμν =1/3K^λ[μϵαβν]*γR^*_λδμν =-1/6ϵ^σμνρϵ_σαβγK_λρR*λδμν =1/3(K_λγR*λδαβ+K_λαR*λδβγ+K_λβR*λδγα) =1/3K_γλR*λδαβ+2/3R*λδγ[αK_β]λ. This finally yields _αβγδ=∇_δ_αβγ-2/3K_λγR*λδαβ+2/3R*λδγ[αK_β]λ. §.§ Rüdiger variables Following Rüdiger <cit.>, let us introduce X̃_αβγ≜ L_αβγ-1/3(λ_αβγ+g_γ[α∇_β]K), where we have made use of the notations λ_αβγ≜ 2∇_[αK_β]γ, K≜ K^α_ α. The irreducible parts X_α and X_αβγ of X̃_αβγ are defined through the relation X̃_αβγ≜ X_αβγ+ϵ_αβγδX^δ, with X_[αβγ]!=0. They provide an equivalent description, since Eq. (<ref>) can be inverted as X_αβγ =X̃_αβγ-X̃_[αβγ], X^α =1/6ϵ^αβγδX̃_βγδ. Finally, a simple computation shows that the dual of 𝐗̃ is given by ^*X̃_αβγ=^*X_αβγ-2g_γ[αX_β]. §.§ The structural equation This third preliminary part will be devoted to the proof of the structural equation <cit.> ∇_δλ_αβγ=2(RλδαβK_γλ-Rλδγ[αK_β]λ)+μ_αβγδ Structural equation with μ_αβγδ ≜1/2[K_βγ;(αδ)+K_αδ;(βγ)-K_αγ;(βδ)-K_βδ;(αγ) -3(K_λ[αRλβ]γδ+K_λ[γRλδ]αβ)]. Here, μ_αβγδ possesses the same algebraic symmetries than the Riemann tensor. We remind the reader that λ_αβγ≜ 2∇_[αK_β]γ. We will use indifferently the notations ∇_α𝐓 or 𝐓_;α for the covariant derivative of a tensor 𝐓. The proof goes as a lengthy rewriting of the original expression: ∇_δλ_αβγ =∇_δ∇_α K_βγ-∇_δ∇_β K_αγ =∇_(δ∇_α) K_βγ+∇_[δ∇_α] K_βγ-∇_(δ∇_β) K_αγ-∇_[δ∇_β] K_αγ =1/2∇_(α∇_δ)K_βγ-1/2∇_(β∇_δ)K_αγ+1/2(∇_(α∇_δ)K_βγ-∇_(β∇_δ)K_αγ) +1/2∇_δ∇_αK_βγ-1/2∇_δ∇_βK_αγ. We proceed to the following rewriting of twice the quantity in brackets contained in the above expression: ∇_α∇_δ K_βγ+∇_δ∇_α K_βγ-∇_β∇_δ K_αγ-∇_δ∇_β K_αγ =2∇_α∇_δ K_βγ-2∇_β∇_δ K_αγ+∇_δ∇_αK_βγ-∇_δ∇_βK_αγ =2(∇_β∇_α K_γδ+∇_β∇_γ K_αδ-∇_α∇_β K_γδ-∇_α∇_γ K_βδ) +∇_δ∇_αK_βγ-∇_δ∇_βK_αγ =2(∇_(β∇_γ)K_αδ-∇_(α∇_γ)K_βδ)+∇_δ∇_αK_βγ-∇_δ∇_βK_αγ -2∇_α∇_βK_γδ-∇_α∇_γK_βδ+∇_β∇_γK_αδ. This yields ∇_δλ_αβγ =1/2(K_βγ;(αδ)+K_αδ;(βγ)-K_αγ;(βδ)-K_βδ;(αγ)) +(3/4∇_δ∇_αK_βγ-3/4∇_δ∇_βK_αγ-1/2∇_α∇_βK_γδ -1/4∇_α∇_γK_βδ+1/4∇_β∇_γK_αδ). Let us denote the quantity between parentheses in the last equation. It can be rearranged in the following way: 4 =3∇_δ∇_αK_βγ-3∇_δ∇_βK_αγ-2∇_α∇_βK_γδ -∇_α∇_γK_βδ+∇_β∇_γK_αδ =(3 Rλγδβ-Rλδβγ)K_αλ+(Rλδαγ-3Rλγδα)K_βλ +(3Rλαδβ-3Rλβδα+2Rλδαβ)K_γλ+(2Rλγαβ+Rλβαγ-Rλαβγ)K_δλ =5RλδαβK_γλ+4(RλδαγK_βλ-RλδβγK_αλ) +3(RλαγδK_βλ-RλβγδK_αλ+RλγαβK_δλ) =5RλδαβK_γλ+8Rλδ[α|γK_|β]λ-6K_λ[αRλβ]γδ +3 RλγαβK_δλ-3RλδαβK_γλ+3RλδαβK_γλ_=0 =8RλδαβK_γλ-8 Rλδγ[αK_β]λ-6K_λ[αRλβ]γδ-6 K_λ[γRλδ]αβ. Consequently, =2(RλδαβK_γλ- Rλδγ[αK_β]λ)-3/2(K_λ[αRλβ]γδ+ K_λ[γRλδ]αβ). Inserting this result into Eq. (<ref>) gives the structural equation (<ref>) and consequently concludes the proof. We will end this section by working out the dual form of the structural equation (<ref>). One has ∇_δ^*λ_αβγ =2R*λδαβK_γλ-ϵαβμνRλδγμK_νλ+^*μ_αβγδ =2R*λδαβK_γλ+ϵαβμνR**λδγμK_νλ+^*μ_αβγδ. 
It is now easier to compute ∇_δ^*λαβγ =2R*λδαβKγλ-1/2ϵ_μαβνϵ^μγρσR*λδρσKνλ+^*μαβγδ =2R*λδαβKγλ+3 δ^[γ_αδ^ρ_βδ_ν^σ]R*λδρσKνλ+^*μαβγδ, which yields ∇_δ^*λ_αβγ =2R*λδαβK_γλ+3 g_γ[α|R*λδ|βν]Kνλ+^*μ_αβγδ =2R*λδαβK_γλ+(R*λδαβK_γλ+R*λδβρK_λρg_αγ -R*λδαρK_λρg_βγ) +^*μ_αβγδ. Rearranging the different terms leads to the final expression ∇_δ^*λ_αβγ=3R*λδαβK_γλ-2R*λδ[αρg_β]γK_λρ +^*μ_αβγδ. Structural equation (dual form) §.§ Some useful lemmas about Killing tensors In what follows, 𝐊 will always denote a rank-2 Killing tensor. We aim to derive several useful relations involving such objects. We first notice that, because of the assumed symmetry of 𝐊, the equation defining a Killing tensor ∇_(αK_βγ)=0 can be written ∇_α K_βγ+∇_β K_αγ+∇_γ K_αβ=0. Let us denote K≜ Kαα the trace of 𝐊. One has ∇^λ K_αλ=-1/2∇_α K. Contracting Eq. (<ref>) with g^βγ yields ∇_α K+2∇^λ K_αλ=0, which is the desired result. One has ∇_δ∇_αK=0. The proof is direct, by contracting the Ricci identity ∇_δ∇_αK_μν=-RλμδαK_λν-RλνδαK_λμ with the inverse metric g^μν, and using the symmetry properties of the Riemann tensor. We have the symmetry property Rλ(αβ)ρK_λρ=RλαβρK_λρ. The proof is standard: Rλ(αβ)ρK_λρ =1/2(Rλαβρ+Rλβαρ)K_λρ =1/2(Rλαβρ+Rαρλβ)K_λρ =RλαβρK_λρ. One has ∇_λ∇_αKλγ=RλαK_λγ+RλγαρK_λρ. The proof is again direct, contracting the Ricci identity ∇_δ∇_αK_βγ=-RλβδαK_λγ-RλγδαK_λβ with g^δβ and using the symmetry properties of the Riemann tensor. Let us denote Δ≜∇^α∇_α. We have Δ K_βγ=∇_β∇_γ K-2(Rλ(βK_γ)λ+RλβγρK_λρ). Let us take the covariant derivative of equation (<ref>): ∇_δ∇_α K_βγ+∇_δ∇_β K_αγ+∇_δ∇_γ K_αβ=0. Contracting this equation with g^δα yields Δ K_βγ =-∇^λ∇_β K_λγ-∇^λ∇_γ K_λβ =-∇_β∇^λ K_λγ-∇_λ∇_βKλγ -∇_γ∇^λ K_λβ-∇_λ∇_γKλβ =∇_(β∇_γ)K-2 Rλ(βK_γ)λ-2RλβγρK_λρ, which leads to the desired result. §.§ Reduction of the second constraint We will now gather the results obtained in the three previous subsections to express the constraint (<ref>) in terms of the irreducible variables introduced above. Let us recall that Eq. (<ref>) reads α(βγδ)=0. Using Eqs. (<ref>), (<ref>) and (<ref>), we can rewrite _αβγδ =∇_δ^*X̃_αβγ+1/3∇_δ(^*λ_αβγ+g_γ[α∇_β]*K)-2/3K_λγR*λδαβ+2/3R*λδγ[αK_β]λ =∇_δ(^*X_αβγ-2g_γ[αX_β])+1/3(R*λδαβK_γλ+2R*λδγ[αK_β]λ)_≜ +1/3∇_δ( g_γ[α∇_β]*K)+1/3^*μ_αβγδ-2/3R*λδ[αρg_β]γK_λρ. On the one hand, we have =R*λδαβK_γλ+R*λδγαK_βλ-R*λδγβK_αλ =R*λδβγK_αλ+2R*λδα[βK_γ]λ. And on the other hand, we can write ∇_δ( g_γ[α∇_β]*K) =1/2ϵαβμνg_γμ∇_δ∇_ν K =1/2ϵαβγμ∇_δ∇_μ K. Gathering these expressions leads to _αβγδ =∇_δ^*X_αβγ-2g_γ[α|∇_δ X_|β]-2/3R*λδ[αρg_β]γK_λρ+1/3R*λδβγK_αλ +2/3R*λδα[βK_γ]λ+1/6ϵαβγμ∇_δ∇_μ K+1/3^*μ_αβγδ. When symmetrizing the last three indices, the four last terms of this expression vanish. The constraint (<ref>) takes the final form ^*X_α(βγ;δ)+X_α;(βg_γδ)-g_α(βX_γ;δ) +1/3(g_α(βR*λγδ)ρ-R*λα(βρg_γδ))K_λρ=0. First order constraint for quadratic conserved quantities Our principal motivation being the motion of spinning particles in Kerr spacetime, we will now focus on spacetimes possessing a Killing-Yano (KY) tensor. It will turn out that the constraint (<ref>) can still be dramatically simplified in such a framework. § SPACETIMES ADMITTING A KILLING-YANO TENSOR We now particularize our analysis to spacetimes equipped with a Killing-Yano (KY) tensor, i.e. a rank-2, antisymmetric tensor Y_μν=Y_[μν] obeying the Killing-Yano equation: ∇_(αY_β)γ=0. Killing-Yano equation In this case, the constraint (<ref>) is automatically fulfilled, because K_αβ≜ YαλY_λβ is a Killing tensor. 
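As a concrete sanity check of these two statements, one can verify them symbolically in the Schwarzschild limit a→0, where the Killing-Yano tensor reduces to Y=r^3 sinθ dθ∧dφ. The following sympy sketch is an illustrative aside only (the explicit coordinate form of Y quoted here, and the snippet itself, are assumptions made for the purpose of this example, not part of the derivation above); it checks both the Killing-Yano equation for Y and the Killing equation for the induced tensor K_αβ:

```python
import sympy as sp

# Schwarzschild limit (a -> 0) in coordinates (t, r, theta, phi), signature (-,+,+,+).
t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc}.
Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                                     - sp.diff(g[b, c], x[d])) for d in range(n))/2)
         for c in range(n)] for b in range(n)] for a in range(n)]

# Candidate Killing-Yano tensor: Y = r^3 sin(theta) dtheta /\ dphi (assumed a -> 0 limit).
Y = sp.zeros(n, n)
Y[2, 3], Y[3, 2] = r**3*sp.sin(th), -r**3*sp.sin(th)

def cov_d(T):
    """Covariant derivative D[c][a][b] = nabla_c T_{ab} of a rank-2 covariant tensor."""
    return [[[sp.simplify(sp.diff(T[a, b], x[c])
                          - sum(Gam[d][c][a]*T[d, b] + Gam[d][c][b]*T[a, d] for d in range(n)))
              for b in range(n)] for a in range(n)] for c in range(n)]

DY = cov_d(Y)
# Killing-Yano equation: nabla_(c Y_a)b = 0.
print("Killing-Yano equation:",
      all(sp.simplify(DY[c][a][b] + DY[a][c][b]) == 0
          for a in range(n) for b in range(n) for c in range(n)))

# Induced tensor K_{mn} = Y_m^l Y_{n l}; it should obey the Killing equation nabla_(a K_bc) = 0.
K = sp.Matrix(n, n, lambda m_, n_: sp.simplify(
        sum(ginv[l, s]*Y[m_, s]*Y[n_, l] for l in range(n) for s in range(n))))
DK = cov_d(K)
print("Killing tensor equation:",
      all(sp.simplify(DK[a][b][c] + DK[b][c][a] + DK[c][a][b]) == 0
          for a in range(n) for b in range(n) for c in range(n)))
```

In this limit K reduces to r^4(dθ^2+sin^2θ dφ^2), so that K_μνp^μ p^ν is the square of the total angular momentum, the familiar (reducible) Killing tensor of the Schwarzschild geometry.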
The aim of this section is twofold. First, we will review some general properties of KY tensors useful for the continuation of our analysis. Even if most of them were previously mentioned in the literature <cit.>, the goal of the present exposition is to provide a self-contained summary of these results and of their derivations. Second, we will work out an involved identity that will become a cornerstone for solving the constraint (<ref>). This so-called central identity is the generalization of a result mentioned by Rüdiger in <cit.>. Rüdiger only quickly sketched the proof of his identity, while we aim here to provide a more pedagogical derivation of this central result. §.§ Some general properties of KY tensors Let us review some basic properties of KY tensors, sticking to our conventions and simplifying the notations used in the literature <cit.>. §.§.§ An equivalent form of the KY equation The symmetries ∇_(αY_β)γ=∇_α Y_(βγ)=0 ensure the quantity ∇_α Y_βγ to be totally antisymmetric in its three indices. Consequently, there exists a vector ξ such that ∇_αY_βγ=ϵ_αβγλξ^λ.Killing-Yano equation (equivalent form) The value of ξ can be found by contracting Eq. (<ref>) with ϵ^μαβγ and making use of the contraction formula for the Levi-Civita tensor (<ref>). We obtain ξ^α=-1/3∇_λ Y^*λα. Notice that Eq. (<ref>) with ξ given by Eq. (<ref>) is totally equivalent to the Killing-Yano equation. §.§.§ Dual KY equation Let us derive the equivalent to the Killing-Yano equation for the dual of the KY tensor Y^*_μν. One has ∇_α Y^*βγ =1/2ϵ^βγμν∇_α Y_μν =1/2ϵ^μνβγϵ_μναλξ^λ =-2δ^[β_αδ^γ]_λξ^λ =-2δ^[β_αξ^γ], leading to ∇_αY^*_βγ=-2g_α[βξ_γ]. Conformal Killing-Yano equation This equation is nothing but the conformal Killing-Yano equation for the dual tensor Y^*_μν. This proves Proposition <ref>.1. §.§.§ Integrability conditions for the KY equation We will work out some necessary conditions for the tensor 𝐘 to be a Killing-Yano tensor, i.e. relations that must hold for 𝐘 to satisfy the KY equation. We will refer to them as integrability conditions for the Killing-Yano equation. We begin by proving some preliminary results: One has ∇^αξ_α=0. We proceed by applying the Ricci identity to the expression: ∇^αξ_α =-1/3∇_α∇_λ Y^*λα =-1/6∇_α∇_λY^*λα =1/6g^λμg^αν(RρμαλY^*_ρν+RρναλY^*_μρ) =1/3R^αβY^*_αβ =0. For any antisymmetric tensor A_αβ=A_[αβ], one has RαμνβA^μν=R[αμνβ]A^μν. The proof is straightforward: RβμναA^μν =RναβμA^μν=RαμνβA^νμ =-RαμνβA^μν. For any antisymmetric tensor A_αβ=A_[αβ], we have A_μνRμαβν=-1/2A^μνR_αβμν. The proof is again pretty simple: A_μνRμαβν =A^μν(R_αβνμ+R_ανμβ) =-A^μνR_αβμν+A^μνR_ναβμ. The conclusion is reached using Lemma <ref>. We can now work out the first integrability condition. One has ∇_αξ_β =-1/3∇_α∇_λ Y*λβ =-1/3∇^λ∇_α Y^*_λβ-1/3∇_α∇_λY*λβ =2/3∇^λ(g_α[λξ_β])-1/3g^λρ∇_α∇_λY^*_ρβ =1/3∇_αξ_β-1/3g_αβ∇^λξ_λ+1/3(RλαY^*_λβ+RλβαρY^*_ρλ). This gives rise to the Killing-Yano equation first integrability condition: ∇_αξ_β=1/2(RλαY^*_λβ+RλαβρY^*_λρ)Killing-Yano first integrability condition or, equivalently, using Lemma <ref>: ∇_αξ_β=1/2 RλαY^*_λβ-1/4R_αβμνY^*μν. Symmetrizing this equation gives the reduced form ∇_(αξ_β)=1/2Y^*_λ(αRλβ).Reduced Killing-Yano first integrability condition In particular, for Ricci-flat spacetimes, ξ^μ is a Killing vector. This proves proposition <ref>.5. A second integrability condition can be written as follows. Taking the derivative of the defining equation ∇_(αY_β)γ=0 yields ∇_α∇_β Y_γδ+∇_α∇_γ Y_βδ=0. 
Out of this equation, we can write the three (equivalent) identities ∇_α∇_β Y_γδ+∇_γ∇_α Y_βδ+∇_α∇_γY_βδ =0, ∇_γ∇_α Y_βδ+∇_β∇_γ Y_αδ+∇_γ∇_βY_αδ =0, ∇_β∇_γ Y_αδ+∇_α∇_β Y_γδ+∇_β∇_αY_γδ =0. Summing the first and the third and subtracting the second leads to 2∇_α∇_β Y_γδ =∇_γ∇_αY_βδ+∇_γ∇_βY_αδ+∇_α∇_βY_γδ =RλδβγY_αλ+RλδαγY_βλ+RλδβαY_γλ -(Rλβγα+Rλαγβ+Rλγαβ)Y_λδ =2RλαβγY_λδ+RλδβγY_αλ+RλδαγY_βλ+RλδβαY_γλ. This gives rise to the second integrability condition ∇_α∇_βY_γδ=RλαβγY_λδ+1/2(RλδβγY_αλ+RλδαγY_βλ+RλδβαY_γλ). Killing-Yano second integrability condition A reduced form can be obtained by contracting the equation above with g^γδ. We obtain 0 =RλαβρY_λρ+1/2(RλβY_αλ+RλαY_βλ+RλρβαY_ρλ). Using Lemma <ref> shows that the first and the fourth terms of the right-hand side of this relation cancel. We obtain the reduced form of the second integrability condition Y_λ(αRλβ)=0. Reduced Killing-Yano second integrability condition We can also symmetrize the indices γδ in Eq. (<ref>) to obtain the symmetrized second integrability condition Rλαβ(γY_δ) λ-1/2Y_λ(γRλδ)αβ+Rλ(γδ)(αY_β) λ=0. Symmetrized Killing-Yano integrability condition §.§ The central identity Our so-called central identity will consists into a clever rewriting of the expression K_λ(βR*λγδ)α, and was first derived in <cit.>. Its reduced version first introduced by Rüdiger <cit.> is a rewriting of the contracted expression K_λρR*λαβρ. §.§.§ The KY scalar Let us define the scalar quantity 𝒵≜1/4Y^*_αβY^αβ. Its first covariant derivative takes the form ∇_μ𝒵 =1/2∇_μ Y^*_αβY^αβ =ξ_[αg_β]μY^αβ =ξ_α Yαμ. Its second covariant derivative can be expressed as (notice that ∇_α∇_β𝒵=∇_β∇_α𝒵) ∇_μ∇_ν𝒵 =∇_μξ_α Yαν+ξ^αϵ_μανρ ξ^ρ =∇_μξ_α Yαν=∇_νξ_α Yαμ, where the last equality follows from symmetry of the right-hand side. The following identity is also useful: Y_α[βY_γ]δ=-1/2Y_βγY_αδ-1/2𝒵 ϵ_αβγδ. The proof consists into noticing that the combination Y_αβY_γδ-Y_αγY_βδ+Y_βγY_αδ is totally antisymmetric in all its indices. It must consequently be proportional to the Levi-Civita tensor: Y_αβY_γδ-Y_αγY_βδ+Y_βγY_αδ=𝒜 ϵ_αβγδ, where the constant 𝒜 remains to be determined. This is achieved by contracting the equation above with ϵ^αβγδ, yielding 3ϵ^αβγδY_αβY_γδ=-4! 𝒜. Using the definition of 𝒵 leads to 𝒜=-𝒵, which gives the desired result. §.§.§ Derivation of the central identity The trick for deriving the central identity is to define the tensor 𝒯_μνρσ≜ϵ_μαβσ∇^α∇^β Y_νλYλρ, to perform two different rewritings of this expression and finally to equate them. Notice that we recover the tensor used in Rüdiger's proof <cit.> by contracting the last two indices of Eq. (<ref>). First, applying the Ricci identity to Eq. (<ref>) and making use of Eq. (<ref>) yields 𝒯_μνρσ =1/2ϵ_μαβσ∇^α∇^βY_νλYλρ =R*κνμσK_κρ-R*κλμσY_ν[κY_λ]ρ =-R*κνσμK_κρ+1/2R*κλμσ(Y_κλY_νρ+𝒵ϵ_νκλρ). Second, using Eqs. (<ref>), (<ref>) and Lemma <ref>, we rewrite 𝒯_μνρσ as follows: 𝒯μνρσ =-1/2ϵ^μαβσϵ_νλγδ∇_α∇_β Y^*γδYλρ =-12δ^[σ_ν Yμρ∇_α∇_β Y^*αβ] =-δ^σ_ν(Yμρ∇_α∇_β Y^*αβ+Yαρ∇_α∇_β Y^*βμ+Yβρ∇_α∇_β Y^*μα) + δ^μ_ν(Yσρ∇_α∇_β Y^*αβ+Yαρ∇_α∇_β Y^*βσ+Yβρ∇_α∇_β Y^*σα) -δ^α_ν(Yσρ∇_α∇_β Y^*μβ+Yμρ∇_α∇_β Y^*βσ+Yβρ∇_α∇_β Y^*σμ) +δ^β_ν(Yσρ∇_α∇_β Y^*μα+Yμρ∇_α∇_β Y^*ασ+Yαρ∇_α∇_β Y^*σμ) =δ^σ_ν(3Yαρ∇_αξ^μ+2Yβρδ^[μ_β∇_αξ^α]) -δ^μ_ν(3Yαρ∇_αξ^σ+2Yβρδ_β^[σ∇_αξ^α]) -3Yσρ∇_νξ^μ+3Yμρ∇_νξ^σ+2Yβρδ_β^[σ∇_νξ^μ] -2(Yσρδ_ν^[μ∇_αξ^α]+Yμρδ_ν^[α∇_αξ^σ]+Yαρδ_ν^[σ∇_αξ^μ]) =δ^σ_ν Yλρ∇_λξ^μ-δ^μ_ν Yλρ∇_λξ^σ+ Yμρ∇_νξ^σ-Yσρ∇_νξ^μ. 
Equating the two expressions of 𝒯_μνρσ obtained leads to R*λνμσK_ρλ =-1/2R*κλμσ(Y_κλY_νρ+𝒵ϵ_νκλρ)-g_μνYλρ∇_λξ_σ +g_νσYλρ∇_λξ_μ-Y_σρ∇_νξ_μ+Y_μρ∇_νξ_σ. This is the cornerstone equation for deriving both the central identity and its reduced form. §.§.§ Non-reduced form Fully symmetrizing Eq. (<ref>) in (μνρ) leads to K_λ(βR*λγδ)α=-Yλ(βg_γδ)ξ_α;λ+g_α(βYλγξ_δ);λ-Y_α(βξ_γ;δ). We make use of the reduced integrability condition (<ref>) to write the last term as Y_α(βξ_γ;δ) =1/2Y_α(β|Y^*_λ|γRλδ) =1/2Y_α(βY*λγG_δ)λ+R/4Y_α(βY^*_γδ) =1/2Y_α(βY*λγG_δ)λ where G_αβ≜ R_αβ-R/2g_αβ is the Einstein tensor. This gives rise to the central identity: K_λ(βR*λγδ)α=-Yλ(βg_γδ)ξ_α;λ+g_α(βYλγξ_δ);λ-1/2Y_α(βY*λγG_δ)λ. Central identity §.§.§ Reduced form Rüdiger's reduced form of the central identity can be derived by contracting Eq. (<ref>) with g^ρσ and using Eqs. (<ref>) and (<ref>): R*λνμρK_λρ =-1/2R*κλμρ(Y_κλY_νρ+𝒵ϵ_νκλρ)-g_μνY^λρ∇_λξ_ρ+Yλν∇_λξ_μ+Y_μρ∇_νξ^ρ =-1/2R*κλμρ(Y_κλY_νρ+𝒵ϵ_νκλρ)+g_μνY^λρ∇_ρξ_λ-Yλν∇_μξ_λ +YλνY^*_ρ(λRρμ)+Y_μρ∇_νξ^ρ =-1/2R*κλμρ(Y_κλY_νρ+𝒵ϵ_νκλρ)-2∇_μ∇_ν𝒵+g_μνΔ𝒵+YλνY^*_ρ(λGρμ). The first term of the above equation can be simplified by noticing that, on the one hand, making use of Lemma <ref>, 1/2R*κλμρY_κλY_νρ = 1/4ϵμαβλRσραβY_ρσY_λν =-1/4ϵμαβλRσραβY^**_ρσY_λν =-1/8ϵ_μαβλϵ^ρσγδRαβσρY^*_γδYλν =3Rαβ[αμY^*_βλ]Yλν =1/4Yλν(4RβμY^*_βλ+4RβλY^*_μβ-2RαβμλY^*_αβ-2RY^*_μλ) =1/4Yλν[4(RαμλβY^*_αβ+RβμY^*_βλ)+4RβλY^*_μβ-2RY^*_μλ] (<ref>)=2Yλν∇_μξ_λ+RβλYλνY^*_μβ-1/2RYλνY^*_μλ =2∇_μ∇_ν𝒵+GβλYλνY^*_μβ. On the other hand, the term 1/2R*κλμρ𝒵ϵ_νκλρ can be reduced thanks to the identity 1/4𝒵ϵ^μαβλϵ_λρσνRσραβ =3/2𝒵δ^μ_[ρRσρσν] =1/2𝒵(2 Rμν-δ^μ_ν R) =𝒵 Gμν. Putting all pieces together, we obtain the reduced central identity R*λνμρK_λρ =-4∇_μ∇_ν𝒵+g_μνΔ𝒵+Y*λ(ρG_μ)λYρν -GβλYλνY^*_μβ-𝒵 G_μν. Reduced central identity This is the generalization of Rüdiger's reduced central identity <cit.> to non Ricci-flat spacetimes. Notice that the occurrence of Einstein tensor in the relation Eq. (<ref>) that we derived suggests that Rüdiger's quadratic invariant will admit a generalization to the Kerr-Newmann spacetime which also admits a Killing-Yano tensor, once the Einstein tensor is replaced with the electromagnetic stress-energy tensor. This remains to be investigated. §.§.§ Central identity in Ricci-flat spacetimes Let us now particularize our analysis to vacuum spacetimes, i.e. Ricci-flat spacetimes R_αβ=G_αβ=0. This includes in particular the astrophysically relevant Kerr spacetime. Using the reduced integrability condition (<ref>), the central identity (<ref>) becomes K_λ(βR*λγδ)α =-Yλ(βg_γδ)ξ_α;λ+g_α(βYλγξ_δ);λ =Yλ(βg_γδ)ξ_λ;α-g_α(βYλγ|ξ_λ;|δ). Making use of Eq. (<ref>), it takes the final form K_λ(βR*λγδ)α=∇_α∇_(β𝒵 g_γδ)-g_α(β∇_γ∇_δ)𝒵, Central identity in Ricci-flat spacetimes which does not appear in Rüdiger <cit.>. The reduced central identity (<ref>) becomes R*λμνρK_λρ=-4∇_μ∇_ν𝒵+g_μνΔ𝒵, Reduced central identity in Ricci-flat spacetimes as obtained by Rüdiger <cit.>. § SOLUTIONS TO THE CONSTRAINT IN RICCI-FLAT SPACETIMES In this Section, one will gather the results obtained in the two previous sections of this chapter. Our aim will be to solve the constraint (<ref>) in Ricci-flat spacetimes admitting a KY tensor. The Ricci-flatness assumption will enable us to make use of the simple expressions provided by the central identity (<ref>) and its reduced form (<ref>). §.§ Simplification of the constraint Plugging Eq. (<ref>) into Eq. 
(<ref>) leads – after a few easy manipulations – to the constraint ^* X_α(βγ;δ)+∇_(β|[X_α+4/3∇_α𝒵]g_|γδ)-g_α(β∇_δ[X_γ)+4/3∇_γ)𝒵]=0. It is straightforward to see that it admits the following non-trivial solution X_αβγ=0, X_α=-4/3∇_α𝒵. As we will see later, this solution will lead to Rüdiger's invariant. In what follows, we will seek a more general solution to Eq. (<ref>), which does not assume X_αβγ=0. The first step is to simplify more the constraint (<ref>). We begin by rewriting it in the much simpler form [^* X_α(βγ+Y_α g_(βγ-g_α(βY_γ]_;δ)=0 by introducing the shifted variable Y_α≜ X_α+4/3∇_α𝒵. We subsequently rewrite Eq. (<ref>) in order to remove the dual operator from the tensor 𝐗. Let us define the Hodge dual Ỹ_αβγ of Y_α: Ỹ_αβγ≜ϵ_μαβγY^μ ⇔ Y_α≜-1/6ϵ_αμνρỸ^μνρ. Contracting Eq. (<ref>) with ϵ^αμνρ and using the usual properties of the Levi-Civita tensor, we get -3δ^[μ_(βXνρ]γ;δ)+Ỹμνρ;(βg_γδ)-ϵ(βμνρY_γ;δ)=0. Now, using the fact that ϵ^βμνρY_γ;δ=4δ_γ^[βỸμνρ];δ, the last term of the previous equation reads -ϵ(βμνρY_γ;δ)=-g_(βγỸμνρ;δ)+3δ^[μ_(γỸνρ]β;δ). Putting all pieces together, Eq. (<ref>) becomes equivalent to δ_(β^[μ(Xνρ]γ-Ỹνρ]γ)_;δ)=0. It follows from Eq. (<ref>) and from the definition of L_αβγ=L_[αβ]γ that X_αβγ is antisymmetric on its two first indices, X_αβγ=X_[αβ]γ. Let us decompose X_αβγ into its traces and trace-free parts: X_αβγ=A_α g_βγ+B_βg_αγ+C_γ g_αβ+D^tf_αβγ. Note that the constraint X_[αβγ]=0 reduces to D^tf_[αβγ]=0. Moreover, since X_αβγ=X_[αβ]γ, it implies that one can set C_γ=0. Plugging the above decomposition into Eq. (<ref>) and using the identity δ_(α^[μδ_β)^ν]=0, the constraint (<ref>) becomes δ^[μ_(β(Dtf νρ]γ-Ỹνρ]γ)_;δ)=0. This implies that the 1-forms A_α and B_β determining the trace part of X_αβγ are left unconstrained. However, these trace parts produce terms into the conserved quantity (<ref>) containing a p_μ S^μν factor, which vanish due to the spin supplementary condition (<ref>). All in all, we can, without loss of generality, set A_α=B_β=0=C_γ, i.e. consider that X_αβγ reduces to its traceless part. The constraint equation can be written in the short form δ_(β^[μWνρ]γ;δ)=0, where we have defined W_αβγ≜ D^tf_αβγ-Ỹ_αβγ, which is by definition traceless and antisymmetric in its two first indices. Contracting the constraint (<ref>) with -1/2ϵ_μνρα leads to the equivalent condition ^*W_α(βγ;δ)=0, where ^*W_αβγ= ^*D^tf_αβγ-2g_γ[αY_β]. Given a solution to the simplified constraint (<ref>), the quasi-invariant K_μνp^μ p^ν + L_αβγS^αβp^γ can be constructed using Eqs. (<ref>), (<ref>) and (<ref>) which leads to L_αβγ=D^tf_αβγ +1/3λ_αβγ+ϵ_αβγδ(Y^δ - 4/3∇^δ Z). §.§ The Rüdiger quasi-invariant 𝒬_R Let us first recover Rüdiger's quasi-invariant <cit.>. In terms of our new variables, Rüdiger's solution (<ref>) is simply D^tf_αβγ=0, Y_α=0. Substituting in (<ref>), it leads to the quasi-invariant 𝒬_R =K_μνp^μ p^ν+1/3λ_μνρS^μνp^ρ -4/3ϵ_μνρσS^μνp^ρ∇_σ𝒵. Let us work on each term of the right-hand side separately: * We recall the definition L_α≜ Y_αλp^λ. The definition (<ref>) leads to K_μνp^μ p^ν=L_α L^α . * Using the definitions (<ref>) and (<ref>) and the properties (<ref>) and (<ref>) we have λ_μνρS^μνp^ρ = 2∇_μ K_νρS^μνp^ρ =2(ϵ_μνλσϵ^μναβYλρ+ϵ_μλρσϵ^μναβYνλ)ξ^σp̂_α S_β p^ρ =-2(4Yλρξ^σp̂_[λS_σ]p^ρ+6Y[λλp̂_ρ S_σ]ξ^σ p^ρ) =2(3 Yλρp^ρξ^σp̂_σ S_λ+YσλS_λξ^σp̂_ρ p^ρ-Yσλp̂_λξ^σ S_ρ p^ρ_=0) =-2μξ^σ YσλS_λ+6μ^-1 L_α S^αξ_β p^β =-2μ S^α∂_α𝒵+6μ^-1 L_α S^αξ_β p^β. 
Using (<ref>), the factor L_α S^α can be written in a much more enlightening way: L_α S^α =Y_αλp^λ S^α=Y^*_αλp^[αS^λ]*=-μ/2Y^*_αλS^αλ =-μ/2𝒬_Y, where 𝒬_Y is Rüdiger's linear invariant <cit.>, which is also conserved up to linear order in the spin. * Using the definition of the spin vector (<ref>) together with the spin supplementary condition allows to write ϵ_μνρσ S^μνp^ρ∇^σ𝒵 =4δ_ρ^[αδ_σ^β]∇^σ𝒵p̂_α S_β p^ρ =4p̂_[ρS_σ]∇^σ𝒵 p^ρ =2S_σ∇^σ𝒵p̂_ρ p^ρ =-2μ S^α∂_α𝒵. Putting all the pieces together, we get the following form for 𝒬_R, 𝒬_R=L_αL^α+2μS^α∂_α𝒵-𝒬_Y ξ_βp^β. Rüdiger quadratic invariant which is identically the quadratic Rüdiger invariant <cit.>. For Ricci-flat spacetimes, ξ^μ is a Killing vector and ξ_α p^α can be upgraded to an invariant with a 𝒪(𝒮^1) correction. This implies that the last term in (<ref>) is trivial since it is a product of quasi-invariant at linear order in 𝒮. §.§ Trivial solutions to the algebraic constraints Let us now discuss more general solutions to the simplified algebraic constraints (<ref>). By linearity, we can substract the Rüdiger quasi-invariant (<ref>) from the definition of quasi-invariants (<ref>) where L_αβγ is given in Eq. (<ref>). Given a solution to the simplified algebraic constraints (<ref>), one therefore obtains a quasi-invariant of the form 𝒬(D_αβγ^tf,Y_α) =(D^tf_αβγ+ϵ_αβγλY^λ)S^αβp^γ =2( ^*D^tf_αβγ+ ^*ϵ_αβγλY^λ)S^αp̂^β p^γ =2μ ^*D^tf_α(βγ)S^αp̂^βp̂^γ-2μ S_α Y^α after using Eq. (<ref>). Such quasi-invariant is homogeneous in S. Looking at (<ref>), it is appealing to first attempt to generate a quasi-invariant through solving the stronger algebraic constraint ^*W_α(βγ)=0. However, this procedure only leads to identically vanishing quasi-invariants. Indeed, by virtue of Eq. (<ref>), one then has ^*D^tf_α(βγ)=g_α(γY_β)-g_βγY_α. This yields 𝒬(D_αβγ^tf,Y_α) =-2μ S_α Y^αp̂_βp̂^β-2μ S_α Y^α=0. This implies that new non-trivial quasi-invariants can only be generated by making a non-trivial use of the symmetrized covariant derivative _;δ) in Eq. (<ref>). §.§ Invariants homogeneously linear in 𝒮 Non-trivial invariants of the form (<ref>) can only be generated by finding a solution to the differential constraint (<ref>). As we will see in this section, this problem and the form of the generated invariant can be recast in a very simple form. The first step is to express the invariant (<ref>) as a function of ^*W_αβγ only. Contracting Eq. (<ref>) with g^βγ yields Y_α=1/3 ^*Wαλλ. Rearranging Eq. (<ref>) and making use of Eq. (<ref>), we get ^*D_αβγ^tf= ^*W_αβγ+2/3g_γ[α ^*Wβ]λλ. Plugging these two expressions into Eq. (<ref>) and using the orthogonality condition p̂_α S^α=0 leads to the simple expression 𝒬(W_αβγ)=2μ ^*W_α(βγ)S^αp̂^βp̂^γ. Now, because the dualization is an invertible operation, the giving of ^*W_α(βγ) is equivalent to the giving of the symmetrized part in its two last indices of a rank-3 tensor N_αβγ antisymmetric in its two first indices, such that N_αβγ≡ ^*W_αβγ, which obeys N_α(βγ;δ)=0. Note that one cannot impose that N_αβγ is also symmetric in βγ otherwise it would vanish because one would have N_αβγ=-N_βαγ=-N_βγα=N_γβα=N_γαβ=-N_αγβ=-N_αβγ. The tensor T_αβγ≜ N_α(βγ) obeys the cyclic identity T_αβγ+T_βγα+T_γαβ=0 but since T_αβγ is not symmetric in its two first indices, it is not totally symmetric and the condition defining the mixed-symmetry tensor (<ref>) is distinct from the condition defining a Killing tensor K_(αβγ;δ)=0 where K_αβγ is totally symmetric K_(αβγ)=K_αβγ. 
In terms of representation of the permutation group, T_αβγ is a {2,1} Young diagram. Consequently, the following statement holds: Let (ℳ,g_μν) be a (3+1)-dimensional Ricci-flat spacetime admitting a Killing-Yano tensor such that there exists on ℳ a mixed-symmetry Killing tensor, i.e., a rank-3 tensor T_αβγ which is a {2,1} Young tableau, i.e., built as T_αβγ= N_α(βγ) such that N_αβγ=N_[αβ]γ, satisfying the (differential) constraint T_α(βγ;δ)=0. Then, the quantity 𝒩≜ T_αβγS^αp̂^βp̂^γ is a (homogeneously linear in 𝒮) quasi-invariant for the linearized MPT equations on ℳ, i.e. 𝒩τ=𝒪(𝒮^2). §.§ Trivial mixed-symmetry Killing tensors If a mixed-symmetry Killing tensor T_αβγ is found, the only point to be addressed before claiming the existence of a new quasi-invariant is to check its non-triviality, i.e., it should not be the product of two others quasi-invariants. We define a trivial mixed-symmetry Killing tensor T_αβγ as mixed-symmetry Killing tensor that generates a trivial quasi-invariant, i.e. an invariant which can be written as the product of other quasi-invariants.[It remains to be investigated if such a tensor necessarily takes the form of a sum of direct products of tensors such that Eq. (<ref>) holds. We have not found any simple argument for proving this assertion.] A non-trivial mixed-symmetry Killing tensor is a mixed-symmetry Killing tensor which is not trivial. If the spacetime admits a Killing-Yano tensor and a Killing vector, we can construct a trivial mixed-symmetry Killing tensor of the form T_αβγ=Y_α(βξ_γ ) . built as T_αβγ = N^(1)_α(βγ)=N^(2)_α(βγ) from either N^(1)_αβγ=Y_αβξ_γ , or N^(2)_αβγ=Y_αγξ_β - Y_βγξ_α . It is straightforward from Eq. (<ref>) and the Killing equation that they obey (<ref>). Now, for this T_αβγ the associated quasi-invariant is the following product of quasi-invariants: 𝒬=-1/2𝒬_Y ξ^αp̂_α , where the linear Rüdiger quasi-invariant 𝒬_Y is defined in Eq. (<ref>) and obeys Eq. (<ref>). The question of existence of quasi-invariants beyond the ones found by Rüdiger therefore amounts to determine the existence of non-trivial mixed-symmetry Killing tensors. In order to gain some intuition about mixed-symmetry Killing tensors, it is useful to derive the general such tensor in Minkowski spacetime. For a Riemann flat spacetime, the covariant derivative becomes a coordinate derivative in Minkowskian coordinates (in any dimension). We can therefore consider the index α in Eq. (<ref>) as a parametric index and consider the list of 2-components objects K_(α)βγ=T_αβγ symmetric under the exchange of βγ, parametrized by α. The constraint (<ref>) is then equivalent to the Killing tensor equation for each K_(α)βγ, α fixed. Since all Killing tensors in Minkowski spacetime are direct products of Killing vectors, we can write K_(α)βγ as a sum of terms of the form K_(α)βγ=M_(α)(i)(j)ξ^(i)_βξ^(j)_γ where (i),(j) label the independent Killing vectors. In order to respect the symmetries of a mixed-symmetry tensor we can write in particular the resulting T_αβγ as a linear combination of terms X_α(βξ_γ) where ξ^γ is a Killing vector. The symmetry properties of a mixed-symmetry tensor imply that X_αβ=X_[αβ]. The constraint (<ref>) finally reduces to the condition that X_αβ is a Killing-Yano tensor. We have therefore proven that any mixed-symmetry tensor in Minkowski spacetime takes the trivial form (<ref>). Now, Kerr spacetime admits a non-trivial Killing tensor and curvature and further analysis is required. 
We will address the existence of a mixed-symmetry tensor for Kerr spacetime in the following. § LINEAR INVARIANTS FOR KERR SPACETIME We will now derive the explicit expressions for the various invariants discussed above particularized to Kerr spacetime, which is a Ricci-flat spacetime admitting an irreducible Killing-Yano tensor as well as two Killing vectors. In order to simplify the expressions obtained, we introduce the tetrad[Following <cit.>, we've introduced the symbol “.=”, whose meaning is “the tensorial object of the left-hand side is represented in Boyer-Lindquist coordinates by the components given in the right-hand side”. Similarly, we introduce the symbol “,=”, bearing the same meaning, but in the tetrad basis.] <cit.> e0μ .=(√(Δ/Σ), 0, 0, -asin^2θ√(Δ/Σ)), e1μ .=(0, √(Σ/Δ), 0, 0), e2μ .=(0, 0, √(Σ), 0), e3μ .=(-a/√(Σ)sinθ, 0, 0, r^2+a^2/√(Σ)sinθ). We use lower-case Latin indices to denote tetrad components, e.g. eaμ=(e0μ,e1μ,e2μ,e3μ). The tetrad components V^a of any vector 𝐕 are given by V^a=eaμ V^μ. Tetrad indices are lowered and raised with the Minkowski metric η_ab=diag(-1, 1, 1, 1). We choose the convention ϵ^t r θφ=+1. In this tetrad basis, the Kerr Killing-Yano tensor and its dual take the elegant form 1/2 Y_ab x^a∧ x^b=acosθ x^1∧ x^0+r x^2∧ x^3, 1/2Y^*_ab x^a∧ x^b=r x^1∧ x^0+acosθ x^3∧ x^2. §.§ Known quasi-conserved quantities We now explicit the various quasi-invariants for the linearized MPTD equations on Kerr spacetime. We recall that the norms of the vector-variables μ^2=-p_a p^a, S^2≜ S_a S^a are conserved along the motion. In what follows, we will work with the dimensionful quantities ℰ_0, ℒ_0, 𝒦_0. They are related to the proper quantities E_0, L_0 and K_0 through ℰ_0=μ E_0, ℒ_0=μ L_0, 𝒦_0=μ^2 K_0. This is totally equivalent, since the “proper” versions of the constants of motion are obtained from the non-proper ones through the replacements p^μ→p̂^μ=p^μ/μ, S^μν→S^μν/μ, S^μ→S^μ/μ. §.§.§ Invariants generated by Killing fields From the existence of the two Killing fields ξ^μ and η^μ, one can construct two linear invariants, namely the energy ℰ≜ -𝒞_ξ and the projection of the angular momentum along the direction of the black hole spin, ℒ≜𝒞_η where 𝒞_ξ is defined in Eq. (<ref>). Their explicit expressions are given by <cit.> ℰ =ℰ_0+M/Σ^2[(r^2-a^2cos^2θ)S^10-2arcosθ S^32], ℒ =ℒ_0 + a sin^2θ/Σ^2[(r-M)Σ+2Mr^2]S^10+a√(Δ)sinθcosθ/ΣS^20 +r√(Δ)sinθ/ΣS^13 +cosθ/Σ^2[(r^2+a^2)^2-a^2Δsin^2θ]S^23. Here, ℰ_0 and ℒ_0 denote respectively the geodesic energy and angular momentum ℰ_0 =-ξ_μ p^μ=-p_t=√(Δ/Σ)p^0+asinθ/√(Σ)p^3, ℒ_0 = η_μ p^μ=p_φ= asin^2θ√(Δ/Σ)p^0+(a^2+r^2)sinθ/√(Σ)p^3. §.§.§ Linear Rüdiger quasi-invariant An elegant expression for 𝒬_Y (<ref>) is given through expressing the spin tensor components in the tetrad frame. Using Eq. (<ref>), we find that Rüdiger linear quasi-invariant is given by the simple expression 𝒬_Y=2(rS^10+acosθS^32). Linear Rüdiger invariant in Kerr spacetime §.§.§ Quadratic Rüdiger quasi-invariant An explicit expression for Rüdiger quadratic invariant (<ref>) is obtained by evaluating the three scalar products L_α L^α, S^α∂_α𝒵 and ξ_α p^α. First, the product L_α L^α reduces to the usual Carter constant 𝒦_0, L_α L^α = K_αβp^α p^β = 𝒦_0. Second, a direct computation reveals that the Killing vector ξ^α (<ref>) reduces to the timelike Kerr Killing vector: ξ^α.=(1, 0, 0, 0). The product ξ_α p^α is consequently equal to minus the geodesic energy ℰ_0: ξ_α p^α=-ℰ_0. Only the factor S^α∂_α𝒵 remains to be computed. 
The Killing-Yano scalar 𝒵 (<ref>) reads 𝒵=-arcosθ. The simplest expression for S^α∂_α𝒵 is provided through expressing the components of the spin vector in Boyer-Lindquist coordinates: S^α∂_α𝒵 =a(r sinθ S^θ - cosθ S^r). In terms of the spin tensor, we get the more involved expression S^α∂_α𝒵 = -3 a/μ√(Σ)(√(Δ)cosθ p^[0S^23]+r sinθ p^[0S^13]). Rüdiger's quadratic invariant (<ref>) consequently takes the form 𝒬_R=𝒦_0+2μa (rsinθS^θ-cosθS^r)+𝒬_Y ℰ_0. Rüdiger quadratic invariant in Kerr spacetime In summary, we have in our possession four quasi-constants of motion (in addition to μ^2 and 𝒮): ℰ, ℒ, 𝒬_Y and 𝒬_R that are conserved along the flow generated by the MPTD equations at linear order in 𝒮. They consequently form a set of four linearly independent first integrals for the linearized MPTD equations which, however, are not in involution, as will be later proven in Chapter <ref>. §.§ Looking for mixed-symmetry tensors Let us now address the question of the existence of a non-trivial mixed-symmetry Killing tensor, i.e. a tensor T_αβγ obeying the symmetries of a {2,1} Young tableau and obeying Eq. (<ref>), on Kerr spacetime. It is highly relevant because such the existence of such a tensor is in one-to-one correspondence with the existence of another independent quasi-conserved quantity that could, possibly, be in involution with the others first integrals and consequently leading the system to be integrable. All the computations mentioned below being very cumbersome, they will not be reproduced here but are encoded in a Mathematica notebook, which is available on simple request. From the symmetry in (βγ) alone, there are 4 × 10=40 components in T_αβγ in 4 spacetime dimensions but they are not all independent because 𝐓 is defined from 𝐍 which is antisymmetric in its first indices. From the cyclic identity (<ref>) we deduce the following: (i) The components of T_αβγ with all indices set equal to i=1, …, 4 is 0, T_iii=0 (4 identities); (ii) In the presence of two distinct components i,j=1, … , 4 we have 2T_iij+T_jii=0 (12 identities); (iii) In the presence of three distinct components i,j,k=1,… ,4 we have T_ijk+T_jki+T_kij=0 (4 identities). There are no further algebraic symmetries. There are therefore 20 independent components, which we can canonically choose to be the union of the sets of components 𝒯_2 (of order |𝒯_2 |=12) and 𝒯_3 (of order |𝒯_3 |=8) that are defined as 𝒯_2 ={ T_ijj | i,j=1,… , 4, j ≠ i }, 𝒯_3 = {T_123,T_213,T_124,T_214,T_134,T_314,T_234,T_324}. The constraints (<ref>) are 64 equations which can be splitted as follows: (i) 4 equations with 4 distinct indices T_i(jk;l)=0; (ii) 12 equations with 3 distinct indices of type T_i(ij;k)=0; (iii) 24 equations with 3 distinct indices of type T_i(jj;k)=0; (iv) 12 equations with 2 distinct indices of type T_i(jj;j)=0 and (v) 12 equations with 2 distinct indices of type T_i(ii;j)=0. Let us now specialize to the Kerr background and impose stationarity and axisymmetry, ∂_t T_αβγ=∂_φ T_αβγ=0. In that case, the 4 equations T_r(tt;φ)=0, T_r(φφ;t)=0, T_θ (tt;φ)=0, T_θ(φφ;t)=0 and the 6 equations T_t(φφ;φ)=0, T_r(φφ;φ)=0, T_θ(φφ;φ)=0, T_r(tt;t)=0, T_θ(tt;t)=0, T_φ(tt;t)=0 are algebraic, and can be algebraically solved for 10 out of the 20 variables. There are two additional combinations of the remaining equations that allow to algebraically solve for 2 further variables. After removing redundant equations, we can finally algebraically reduce the system of 64 equations in 20 variables to 13 partial differential equations in 8 variables. 
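The counting of 20 independent components performed above can be cross-checked numerically. The small sketch below (an illustrative aside, not part of the argument) constructs the linear map N_αβγ↦ T_αβγ=N_α(βγ) acting on the 24 independent components of a tensor antisymmetric in its first two indices, and verifies that its rank is 20, in agreement with the dimension n(n-1)(n+1)/3 of the {2,1} Young diagram for n=4:

```python
import numpy as np

n = 4
# Independent components of N_{[ab]c}: one parameter per (a<b, c), i.e. 6*4 = 24.
params = [((a, b), c) for a in range(n) for b in range(n) if a < b for c in range(n)]

def T_from_param(k):
    """Image of the k-th basis element of N under N_{abc} -> T_{abc} = N_{a(bc)}."""
    N = np.zeros((n, n, n))
    (a, b), c = params[k]
    N[a, b, c], N[b, a, c] = 1.0, -1.0
    return 0.5*(N + N.transpose(0, 2, 1))      # symmetrize the last two indices

# Linear map from the 24 components of N to the components of T.
A = np.array([T_from_param(k).reshape(-1) for k in range(len(params))]).T

print("independent components of T:", np.linalg.matrix_rank(A))   # -> 20
print("{2,1} Young diagram count n(n-1)(n+1)/3:", n*(n - 1)*(n + 1)//3)
```

The 4-dimensional kernel of this map is the totally antisymmetric part N_[αβγ], in agreement with the cyclic identity discussed above.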
Further specializing to the Schwarzschild background, we found the general solution to the thirteen equations. There are exactly two regular solutions which are both trivial mixed-symmetry tensors of the form (<ref>) with either ξ = ∂_t or ξ = ∂_φ. Note that if we relax axisymmetry, there are two further trivial mixed-symmetry tensors (<ref>) with ξ given by the two additional (3) vectors. For Kerr, we did not find the general solution of the 13 partial differential equations in 8 variables. However, we obtained the most general perturbative deformation in a of the 2-parameter family of trivial mixed-symmetry tensors of the form (<ref>) with ξ = ∂_t and ∂_φ, assuming stationarity and axisymmetry. We obtained that the most general deformation is precisely a linear combination of the two trivial mixed-symmetry tensors of the form (<ref>) with ξ = ∂_t and ∂_φ. Moreover, we also checked that there does not exist a consistent linear deformation of a linear combination of the two φ-dependent (3) Schwarzschild trivial mixed-symmetry tensors. Since we physically expect continuity of the quasi-conserved quantities between Kerr and Schwarzschild, we ruled out the existence of new stationary and axisymmetric quasi-conserved quantities of the MPTD equations. CHAPTER: SOLVING THE CONSTRAINTS: SECOND ORDER IN THE SPIN Second order in the spin In this chapter, we will look again at the conservation equations for the two Ansätze Eqs. (<ref>). This time, we will consider the full MPTD equations (<ref>) up to second order in 𝒮. We assume the force and torque terms to be generated by the presence of a spin-induced quadrupole term, given by Eq. (<ref>). We will heavily base ourselves on the quantities conserved at linear order that were discussed in the previous chapter, building our second order conserved quantities as perturbations of the former. Before starting, two important points shall be raised: first, by contrast to the linearized case, we will have to specialize to Kerr background to be able to work out explicit solutions to the conservation equations. As we will see, a crucial feature for the analytic solving of the problem is the existence of covariant building blocks for all Kerr tensorial quantities. In this formulation, all differential relations will reduce to algebraic identities. This will enable to turn the task of solving the constraint equations to a purely algebraic, enumerative problem. Second, the constraints will now explicitly depend upon the quadrupole coupling κ. This reflects the fact that the universality of the motion of spinning test bodies present at first order is now broken, since κ depends upon the nature of the body. Moreover, the value of the coupling κ will appear to be of prime importance: one will only be able to find generalizations of the constants 𝒬_Y and 𝒬_R if the body's quadrupole coupling is itself the one of a Kerr black hole, κ=1. We will proceed as follows. We start by developing the constraint equations for the existence of the conserved quantities in tensorial form in Section <ref>. In Section <ref>, we discuss covariant algebraic and differential relations amongst basic fields, “covariant building blocks,” characterizing the Kerr geometry, which play a central role in our solutions to the constraint equations. We use these to reduce the tensorial constraint equations to a system of scalar equations in Section <ref>, and we derive our solutions for the special black-hole case κ=1 in Section <ref>. 
In Section <ref>, we investigate the case κ≠ 1, for non-black-hole bodies such as neutron stars with spin-induced quadrupoles, concluding that there is no solution to the constraints for κ≠ 1. Finally, we summarize our findings in Section <ref>. § CONSTRAINT EQUATIONS: TENSORIAL FORMULATION In this section, we will apply Rüdiger's procedure to derive the tensorial constraint equations that should be obeyed for ensuring the conservation of the two quantities Eqs. (<ref>). From now on, we take the background spacetime to be the Kerr spacetime (M,a). §.§ Linear constraint Let us begin by looking again at the linear invariant (<ref>). Recall that it takes the form 𝒬^(1)≜ X_μ p^μ+W_μνS^μν with W_μν a skew-symmetric tensor. Applying Rüdiger's procedure, the conservation equation 𝒬̇^(1)=𝒪(𝒮^3) reduces to the following set of equations: [0,2]: ∇_μ X_νp̂^μp̂^ν=𝒪(𝒮^3), [1,2]: ∇_μ W^*_ανs^αp̂^μp̂^ν-1/2X^λ R^*_λνβρs^βp̂^νp̂^ρ=𝒪(𝒮^3), [2,2]: κ/2μX^λ∇_λ R_ναβρs^α s^βp̂^νp̂^ρ+Y_μνℒ^*μν=𝒪(𝒮^3), [2,4]: (∇_λ X_μ-2 W_λμ)(μ Dλν-ℒλν)p̂^μp̂^ν=𝒪(𝒮^3). We therefore have the following proposition: For any pair (X_μ,W_μν) satisfying the constraint equations (<ref>) and assuming the MPD equations (<ref>) are obeyed, the quantity 𝒬^(1) (<ref>) will be conserved up to second order in the spin parameter, i.e. 𝒬̇^(1)=𝒪(𝒮^3). The first two equations (<ref>)-(<ref>) arise at zeroth and first order in 𝒮, and are identical to Eqs. (<ref>) encountered at linear order. The conserved quantities 𝒞_X and 𝒬_Y found at linear order are therefore the possible candidates for being conserved at this order. We shall examine the two cases independently: * For 𝒞_X, Eq. (<ref>) is the only constraint which is not trivially vanishing; it also holds for this value of Y_αβ. Using the explicit form of the torque for the spin-induced quadrupole, one can prove that this equation is indeed obeyed regardless of the value of κ. * In the case X_μ=0, the constraint equations (<ref>) reduce to [1,2]: ∇_μ Y_ανs^αp̂^μp̂^ν=𝒪(𝒮^3), [2,2]: Y_μνℒ^*μν=𝒪(𝒮^3), [2,4]: W_λμ(μ Dλν-ℒλν)p̂^μp̂^ν=𝒪(𝒮^3). Eq. (<ref>) is unchanged and still enforces Y_μν to be a Killing-Yano tensor. The two additional conditions Eqs. (<ref>)-(<ref>) are more involved and will be discussed in Section <ref>. §.§ Quadratic constraint We now turn to the constraint equations for the quadratic invariants. It is useful to write the Ansatz for the quadratic quantity Eq. (<ref>) as an 𝒪(𝒮^2) correction added to Rüdiger's invariant 𝒬_R. In doing so, we do not lose generality, since 𝒬_R is the most general non-trivial (stationary and axisymmetric) quadratic invariant that can be built in Kerr spacetime. We therefore set 𝒬^(2)=𝒬_R+ 𝒬^quad where 𝒬_R≜ K_μνp^μ p^ν+L_μνρS^μνp^ρ, 𝒬^quad≜ M_αβγδS^αβS^γδ. Recall that Rüdiger's invariant 𝒬_R is defined by K_μν =YμλY_νλ, L_αβγ=2/3∇_[αK_β]γ+4/3ϵ_αβγδ∇^δ𝒵, with Y_μν the Kerr Killing-Yano tensor Eq. (<ref>). Moreover, the tensor M_αβγδ admits the same algebraic symmetries as the Riemann tensor M_αβγδ=M_[αβ]γδ=M_αβ[γδ]=M_γδαβ. The conservation conditions at zeroth and first order in 𝒮 are left unchanged by the presence of quadrupolar terms in the MPTD equations, and still give rise to Eqs. (<ref>) and (<ref>). The quadrupole terms in the MPD equations (<ref>) only contribute at quadratic order in 𝒮. Actually, we are left with only one constraint, which is of grading [2,3]. The derivation of this quadratic constraint is too long to be provided in the main text and can be found instead in Appendix <ref>. 
We have now demonstrated the following proposition: Any tensor N_αβγδ possessing the same algebraic symmetries as the Riemann tensor and satisfying the constraint equation [4∇_μ N_ανβρ-2κ∇_[αℳ^(1)_|μ|ν]βρ +κ(g_αμY_λν-g_μνY_λα)ξ_κ ^*Rλκβρ +(2κ Y_αμξ_λ+(2-κ)(Y_λμξ_α+Y_αλξ_μ)+3κg_αμ∇_λ𝒵) ^*Rλνβρ -3κ g_μν∇_λ𝒵 ^*Rλαβρ+(3κ-2)∇_μ𝒵 R^*_ναβρ]s^α s^βp̂^μp̂^νp̂^ρ!=𝒪( 𝒮^3), where[This notation ℳ^(1)_αβγδ will become clearer later on.] ℳ^(1)_αβγδ ≜ K_αλRλβγδ gives rise to a quantity 𝒬^(2)=𝒬_R+M_αβγδS^αβS^γδ, M_αβγδ≜ ^*N^*_αβγδ, which is conserved up to second order in the spin parameter for the MPD equations with spin-induced quadrupole (<ref>), i.e. 𝒬̇^(2)=𝒪(𝒮^3). Our next goal will be to find a way to disentangle the κ=1 and the κ≠ 1 problems. Without loss of generality, we set N_αβγδ≜ N^BH_αβγδ+(κ-1) N^NS_αβγδ. Because κ is a priori arbitrary, the constraint (<ref>) turns out to be equivalent to the two independent equations [4∇_μ N^BH_ανβρ-2∇_[αℳ^(1)_|μ|ν]βρ +(g_αμY_λν-g_μνY_λα)ξ_κ ^*Rλκβρ +(2 Y_αμξ_λ+(Y_λμξ_α+Y_αλξ_μ)+3g_αμ∇_λ𝒵) ^*Rλνβρ -3 g_μν∇_λ𝒵 ^*Rλαβρ+∇_μ𝒵 R^*_ναβρ]s^α s^βp̂^μp̂^νp̂^ρ =𝒪( 𝒮^3) and [4∇_μ N^NS_ανβρ-2∇_[αℳ^(1)_|μ|ν]βρ +(g_αμY_λν-g_μνY_λα)ξ_κ ^*Rλκβρ +(2 Y_αμξ_λ-(Y_λμξ_α+Y_αλξ_μ)+3g_αμ∇_λ𝒵) ^*Rλνβρ -3 g_μν∇_λ𝒵 ^*Rλαβρ+3∇_μ𝒵 R^*_ναβρ]s^α s^βp̂^μp̂^νp̂^ρ=𝒪( 𝒮^3). In what follows, we will refer to these two problems as, respectively, the “black hole problem” (κ=1) and the “neutron star problem” (κ≠ 1). Their resolutions are independent and will be addressed separately. Notice that the overall quasi-conserved quantity is given by 𝒬^(2)=𝒬_R+𝒬_BH+(κ-1) 𝒬_NS. The contributions 𝒬_BH and 𝒬_NS can be directly computed from the corresponding N_αβγδ tensor through Eq. (<ref>). § KERR COVARIANT FORMALISM: GENERALITIES In this section, we will show that the very structure of Kerr spacetime allows us to reduce the differential constraint equations (<ref>)-(<ref>)-(<ref>) to purely algebraic relations. It is then possible to find a unique non-trivial solution to the black hole constraint (<ref>), as will be demonstrated in Section <ref>. It also enables us to provide an algebraic way of solving the κ≠ 1 linear and quadratic problems (i.e. Eqs. (<ref>) and (<ref>), respectively), as will be discussed in Section <ref>. §.§ Covariant building blocks for Kerr In Kerr spacetime, the constraint equations (<ref>)-(<ref>)-(<ref>) can be fully expressed in terms of the basic tensors that live on the manifold (that is, the metric g_μν, the Levi-Civita tensor ϵ_μνρσ and the Kronecker symbol δ^μ_ν) and of three additional tensorial structures that we will refer to as Kerr's covariant building blocks: the timelike Killing vector field ξ^μ, the complex scalar ℛ≜ r+iacosθ and the 2-form N_αβ≜ -i G_αβμνl^μ n^ν. Here, l^μ and n^ν are the two principal null directions of Kerr given in Eq. (<ref>) and Gαβγδ is (four times) the projector Gαβγδ≜ 2δ^[γ_αδ^δ]_β-iϵαβγδ. Notice that we have the property N_αβ=2/ξ^2 ( ∇_[αℛξ_β]^*+i∇_[αℛξ_β]). The Killing-Yano and Riemann tensors can be written algebraically in terms of these objects: Y_αβ=-1/2ℛ N_αβ+c.c., R_αβγδ=M(3N_αβN_γδ-G_αβγδ/ℛ^3). Moreover, they obey the following closed differential relations, i∇_αℛ= N_αβξ^β, i∇_γ(ℛ N_αβ)=G_αβγδξ^δ, i∇_αξ_β=-M/2(N_αβ/ℛ^2-N̅_αβ/ℛ̅^2). All the derivatives appearing in the constraints can consequently be expressed in terms of purely algebraic relations between the covariant building blocks. §.§ Some identities Let us first derive some useful identities. Many of them can be found in <cit.>. 
We have the algebraic identities N_αβNβγ=-g_αγ, N_αβN̅^αβ=0. Notice that this first relation yields N_αβN^αβ=4. Both N_αβ and Gαβγδ are self-dual tensors: N^*_αβ=i N_αβ, ^*Gαβγδ=G^*αβγδ=i Gαβγδ. This leads to the relations ^*R_αβγδ=R^*_αβγδ=-M(3N_αβN_γδ-G_αβγδ/ℛ^3), N̅^*_αβ=-iN̅_αβ. Given the identities just derived, the only non-trivial contraction of the 2-form that can be written is h_μν≜ NμαN̅_να. It is a real, symmetric and traceless tensor: h_μν=h_(μν)=h̅_μν, h^μ_μ=0. Using the previous identities, one shows that 𝒵=-1/2(ℛ^2). This yields ∇_α𝒵=-(ℛξ^λ N_λα). The Killing tensor can be written as K_μν=1/2((ℛ^2)g_μν+ℛ^2h_μν). Its trace is simply K =2(ℛ^2). Other useful identities include N_λκGλκβρ=4N_βρ, N_λκG̅λκβρ=N̅_λκGλκβρ=0, N̅_λκG̅λκβρ=4N̅_βρ, h_αλNλβ=N̅_αβ. §.§ Basis of contractions Our goal is now to rewrite Eqs. (<ref>)-(<ref>)-(<ref>) as scalar (that is, fully-contracted) equations involving only contractions between the Kerr covariant building blocks and the dynamical variables s^α and p̂^α. We define 𝒮^2 ≜ s_α s^α, 𝒫^2≜ -p̂_αp̂^α, 𝒜≜ s_αp̂^α. We will naturally set 𝒫^2=1 at the end of the computation, but we find useful to keep this quantity explicit in the intermediate algebra. Since 𝒜 is unphysical, it will disappear from any physical expression but it will appear in intermediate computations. We further define the following quantities at least linear in either p̂^μ or s^μ, A ≜ N_λμξ^λp̂^μ, B≜ N_αμs^αp̂^μ, C≜ N_λαξ^λ s^α, D≜ h_λαξ^λ s^α, E ≜ -ξ_αp̂^α, E_s≜-ξ_α s^α, F≜ h_λμξ^λp̂^μ, G≜ h_αμs^αp̂^μ, H ≜ h_μνp̂^μp̂^ν, I≜ h_αβ s^α s^β. We further define: J ≜ξ^α (h_αβ+g_αβ)ξ^β.=2a^2sin^2θ/r^2+a^2 cos^2θ Notice that we can now use J as a Kerr covariant substitute for a. Because of the algebraic identities derived above, these scalars form a spanning set of scalars built from contractions among the Kerr covariant building blocks. Any higher order contraction between building blocks will reduce to a product of the ones provided in the above list with coefficients that may depend upon M, a and ℛ. The quantities A,B,C are complex while the others are real. Notice that we don't have to include ξ^2 in our basis of building blocks, since ξ^2=-1+2M(ℛ^-1). It can consequently be written in terms of the other quantities. In what follows, we will always consider quantities built from (<ref>), the complex scalar ℛ, the mass M and J. §.§ A ℤ_2 grading It is possible to further restrict the number of combinations of the previously defined building blocks appearing in the equations. In order to achieve this goal, let us define a ℤ_2 grading {·} as follows. We note that the determining equations for the covariant building blocks for Kerr from Eq. (<ref>) to Eq. (<ref>) are invariant under the following ℤ_2 grading: {g_αβ} ={M}={x^μ}={∇_μ}={ℛ} ={𝒵}={ G_αβγδ}={h_μν}={K_μν}=+1 {N_αβ} ={ξ^α}={Y_αβ}=-1. Further assigning {s^μ}={p^μ}=+1, we deduce that {A} ={C}={G}={H}={I}=+1, {B} ={D}={E}={E_s}={F}=-1. Since the constraints (<ref>), (<ref>) have grading +1, the odd quantities will have to be combined in pairs in order to build a solution to the constraint. We define the (s,p)^± grading of an expression as the s number of s^α and p number of p̂^α factors in the expression with the sign ± indicating the ℤ_2 grading. The complete list of the lowest s+p=1 and s+p=2 grading spanning elements is given in Table <ref>. The list of spanning elements of grading (s,p)^± for s+p ≥ 3 is obtained iteratively by direct product of the lower order basis elements. 
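To make this iterative construction concrete, the following bookkeeping sketch enumerates candidate monomials of a prescribed grading. It is illustrative only: the (s,p) contents and ℤ_2 signs are read off from the definitions given above, real and imaginary parts of the complex scalars are treated on the same footing, and the α/ω radial dressings as well as linear dependencies are ignored, so the output is not meant to reproduce the exact content of Table <ref>.

```python
from itertools import combinations_with_replacement

# (s, p, Z2 sign) of the basic scalars; real and imaginary parts of the complex
# ones (A, B, C) carry the same grading, and the alpha/omega radial functions
# carry grading (0,0)^+.  These assignments are read off from the definitions above.
basic = {
    'A': (0, 1, +1), 'B': (1, 1, -1), 'C': (1, 0, +1), 'D': (1, 0, -1),
    'E': (0, 1, -1), 'Es': (1, 0, -1), 'F': (0, 1, -1), 'G': (1, 1, +1),
    'H': (0, 2, +1), 'I': (2, 0, +1), 'calA': (1, 1, +1),
    'S2': (2, 0, +1), 'P2': (0, 2, +1),
}

def monomials(s_t, p_t, sign_t, max_factors=3):
    """Candidate monomials of grading (s_t, p_t)^{sign_t}, duplicates suppressed."""
    out = []
    for k in range(1, max_factors + 1):
        for combo in combinations_with_replacement(sorted(basic), k):
            s = sum(basic[c][0] for c in combo)
            p = sum(basic[c][1] for c in combo)
            sign = 1
            for c in combo:
                sign *= basic[c][2]
            if (s, p, sign) == (s_t, p_t, sign_t):
                out.append('*'.join(combo))
    return out

print(monomials(2, 1, +1))   # candidate terms of grading (2,1)^+
```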
For example, the independent real terms of grading (2,1)^+ are obtained from (2,0)^+ × (0,1)^+, (2,0)^- × (0,1)^-, (1,1)^+ × (1,0)^+ and (1,1)^- × (1,0)^- with duplicated elements suppressed. Of prime importance for solving the linear and quadratic constraint equations will be the elements of gradings (2,2)^+ and (2,3)^+. Their respective spanning sets contain 118 and 284 elements, which are explicitly listed in the Mathematica notebooks appended to this chapter, available at <https://github.com/addruart/generalizedCarterConstant>. §.§ The α-ω basis We note that the covariant building blocks all depend on ℛ through real and imaginary parts of expressions containing fractions of ℛ and ℛ̅. We find therefore natural to define the objects (n,p ∈ℤ and K=1,A,B,C,…,J or any combination of these objects): α_K^(n,p)≜(Kℛ̅^n/ℛ^p), ω_K^(n,p)≜(Kℛ̅^n/ℛ^p). They satisfy the following properties: α_iK^(n,p) =-ω_K^(n,p), ω_iK^(n,p)=α_K^(n,p), α_K̅^(n,p) =α_K^(-p,-n), ω_K̅^(n,p)=-ω_K^(-p,-n). Moreover, one has ℛ^2α_K^(n,p)=α_K^(n+1,p-1), (ℛ^2)α_K^(n,p)=1/2[α_K^(n,p-2)+α_K^(n+2,p)], α_ℛ^k K^(n,p)=α_K^(n,p-k), α_ℛ̅^kK^(n,p)=α_K^(n+k,p). The same properties hold with ω instead of α. Finally, α and ω are linear in their subscript argument with respect to real-valued functions. Let us denote ℓ^μ=p̂^μ or s^μ. Then, for any T≜ T_μ_1…μ_kℓ^μ_1…ℓ^μ_k, we define the operator ∇̂ as ∇̂T≜p̂^λ∇_λ(T_μ_1…μ_k)ℓ^μ_1…ℓ^μ_k. Making use of the identities ∇̂ℛ^n=inℛ^n-1A, ∇̂ℛ̅=-inℛ̅^n-1A̅, we get the following relations: ∇̂α_K^(n,p) =α_∇̂K^(n,p)+nω_KA̅^(n-1,p)+pω_KA^(n,p+1), ∇̂ω_K^(n,p) =ω_∇̂K^(n,p)-nα_KA̅^(n-1,p)-pα_KA^(n,p+1). We use dimensions such that G=c=1. Given the large amount of definitions just provided, we find useful to summarize the mass dimensions [·] of all quantities in order to keep track of the powers of the mass M that can arise. We have the following mass dimensions [∇_μ]=-1, [g_μν]=[ξ^α]=[N_αβ]=[G_αβμν]=0, [x^μ]=[M]=[ℛ]=[Y_αβ]=1 and [K_αβ]=[K]=2. We deduce [X]=0, [α_X^(n,p)]=[ω_X^(n,p)]=n-p, where X is any function of the set A,B,C,D,E,E_s,F,G,H,I,J defined in Eqs. (<ref>). § KERR COVARIANT FORMALISM: REDUCTION OF THE CONSTRAINTS §.§ Linear constraint with X^μ=0 We will now reduce the linear constraint equations in the case where Y_μν is the Kerr Killing-Yano tensor. Using the explicit form of ℒ_μν as defined in (<ref>) and expressing the quantity ℒ^*_μν Y^μν in terms of covariant building blocks we find after evaluation ℒ^*_μν Y^μν=0, and therefore the [2,2] constraint is automatically fulfilled. The [2,4] constraint can be rewritten -μ Y_αβp^[αD^β]*λp̂_λ+Y_αβp^[αℒ^β]*λp̂_λ=𝒪(𝒮^3). A direct computation shows that μ Y_αβp^[αD^β]*λp̂_λ =-3M(𝒜 H+𝒫^2 G)ω_B^(1,3), Y_αβp^[αℒ^β]*λp̂_λ =3κ M(𝒜 H+𝒫^2 G)ω_B^(1,3). Using these identities and defining κ≜ 1+δκ, the [2,4] constraint takes the very simple form -3Mδκ(𝒜H+𝒫^2G)ω_B^(1,3)=𝒪(𝒮^3). It is automatically fulfilled for the test body being a Kerr black hole, because δκ=0 in this case. However, if the test body is a neutron star, δκ≠ 0 and the [2,4] constraint is not obeyed anymore. Therefore, 𝒬_Y is not anymore a constant of motion at second order in the spin in the NS case. A way to enable the [2,4] constraint to be solvable in the neutron star case is to supplement the Ansatz for the conserved quantity with a term 𝒬^(1)_NS=δκ M_αβμγδS^αβS^γδp^μ. The conservation equation will then acquire a correction given by 𝒬̇^(1)_NS =δκ v^λ∇_λ(M_αβμγδS^αβS^γδp^μ) =δκp̂^λ∇_λ M_αβμγδS^αβS^γδp^μ+𝒪(𝒮^3) =4δκp̂^λ∇_λ N_αβμγδs^αp̂^β s^γp̂^δ p^μ+𝒪(𝒮^3) where N_αβμγδ=^⋆ M_αβμγδ^⋆. 
In our scalar notation, it corresponds to supplement the [2,4] constraint with a term 4δκ∇̂N, with N being of grading [2,3]. The constraint to be solved then takes the simple form ∇̂N=3M/4(𝒜 H+𝒫^2 G)ω_B^(1,3). It is useful to summarize the discussion by the two following statements: Rüdiger's linear invariant 𝒬_Y=Y_αβ^* S^αβ is still conserved for the MPTD equations at second order in the spin magnitude for spin-induced quadrupoles, i.e. 𝒬̇_Y=𝒪(𝒮^3) provided that δκ=0, i.e. if the test body possesses the multipole structure of a black hole. Any tensor N_αβμγδ possessing the algebraic symmetries N_αβμγδ=N_γδμαβ=N_[αβ]μγδ=N_αβμ[γδ] and satisfying the constraint equation ∇̂N=3M/4(𝒜 H+𝒫^2 G)ω_B^(1,3) will give rise to a quantity 𝒬^(1)=𝒬_Y+δκ M_αβμγδS^αβS^γδp^μ, M_αβμγδ= ^*N^*_αβμγδ which is conserved up to second order in the spin parameter for the spin-induced quadrupole MPTD equations (<ref>), i.e. 𝒬̇^(1)=𝒪(𝒮^3), regardless to the value taken by δκ. §.§ Quadratic constraint §.§.§ Some identities Before going further on, it is useful to notice that all the covariant building blocks combinations that will appear in our equations will not be linearly independent. Actually, a direct computation shows that 2B^2+2𝒜G+𝒫^2I-𝒮^2H=0, (𝒜 H+𝒫^2G)ω^(1,3)_C=(𝒜G+𝒫^2I)ω^(1,3)_A-(𝒜 F+𝒫^2 D)ω^(1,3)_B +(𝒜^2+𝒫^2𝒮^2)ω^(1,3)_A̅+(𝒜E+𝒫^2E_s)ω^(1,3)_B̅, ω^(1,3)_A̅B^2=-B^2ω^(1,3)_A-(𝒜F-EG+E_sH+𝒫^2D)ω_B^(1,3). Moreover, let us mention the identities ω^(0,k)_Kα_L^(n,p)=1/2[ω_KL^(n,p+k)-ω_K̅L^(n-k,p)], (ℛ^2)(K/ℛ^4)=1/2(ω^(0,2)_K+ω^(2,4)_K), ℛ^2(K/ℛ^4)=ω^(1,3)_K, which will be useful in the forthocoming computations. §.§.§ Reducing the ℳ^(1) contribution Our goal is here to compute the contribution DM≜ 2∇_[α|ℳ^(1)_μ|ν]βρs^α s^βp̂^μp̂^νp̂^ρ in some details, as a proof of principle of the computations to follow, which will not be developed in full details. Noticing the identity ∇_μ K_αβ=2ϵ_λρμ(αYλβ)ξ^ρ, we get ∇_μℳ^(1)_ανβρ =K_αλ∇_μ Rλνβρ-∇_ν𝒵 R^*_μαβρ+(Y_μνξ_λ+g_μν∇_λ𝒵+Y_λμξ_ν)R*λαβρ -(2ξ_ν Y_λα-2ξ_λ Y_να+g_αν∇_λ𝒵)R*λμβρ -(2g_μνY_κα-g_αν Y_κμ)ξ_λ R*λκβρ. Making use of Eq. (<ref>), one can show that DM =[2K_μλ∇_[αRλν]βρ-∇_ν𝒵 R^*_αμβρ-(2ξ_ν Y_λμ+g_μν∇_λ𝒵)R*λαβρ +(Y_λαξ_μ+ξ_α Y_λμ+g_μα∇_λ𝒵)R*λνβρ -(g_μαY_κν-g_μνY_κα)ξ_λ R*λκβρ]s^α s^βp̂^μp̂^νp̂^ρ. Using the various identities derived above, the relations of Appendix <ref> and performing some simple algebra, one can express this contribution in terms of linearly independent quantities as DM =M/4(𝒜^2+𝒫^2𝒮^2)(5ω^(0,2)_A+4ω^(1,3)_A̅+3 ω^(2,4)_A) +M/2(𝒜E+𝒫^2E_s)(ω^(0,2)_B-ω^(1,3)_B̅-3ω^(2,4)_B) -3M/2(2𝒜 F-EG+E_sH+2𝒫^2D)ω^(1,3)_B+9M/4ω^(0,2)_AB^2+15M/4ω^(2,4)_AB^2. §.§.§ The black hole constraint equation Making use of the notations introduced above, the constraint equation (<ref>) can be written as 4∇̂N^BH+DM-Υ=𝒪(𝒮^3), where Υ ≜[(g_αμY_λν-g_μνY_λα)ξ_κ ^*Rλκβρ+(2 Y_αμξ_λ+(Y_λμξ_α+Y_αλξ_μ)+3g_αμ∇_λ𝒵) ^*Rλνβρ -3 g_μν∇_λ𝒵 ^*Rλαβρ+∇_μ𝒵 R^*_ναβρ]s^α s^βp̂^μp̂^νp̂^ρ. Using the scalar basis introduced above and the identities (<ref>) yields Υ =M/2(𝒜^2+𝒫^2𝒮^2)[3ω^(0,2)_A+ω^(1,3)_A̅]+M/2(𝒜E+𝒫^2E_s)[8ω^(0,2)_B-3ω^(1,3)_B̅] +3M/2ω^(0,2)_AB^2+9M/2B^2ω^(1,3)_A-3M/2(𝒜F+𝒫^2D)ω^(1,3)_B. In summary, Eq. (<ref>) can be written as ∇̂N^BH=Υ_BH, with the source term Υ_BH=Υ-DM/4. 
§.§.§ The neutron star constraint equation Repeating the very same procedure, the neutron star constraint (<ref>) reduces to the scalar-like equation ∇̂N^NS=Υ_NS with source Υ_NS =-3M/16(𝒜^2+𝒫^2𝒮^2)(ω_A^(0,2)+ω_A^(2,4)+2ω_A̅^(1,3)) +3M/8(𝒜E+𝒫^2E_s)(ω_B^(0,2)+ω_B^(2,4))-15M/16(ω_AB^2^(0,2)+ω_AB^2^(2,4)) -15M/8ω_A̅B^2^(1,3)-3/4M(𝒜F+𝒫^2D)ω_B^(1,3). § SOLUTION FOR THE QUADRATIC INVARIANT IN THE BLACK HOLE CASE We will now try to find a quadratic conserved quantity for the δκ=0 case. This corresponds to find a solution to the black hole constraint equation (<ref>). In order to reach this goal, we will postulate an Ansatz for the fully-contracted quantity N appearing in the left-hand side of Eq. (<ref>) and then use the covariant building blocks formulation to constrain the Ansatz coefficients. Notice that we proceed here by postulating an Ansatz because we known from the start what the solution will look like; it allows to present in only a few pages a fully analytical derivation of the conserved quantity. However, if it has not been the case, we could have proceeded from scratch, by writing explicitly all the terms possessing the good grading to appear in the conserved quantity, and then fixing the coefficients by numerical evaluation. This procedure will be applied in the neutron star case, see next section for more details. §.§ The Ansatz Let us consider the following Ansatz N^BH_αβγδ≜∑_A=1^4 Λ_A N_αβγδ^(A) where Λ_A are arbitrary coefficients and where N_αβγδ^(A)≜ ^*ℳ^*(A)_αβγδ. The quantity ℳ^(1)_αβγδ has been defined in Eq. (<ref>), and we introduce ℳ^(2)_αβγδ ≜ YλαYσγR_λβσδ, ℳ^(3)_αβγδ≜ g_αγξ_βξ_δ, ℳ^(4)_αβγδ≜ g_αγg_βδξ^2. Using the identities derived in Appendix <ref>, one can show that the directional derivatives of the N^(A) are given by ∇̂N^(1) =-M/4(𝒜^2+𝒫^2𝒮^2)(ω^(0,2)_A+2ω^(1,3)_A̅+3ω^(2,4)_A) +M/2(𝒜E+𝒫^2E_s)(5ω^(0,2)_B-2ω^(1,3)_B̅+3ω^(2,4)_B) +9M/2B^2ω^(1,3)_A +3M/2(𝒜F-EG+E_sH+𝒫^2D)ω^(1,3)_B-9M/4ω^(0,2)_AB^2-15M/4ω^(2,4)_AB^2, ∇̂N^(2) =-M/4(𝒜^2+𝒫^2𝒮^2)ω^(0,2)_A+M/2(𝒜 E+𝒫^2 E_s)ω^(0,2)_B-3M/4ω^(0,2)_AB^2, ∇̂N^(3) =M/2[(𝒮^2𝒫^2+𝒜^2)ω^(0,2)_A+(𝒜 E+𝒫^2 E_s)ω^(0,2)_B], ∇̂N^(4) =M(𝒮^2𝒫^2+𝒜^2)ω^(0,2)_A. §.§ Solution to the constraint We will now look for a solution to the black hole constraint equation (<ref>) using the Ansatz (<ref>), i.e. we are seeking for specific values of the parameters Λ_A such that Eq. (<ref>) is fulfilled. More explicitly, one therefore requires ∑_A=1^4Λ_A DN^(A)-Υ_BH!=0. The left-hand side of this equation takes the form of a first order polynomial, homogeneous in the ten linearly independent elements (as it can be shown through a direct computation) ω^(0,2)_A, ω^(0,2)_B, ω^(0,2)_AB^2, ω^(1,3)_A, ω^(1,3)_B, ω^(1,3)_A̅, ω^(1,3)_B̅, ω^(2,4)_A, ω^(2,4)_B, ω^(2,4)_AB^2. Because all the combinations of these elements implied in the constraint equation are linearly independent, all the coefficients appearing in front of these expressions should vanish independently. All the terms do not appear in all the contributions, as depicted in Table <ref>. In order to fix the values of the Ansatz coefficients, let us proceed along the following sequence: * ω^(1,3)_A term: this contribution reads 3M(6Λ_1-3/2)B^2ω^(1,3)_A. One therefore requires Λ_1=1/4 . * ω^(1,3)_A̅, ω^(1,3)_B̅, ω^(1,3)_B, ω^(2,4)_A, ω^(2,4)_B and ω^(2,4)_AB^2 terms: their coefficients consistently vanish when (<ref>) is fulfilled. * ω^(0,2)_AB^2 term: this contribution reads 3M(-3Λ_1-Λ_2+1/4)ω^(0,2)_AB^2. Using (<ref>), this yields Λ_2=-1/2. 
* ω^(1,3)_A̅, ω^(1,3)_B̅ and ω^(1,3)_B terms: their coefficients consistently vanish when Eqs. (<ref>) and (<ref>) are fulfilled.
* ω^(0,2)_B term: this contribution reads M(𝒜E+𝒫^2 E_s)(10Λ_1+2Λ_2+2Λ_3-7/2)ω^(0,2)_B. Using (<ref>) and (<ref>), this yields Λ_3=1.
* ω^(0,2)_A term: this contribution reads M(𝒜^2+𝒫^2 𝒮^2)(-Λ_1-Λ_2+2Λ_3+4Λ_4-1/4)ω^(0,2)_A. Using Eqs. (<ref>), (<ref>) and (<ref>), this finally yields Λ_4=-1/2.
In conclusion, the Ansatz (<ref>) gives a coherent solution to the constraint equation (<ref>) only if Λ_1=1/4, Λ_2=-1/2, Λ_3=1, Λ_4=-1/2. More explicitly, it corresponds to setting M_αβγδ=g_αγ(ξ_βξ_δ-1/2g_βδξ^2)-1/2Yαλ(YγκR_λβκδ+1/2YλκR_κβγδ).
§.§ Uniqueness of the solution
We now address the uniqueness of the non-trivial solution (<ref>) to the constraint (<ref>) derived above. If one adds an additional piece to our Ansatz, it will satisfy a homogeneous equation, since all source terms have already been cancelled by the Ansatz. Demonstrating uniqueness of the non-trivial solution (<ref>) therefore amounts to proving that ∇_(μN_ν|(αβ)|ρ)=0 does not admit any non-trivial solution in Kerr spacetime. We call such a tensor field a Young tableau 2,2 Killing tensor. A trivial Killing tensor is defined as a Killing tensor which is given by a cross-product. Such a trivial Killing tensor would add to the quadratic conserved quantity a product of conserved quantities that are already defined. There is only one trivial Killing tensor of symmetry type 2,2, namely N_μναβ=Y_μνY_αβ, which corresponds to adding the product (𝒬_Y)^2 to the quadratic conserved quantity. We checked explicitly, by solving the partial differential equations analytically using a Mathematica notebook, that no such non-trivial tensor exists in a perturbative series expansion in a around a=0, assuming that it only depends upon r and θ.
§.§ Summary of the results
Let us summarize the results we have obtained about the quadratic invariants. Our discussion can be condensed into the following two propositions:
The quadratic invariant 𝒬^(2)_BH =𝒬_R+[g_αγ(ξ_βξ_δ-1/2g_βδξ^2) -1/2Yαλ(YγκR_λβκδ+1/2YλκR_κβγδ)]S^αβS^γδ (generalized quadratic invariant) is conserved for the MPTD equations at second order in the spin magnitude for spin-induced quadrupoles, i.e. 𝒬̇^(2)_BH=𝒪(𝒮^3), provided that δκ=0, i.e. if the test body possesses the multipole structure of a black hole. Here, 𝒬_R= K_μνp^μ p^ν+L_μνρS^μνp^ρ with L_μνρ given in Eq. (<ref>) is Rüdiger's quadratic invariant <cit.>.
Any tensor N_αβγδ possessing the same algebraic symmetries as the Riemann tensor and satisfying the constraint equation ∇̂N=Υ_NS, where the source term Υ_NS is given in Eq. (<ref>), will give rise to a quantity 𝒬^(2)=𝒬^(2)_BH+δκ M_αβγδS^αβS^γδ, M_αβγδ= ^*N^*_αβγδ which is conserved up to second order in the spin parameter for the spin-induced quadrupole MPTD equations (<ref>), i.e. 𝒬̇^(2)=𝒪(𝒮^3), regardless of the value taken by δκ.
We finally notice that Υ_NS_a=0=0 in the Schwarzschild case by explicit evaluation of (<ref>). A direct consequence is that the deformation of Rüdiger's quadratic invariant constructed in the black hole case is still quasi-conserved for arbitrary κ:
In Schwarzschild spacetime (a=0), the deformation of Rüdiger's quadratic invariant 𝒬_BH^(2) given in Eq. (<ref>) is still conserved for the MPTD equations up to 𝒪(𝒮^3) corrections for an arbitrary (κ∈ℝ) spin-induced quadrupole. Notice that the conservation does not hold for Rüdiger's linear invariant 𝒬_Y. The Kerr case (a ≠ 0) will be further discussed in Section <ref>.
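Before moving to the neutron star case, let us record an elementary cross-check of the coefficient fixing performed in the previous subsection. Stripped of their non-vanishing prefactors, the four conditions quoted in the bullet list form a small linear system whose unique solution reproduces the values of Λ_A given above (a minimal Wolfram Language sketch, not part of the appended notebooks):

```mathematica
(* The four coefficient conditions listed above (overall nonzero prefactors dropped),
   solved as a linear system in the Ansatz coefficients. *)
Solve[{6 L1 - 3/2 == 0,                    (* omega_A^(1,3) term *)
       -3 L1 - L2 + 1/4 == 0,              (* omega_(A B^2)^(0,2) term *)
       10 L1 + 2 L2 + 2 L3 - 7/2 == 0,     (* omega_B^(0,2) term *)
       -L1 - L2 + 2 L3 + 4 L4 - 1/4 == 0   (* omega_A^(0,2) term *)},
      {L1, L2, L3, L4}]
(* -> {{L1 -> 1/4, L2 -> -1/2, L3 -> 1, L4 -> -1/2}} *)
```

The system being square and non-degenerate, the solution is unique, consistently with the uniqueness discussion above.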
§ NEUTRON STAR CASE AROUND KERR: A NO-GO RESULT We summarize in Table <ref> the three constraint equations discussed previously. They all take the form ∇̂N=Υ, with Υ being of grading [s,p]^+. It implies that N should be of grading [s,p-1]^+. Let K_𝔞 be a basis of linearly independent and dimensionless functions build from the (manifestly real) functions 𝒜, 𝒫^2,𝒮^2, A, A,…, C, C,D,…,I. Given that N is dimensionless and given the structure of the source terms Υ, we propose the following Ansatz, N=∑_𝔞∑_(k,l)∈ℤ^2K_𝔞M^l(C_𝔞^(k,l)f_𝔞^(k,l)(J)α_1^(k,k+l)+D_𝔞^(k,l)g^(k,l)_𝔞(J)ω_1^(k,k+l)). Here, C_𝔞^k,l and D_𝔞^k,l are numerical coefficients and f_𝔞^k,l(J) and g_𝔞^k,l(J) are smooth functions of J. We can work with dimensionless quantities by first introducing the dimensionless variables r̃=r/M, ã=a/M. We notice that the K_𝔞's are left unchanged and do not depend anymore on M, whereas ℛ≜ Mℛ̃, with ℛ̃≜r̃+iã. This yields α^(n,p)_1=M^n-p(ℛ̅̃̅/ℛ̃)≜ M^n-pα̃^(n,p), ω^(n,p)_1=M^n-p(ℛ̅̃̅/ℛ̃)≜ M^n-pω̃^(n,p). Each derivative of the term present in the Ansatz scales as M^-1 times a manifestly dimensionless quantity. All the source terms appearing earlier can be written as Υ=M^-1Υ̃, with Υ̃ being an dimensionless quantity. This implies that Eq. (<ref>) reduces to N=∑_𝔞∑_(k,l)∈ℤ^2 K_𝔞(C_𝔞^k,lf_𝔞^k,l(J)α̃^(k,k+l)+D_𝔞^k,lg^k,l_𝔞(J)ω̃^(k,k+l)), which contain only terms that are explicitly independent of M. We can further define ∇̃= M ∇̂ the dimensionless derivative operator and the constraints take the dimensionless form ∇̃N = Υ̃. §.§ Perturbative expansion in a of the constraint equations Instead of addressing the non-linear problem in a we will perform a perturbative series in a. For any smooth function f of a, we define (f)_n≜[n]fa_a=0. The constraint equation then becomes an infinite hierarchy of equations (∇̃ N)_n=Υ̃_n, ∀ n≥ 0. Let us describe the n=0 and n=1 equations. Since J∝ a^2, the functions f_𝔞^k,l(J) and g_𝔞^k,l(J) do not contribute and can be set to one without loss of generality. §.§.§ n=0 equation Noticing the identities ω̃^(k,k+l)_a=0=α̃^(k,k+l+1)_A_a=0=α̃^(k-1,k+l)_A̅_a=0=0, α̃^(k,k+l)_a=0=r̃^-l, ω̃^(k,k+l+1)_A_a=0=-p_r r̃^-(l+1), ω̃^(k-1,k+l)_A̅_a=0=p_r r̃^-(l+1) and making use of Eq. (<ref>), the n=0 constraint becomes ∑_𝔞∑_(k,l)∈ℤ^2 C_𝔞^(k,l)[(∇̃K_𝔞)_0r̃^-l-l (K_𝔞)_0 p_r r̃^-(l+1)]=(Υ̃)_0. It does not depend on the terms involving ω's contributions. Moreover, denoting C^(l)_𝔞≜∑_k∈ℤC^(k,l)_𝔞, this equation can be further simplified to ∑_𝔞∑_l∈ℤ C^(l)_𝔞[(∇̃K_𝔞)_0-l (K_𝔞)_0 p_r r̃^-1]r̃^-l=(Υ̃)_0. §.§.§ n=1 equation Following an identical procedure and denoting D_𝔞^(l)≜∑_k∈ℤ(2k+l)D_𝔞^(k,l), the n=1 constraint equation can be shown to take the form ∑_𝔞∑_l∈ℤ{ C_𝔞^(l)[(∇̃K_𝔞)_1-(K_𝔞)_1lp_rr̃^-1]r̃^-l -D_𝔞^(l)[(∇̃K_𝔞)_0x+(K_𝔞)_0(p_θ-(l+1)p_rxr̃^-1)]r̃^-(l+1)}=(Υ̃)_1. §.§.§ Numerical evaluation Eqs. (<ref>) and (<ref>) have been numerically evaluated using Mathematica in order to try to fix the values of the coefficients C_𝔞^(l) and D_𝔞^(l) that would enable a possible solution to the neutron star cases. We have only looked for “polynomial” solutions to these equations, i.e. solutions for which the coefficients of the ansatz are non-vanishing only over a finite interval [l_min,l_max]. Given the size of the expressions involved, the only computationally reasonable solving method available to us was the following: let us denote N the number of terms present in the left-hand side of (<ref>) (resp. (<ref>)) for a given [l_min,l_max]. Eq. (<ref>) (resp. 
(<ref>)) was then evaluated N+1 times at different random values of its variables and parameters, resulting in a linear system of N+1 algebraic equations in N variables (the coefficients C_𝔞^(l) and D_𝔞^(l)) that was then solved using the built-in numerical equation solver of Mathematica. This procedure has been proof-tested by reproducing the coefficients corresponding to the black hole quadratic invariant from the source term Υ_BH. It has then been used to attempt to find a solution to both the linear and the quadratic neutron star problems, with [l_min,l_max]=[-10,100]. No solution has been found, ruling out a priori the existence of polynomial-type solutions. In the appended Mathematica notebooks[<https://github.com/addruart/generalizedCarterConstant>], the interval of l is reduced to [l_min,l_max]=[0,5] in order to reduce the computational time for the interested reader. The four notebooks related to this section are:
* : check of the numerical evaluation of the n=0 equation: reproduction of the black hole quadratic invariant from Υ_BH;
* : attempt at finding a polynomial solution to the n=0 equation for the neutron star linear problem (source term Υ_lin);
* : check of the numerical evaluation of the n=1 equation: reproduction of the black hole quadratic invariant from Υ_BH;
* : attempt at finding a polynomial solution to the n=1 equation for the neutron star quadratic problem (source term Υ_NS).
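To make the solving strategy concrete, here is a schematic toy version of the random-evaluation procedure. The ansatz basis and the source below are hypothetical stand-ins chosen purely for illustration; they are not the actual covariant building blocks treated in the notebooks:

```mathematica
(* Schematic toy version of the random-evaluation strategy described above.
   The ansatz basis and the source are hypothetical stand-ins. *)
ansatz[r_, x_] := c1/r + c2 x/r^2 + c3 x^2/r^3;
source[r_, x_] := 2/r - 3 x^2/r^3;
SeedRandom[7];
eqs = Table[
   With[{r = RandomInteger[{2, 20}], x = RandomInteger[{-9, 9}]/10},
    ansatz[r, x] == source[r, x]], {4}];   (* N + 1 = 4 random evaluations *)
Solve[eqs, {c1, c2, c3}]
(* expected output (for generic draws): {{c1 -> 2, c2 -> 0, c3 -> -3}};
   an empty result {} would signal that no solution of the assumed form exists. *)
```

An empty output would thus play the same role as the negative result quoted above for the neutron star problems.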
§ SUMMARY
We have completed our exploration of the conserved quantities for the MPTD equations (that is, the MPD equations endowed with the TD spin supplementary condition) in Kerr spacetime. Our discussion is graphically summarized in Fig. <ref>. Notice that by “conservation”, we really mean “quasi-conservation”, i.e. we only require the conservation equation to hold up to some given order in 𝒮. The magnitude 𝒮 of the spin is always conserved. At linear order in the spin 𝒮, the dynamical mass μ of the body is also conserved. Moreover, the three additional quantities conserved along geodesic motion (namely ℰ_0, ℒ_0 and 𝒦_0) can be deformed to construct quantities ℰ, ℒ and 𝒬_R which are still conserved. They are respectively given by Eqs. (<ref>) and (<ref>). Moreover, a new invariant 𝒬_Y appears, homogeneously linear in 𝒮. It is given by Eq. (<ref>). For historical reasons, 𝒬_Y and 𝒬_R are respectively referred to as the linear and the quadratic Rüdiger invariants. At second order in the spin magnitude and for a spin-induced quadrupole with black hole-type coupling (κ=1), the mass μ is no longer conserved, but one can still define a mass-like quantity μ̃ which is conserved, see Eq. (<ref>). Our other findings are as follows: (i) ℰ and ℒ are still constants of motion. This property can be shown to hold at any order of the multipole expansion of the equations of motion <cit.>; (ii) the linear Rüdiger invariant 𝒬_Y is still conserved; and (iii) there exists a deformation of Rüdiger's quadratic invariant, denoted 𝒬_BH^(2), which is also quasi-conserved. It is given by Eq. (<ref>). Finally, (iv) the conservation of the deformed quadratic invariant can only be extended to arbitrary coupling (κ≠ 1) around the Schwarzschild spacetime (a=0). All our attempts to find solutions to the constraint equations in the case of an arbitrary spin-induced coupling in generic Kerr spacetime have failed. Let us notice that, even if one were to succeed in solving them, the quasi-invariants obtained would not be of direct astrophysical interest. This is due to the fact that, except in the special case where the test body is itself a black hole, the spin-induced term is not the only contribution to the quadrupole. Tidal-type contributions will also arise, breaking the quasi-conservation obtained for the spin-induced quadrupole only.
The existence of these constants of motion has interesting consequences when we study the motion of spinning test bodies from the Hamiltonian perspective. First, as was the case for geodesics, the existence of these constants is strongly related to the separability of the Hamilton-Jacobi equation for spinning bodies. Second, having at our disposal a set of conserved quantities will enable us to address the status of integrability of the motion of spinning test bodies in Kerr spacetime. These topics will be treated in the final part of this thesis.
PART: Hamiltonian Description of Extended Test Bodies
A more comprehensive understanding of some properties of test body motion in Kerr spacetime can be achieved by turning to a Hamiltonian formulation of the problem. In particular, it will enable us to discuss the integrability properties of the motion and the relation between the conserved quantities derived in Part <ref> and the solution of the associated Hamilton-Jacobi equation. As was the case for geodesic motion (extensively described in Chapter <ref>), there are two main formulations of test body motion in curved spacetime. The first one is based on a 3+1 decomposition of the background spacetime and uses the coordinate time t for evolving the motion. Its associated phase space is 12-dimensional, and is spanned by the variables x^i,p_i,S^μν. Historically, this is the Hamiltonian formulation of test bodies that has been the most widely used in the literature, particularly because it is very convenient for performing post-Newtonian expansions of results obtained in the MPD framework. See e.g. <cit.> and references therein for more details. Even if this formulation is well-suited for numerous applications, it is not well adapted to some others. One of these is the study of the structure of the associated Hamilton-Jacobi equation, and the computation of the related spin-induced shifts in the orbital turning points and in the fundamental frequencies of the motion. However, this shortcoming can be overcome by constructing covariant Hamiltonians, whose time evolution is parametrized by the proper time of the body or by any other external time parameter. These covariant Hamiltonians will drive the evolution on a larger, 14-dimensional, phase space parametrized by (x^μ,p_μ,S^μν). The construction of such Hamiltonians, valid at all orders in 𝒮 but restricted to the pole-dipole MPD equations, was discussed for various spin supplementary conditions by W. Witzany, J. Steinhoff and G. Lukes-Gerakopoulos in <cit.>. They also discussed the construction of canonical coordinates on phase space. Their derivation is valid provided that an arbitrary covariant SSC of the form Eq. (<ref>) has been enforced. This part of the thesis is deliberately somewhat more sketchy than the previous ones, and presents some partial results and conjectures, leaving significant room for subsequent work. The text is structured as follows. Chapter <ref> introduces a phase space compatible with a covariant Hamiltonian description of test bodies in generic curved spacetime.
Both non-symplectic and symplectic coordinates systems are described, and the associated Poisson brackets algebras are derived. Finally, spin supplementary conditions viewed as constraints between the phase space variables are discussed from the action perspective. Chapter <ref> reviews and generalizes the method of <cit.> for building covariant Hamiltonians, by including quadrupole moment but restricting to quadratic order in 𝒮, as physical coherence of the multipole expansion requires. Using this technique, we present a fully covariant Hamiltonian which generates the MPD equations endowed with spin-induced quadrupole moment under the Tulczyjew-Dixon spin supplementary condition. To our knowledge, this result does not appear in the present form in the literature. The two final chapters of the thesis are devoted to applications of the Hamiltonian formalism. Chapter <ref> reviews the status of integrability of test body motion in both Schwarzschild and Kerr spacetimes, at linear and quadratic orders in the spin magnitude 𝒮. As claimed in <cit.> and numerically corroborated by <cit.>, we argue that MPD equations in Kerr spacetime are not integrable already at linear order in 𝒮, by explicitly computing the Poisson brackets between the maximal set of linearly independent conserved quantities found in Part <ref>. Finally, Chapter <ref> reviews Witzany's analysis of Hamilton-Jacobi equation for test bodies in Kerr spacetime <cit.>. Strictly speaking, the solution of the Hamilton-Jacobi equation is not anymore separable when spin has been turned on. However, at linear order in 𝒮, the terms breaking the separability are negligible everywhere in the phase space, but near the turning points of the associated zeroth-order geodesic motion. In this so-called “swing region” where it is separable (to be precisely defined in Eq. (<ref>)), the solution takes a form very similar to the one worked out by Carter for geodesic motion (cf. Chapter <ref>). In particular, Rüdiger's deformation of the Carter constant still plays the role of the separation constant of the Hamilton-Jacobi problem. The main advantage of this Hamilton-Jacobi formulation is that it allows to compute easily the spin-induced shifts in the orbital turning points and in the fundamental frequencies of the motion, with respect to a purely geodesic trajectory <cit.>. The chapter ends by discussing a partial solution at quadratic order in 𝒮, valid in the swing region of phase space, and by conjecturing the status of separability of the Hamilton-Jacobi problem at this order. CHAPTER: SYMPLECTIC FORMULATION This chapters aims to set the stage for the Hamiltonian description of spinning test bodies. Its goal will be twofold, namely (i) discussing the phase space structure and its associated symplectic structure (Poisson brackets algebra) and (ii) understanding which are the constraints the phase space variables are subjected to. The discussion is organized as follows: Section <ref> will discuss the structure of phase space for spinning test bodies, and how an associated Poisson brackets structure can be build. Section <ref> will review two ways of constructing symplectic coordinates on the phase space. Finally, we turn to the discussion of constraints, which is the subject of Section <ref>. In what follows, we start again from the abstract form of the action obtained in Chapter <ref>: S=∫λ(p_μ v^μ+1/2S_μνΩ^μν), where p_μ and S_μν denote the momenta conjugated to v^μ and Ω^μν. 
§ PHASE SPACE AND NON-SYMPLECTIC COORDINATES We will take the phase space ℳ of spinning test bodies to be spanned by the variables x^μ,p_μ,S^μν. It is therefore 14-dimensional, and has the simple structure ℳ≃ℝ^4×ℝ^4×ℝ^6. In this section, we aim to derive a symplectic structure on ℳ that is compatible with the Hamiltonian description of test bodies. In order to reach this aim, a guideline is to recall that one of the keypoints of Hamiltonian description is to replace the velocities q̇^𝔦 by the conjugate momenta P_𝔦≜Lq̇^𝔦. In terms of the coordinates q^𝔦 and of the momenta P_𝔧, Eq. (<ref>) implies that the Poisson brackets algebra is simply[From this point, we understand that all the Poisson brackets that are not explicitly written are vanishing.] q^𝔦P_𝔧=δ^𝔦_𝔧. In that case, we will say that q^𝔦, P_𝔦 are symplectic variables, or equivalently that the Poisson brackets algebra is in Darboux form. Our strategy for computing the Poisson brackets algebra between the usual phase space variables consists in two steps: (i) identify the variables that play the role of the coordinates q^𝔦 and of their associated velocities q̇^𝔦 in the Lagrangian formulation of test bodies introduced in Chapter <ref>. As we will see, it will allow us to find explicit expressions for the form of the canonically conjugate momenta. Subsequently, (ii) we will use our findings as well as the canonical algebra Eq. (<ref>) to obtain the standard Poisson brackets between the variables x^μ,p_μ and S^μν. §.§ Coordinates and velocities The identification of coordinates and velocities in the Lagrangian can be performed in two steps <cit.>: * Up to this point, we have written our Lagrangian action as S=∫λ L(v^μ,Ω^μν,g_μν,R_μνρσ,∇_α R_μνρσ,…). For a stationary background, the metric, the Riemann tensor and all its covariant derivatives are only functions of the spacetime coordinates x^μ. Therefore, the action above can be formally written S=∫λ L(x^μ,v^μ,Ω^μν). * Given an arbitrary background tetrad e Aμ, the worldline tetrad eAμ can be expressed as eAμ(λ)=(Λ^-1)BA(λ)eBμ(z(λ)), where the Lorentz transformation matrix[Recall that we dropped the underline in the background tetrad indices.] ΛAB has been introduced in Eq. (<ref>). As mentioned earlier, this matrix depends on six (reals) parameters. Denoting them collectively ϕ^I (I=1,…,6), the worldline tetrad can be seen as a function of the worldline position x^μ and of the Lorentz parameters ϕ≜ϕ^I, which depend both on λ: eAμ(λ)=eAμ(ϕ(λ),x(λ)). This implies that the rotation coefficients can be written as Ω^μν=e^Aμ(ϕ,x)(ϕ^IλeAνϕ^I(ϕ,x)+v^λ eAν,λ(ϕ,x))+Γ^ν_αβg^μαv^β. It follows that the rotation coefficients are only functions of x^μ, ϕ^I and its first derivative: Ω^μν=Ω^μν(x,ϕ,ϕλ). From the above discussion, our action can be written as S=∫λ L̅(x^μ,v^μ,ϕ^I,ϕ^Iλ). The notation L̅ has been introduced to make explicit the new functional dependence of the Lagrangian, which will be enlightening when computing the conjugate momenta. We also introduce the shortcut notation ϕ̇^I≜ϕ^Iλ. The configuration space of the system is spanned by the generalized coordinates 𝐪≜x^μ,ϕ^I and its action is explicitly written in terms of the coordinates 𝐪 and their associated velocities 𝐪̇: S=∫λ L̅(x^μ,v^μ,ϕ^I,ϕ̇^I,t). §.§ Conjugate momenta The next step is to write down the momenta conjugated to the variables 𝐪. As revealed by the above discussion, the Lagrangian can be seen either as a function L(x^μ,v^μ, Ω^μν,λ) or as a function L̅(x^μ,v^μ,ϕ^I,ϕ̇^I,t). 
These function must satisfy L(x^μ,v^μ, Ω^μν,λ)_Ω^μν=Ω^μν(x^μ,ϕ^I,ϕ̇^̇İ)=L̅(x^μ,v^μ,ϕ^I,ϕ̇^I,λ). Varying the two sides of this equation gives respectively δL̅=L̅x^μδ x^μ+P_μδ v^μ+L̅ϕ^Iδϕ^I+P_ϕ^Iδϕ̇^I with P_μ≜L̅v^μ, P_ϕ^I≜L̅ϕ̇^I. and δ L = Lx^μδ x^μ+Lv^μδ v^μ+(LΩ^μνδΩ^μν)_Ω^μν=Ω^μν(x^μ,ϕ^I,ϕ̇^̇İ) =(Lx^μ+LΩ^μνΩ^μνx^μ)δ x^μ+(Lv^μ+LΩ^μνΩ^μνv^μ)δ v^μ +LΩ^μνΩ^μνϕ^Iδϕ^I+LΩ^μνΩ^μνϕ̇^Iδϕ̇^I. Comparing the two variations and using the definition (<ref>) yields P_μ =p_μ+1/2S_μνΩ^μνv^μ, P_ϕ^I=1/2S_μνΩ^μνϕ̇^I. Using Eq. (<ref>), one can write, after some straightforward algebra P_μ =p_μ-1/2ω_μ ABS^AB, ω_μ AB≜eAλe_Bλ;μ P_ϕ ^I =1/2S_μνλ^AB_IeAμeBν, λ^AB_I≜Λ^-1CAΛ^-1^CBϕ^I. Notice that we have introduced the connection 1-forms ω_μ AB following the notation of Wald <cit.>. These quantities satisfy ω_μ AB=-ω_μ BA. In terms of the coordinates x^μ,ϕ^I and of their conjugate momenta P_μ,P_ϕ^I, the Poisson brackets algebra is simply of the Darboux form (<ref>), which reads explicitly x^μP_ν=δ^μ_ν, ϕ^IP_ϕ^J=δ^I_J. The expressions found for the canonically conjugate momenta are rather abstract, and more work will be required to end up with explicit formulae for these momenta, as well as to disentangle the independent variables. Nevertheless, this abstract formulation will already allow us to compute the Poisson algebra between the conventional phase space variables. §.§ Quasi-symplectic coordinates The Poisson algebra takes a particularly simple form if one choose the phase space coordinates to be x^μ,P_μ,S^AB, where S^AB=eAμeBνS^μν are the components of the spin tensor expressed in an arbitrary background orthonormal tetrad eAμ. For these coordinates, the only non-vanishing brackets are (see <cit.> and references therein) x^μP_ν =δ^μ_ν, S^ABS^CD =S^ACη^BD+S^BDη^AC-S^ADη^BC-S^BCη^AD. Poisson brackets for quasi symplectic coordinates Following <cit.>, the form of this algebra leads us to refer to these coordinates as quasi-symplectic coordinates: they behave as symplectic coordinates for the “position sector” of the phase space, covered by the variables x^μ and P_μ, but not for the “spin sector”, covered by S^AB. Notice however that the brackets between the tetrad spin tensor components are a realization of the 𝔰𝔬(1,3) (i.e. Lorentz) algebra, which originates from the definition of S^AB as being expressed in an orthonormal tetrad frame. The rest of this section will be devoted to the proof of the algebra Eq. (<ref>). We begin by deriving a couple of useful identities, following <cit.>. First, let us assume – without loss of generality – that the inverse of Eq. (<ref>) takes the form S^μν=eAμeBνρ^AB_I(ϕ) P_ϕ^I. Plugging this equation in Eq. (<ref>), we get P_ϕ^I(δ_IJ-1/2ρ^AB_Iλ_JABP_ϕ^J)=0, which yields ρ^AB_Iλ_JAB=2δ_IJ. Now, proceeding the other way around and plugging Eq. (<ref>) into Eq. (<ref>), we get S^αβeAμeBνe_Cαe_Dβ(η^ACη^BD-1/2ρ^AB_Iλ^CD_I)=0, which yields, since it must be valid for any antisymmetric spin tensor S^αβ, ρ_I^ABλ^CD_I=η^ACη^BD-η^ADη^BC. Finally, by differentiating directly the definition Eq. (<ref>) of λ_I^AB, we get the third identity λ^AB_Iϕ^J-λ^AB_Jϕ^I=λ^AC_IλJCB-λ^AC_JλICB. Combining the three identities (<ref>), (<ref>) and (<ref>) yields, after a bunch of algebra ρ^AB_Jρ_I^CDϕ^J-ρ^CD_Jρ_I^ABϕ^J=-ρ^AC_Iη^BD-ρ^BD_Iη^AC+ρ^AD_Iη^BC+ρ_I^BCη^AD, which can be recognized as another realization of the Lorentz algebra. We can now turn to the computation of the Poisson brackets between the components of the spin tensor. 
Since S^AB=ρ^AB_IP_ϕ^I, we get S^ABS^CD =-ρ^AB_Iρ^CD_Jϕ^Kϕ^KP_ϕ^IP_ϕ^J-(A↔ C,B↔ D) =-ρ^AB_Iρ^CD_Jϕ^IP_ϕ^J-(A↔ C,B↔ D) =(ρ^AC_Iη^BD+ρ^BD_Iη^AC-ρ^AD_Iη^BC-ρ_I^BCη^AD)P_ϕ^J =S^ACη^BD+S^BDη^AC-S^ADη^BC-S^BCη^AD. The second equality was obtained using the Poisson algebra Eq. (<ref>), whereas the third one has made use of the identity Eq. (<ref>). The vanishing of the others brackets between quasi-symplectic coordinates is straightforward to check, and we end up with the algebra announced in Eq. (<ref>). §.§ Non-symplectic coordinates From the quasi-symplectic Poisson algebra, we can descent to the algebra for the non-symplectic coordinates x^μ,p_μ,S^μν, which appear as the most “basic” coordinates for parametrizing the phase space, and which have been used in a wide range of works (see again <cit.> and references therein). In terms of these variables, the Poisson algebra reads x^μp_ν =δ^μ_ν, p_μp_ν =-1/2R_μναβS^αβ, S^μνp_κ =2Γ^[μ_λκS^ν]λ, S^μνS^ρσ =g^μρS^νσ-g^μσS^νρ+g^νσS^μρ-g^νρS^μσ. Poisson brackets for non-symplectic coordinates The derivation of this algebra is rather straightforward from the quasi-symplectic algebra Eq. (<ref>) and the definitions of the conjugate momenta Eq. (<ref>). It is however quite long and not particularly enlightening, and will therefore not be reproduced here. The only keypoint to mention is that the appearance of the Riemann tensor in Eq. (<ref>) comes from the possibility of writing the Riemann tensor solely in terms of the ω_μ AB, see Eqs. (3.4.20) and (3.4.21) of <cit.>. §.§ Casimir invariants From the Poisson algebra Eq. (<ref>), it is easy to prove that the two quantities 𝒮^2≜1/2S_ABS^AB, 𝒮_*^2≜1/8ϵ_ABCDS^ABS^CD have vanishing Poisson brackets with any variable x^μ, P_μ and S^AB. Their Poisson brackets with any phase space function F are therefore vanishing, 𝒮^2F=𝒮^2_*F=0. Taking F=H implies that they are conserved for the evolution of any dynamical system of Hamiltonian H evolving on ℳ endowed with the set of Poisson brackets Eq. (<ref>). § SYMPLECTIC COORDINATES We now turn to the construction of fully symplectic coordinates on the phase space. Since the quasi-symplectic coordinates are already symplectic for the position sector of the phase space, only the spin sector remains to be discussed. This sector is six dimensional, since it is parametrized by S^AB. Given the existence of the two Casimir invariants (<ref>), we are only missing two pairs of canonically conjugate coordinates for fully describing the spin sector. There exist currently two approaches for building such symplectic coordinates for the spin sector: the first one, due to W. Witzany, J. Steinhoff and G. Lukes-Gerakopoulos <cit.> amounts to parameterize in a clever way the Lorentz matrix (<ref>) encoding the spin degrees of freedom of the system. The procedure for deriving these symplectic coordinates will be discussed in details below, and is valid provided that a covariant spin supplementary condition of the form (<ref>) has been enforced. The second path has been recently introduced by P. Ramond <cit.>, and consists into explicitly solving the differential equations constraining the spin sector variables to be symplectic. He found variables which are linear combinations of Witzany's ones, but his derivation is more generic, since it is totally independent from any spin supplementary condition. §.§ Decomposition of the Lorentz matrix: Witzany et al. coordinates In this section, we will derive canonical coordinates for the spin sector of the problem following Witzany et al. <cit.>. 
We temporarily introduce again the underline notation for the background tetrad indices, since both background and object tetrads will be involved in the discussion. This construction restricts us to non-trivial spin tensors subjected to an arbitrary covariant SSC of the form Eq. (<ref>), thus possessing one timelike degenerate direction and one spatial non-degenerate one. This restriction on the form of the spin tensor allows us to tune the object tetrad so that
S_AB =
[ 0   0   0  0 ]
[ 0   0   𝒮  0 ]
[ 0  -𝒮   0  0 ]
[ 0   0   0  0 ].
We will now construct canonical coordinates in a very pedestrian way: dynamical variables (ϕ_i,χ^i) will be canonical coordinates for the spin sector of the problem provided that the spin term 1/2S_μνΩ^μν present in the action can be written as 1/2S_μνΩ^μν!=ϕ_iχ̇^i, where an overdot denotes a derivative with respect to λ. We will now exhibit such coordinates. First, remark that a straightforward computation yields 1/2S_μνΩ^μν=1/2S_A BΩ^A B=-1/2S_ABΛAAΛ^ABλ. In words, the dynamics of the spin sector of the problem can be entirely expressed in terms of the Lorentz matrices linking the body and the background tetrads. The second step is to make such a transformation explicit in terms of its six Lorentz parameters. To do so in a clever way, let us notice that the spin tensor (<ref>) is invariant with respect to rotations and boosts in the z direction. Therefore, two out of the six Lorentz parameters will play the role of gauge degrees of freedom, whereas the four others will be physically relevant. To take advantage of this situation, let us decompose our generic Lorentz transformation as follows <cit.>: Λ=R(α, n_z)B(v_z, n_z)B(u, n_ψ)R(-ϑ, n_ϕ). Here, the directions of the two rightmost boost and rotation are chosen to lie in the x-y plane, n_ψ≜(sinψ, cosψ, 0), n_ϕ≜(sinϕ, cosϕ, 0). All the parameters of the Lorentz transformations α, v_z, u, ψ, ϑ, ϕ are considered as depending on the time parameter λ. The explicit rotation and boost matrices are respectively given by
R(ϑ, u) =
[ 1  0  0  0 ]
[ 0  cosϑ+u_x^2(1-cosϑ)        u_x u_y(1-cosϑ)-u_z sinϑ   u_x u_z(1-cosϑ)+u_y sinϑ ]
[ 0  u_y u_x(1-cosϑ)+u_z sinϑ  cosϑ+u_y^2(1-cosϑ)         u_y u_z(1-cosϑ)-u_x sinϑ ]
[ 0  u_z u_x(1-cosϑ)-u_y sinϑ  u_z u_y(1-cosϑ)+u_x sinϑ   cosϑ+u_z^2(1-cosϑ) ]
and
B(v, u) =
[ γ        -γ v u_x         -γ v u_y         -γ v u_z ]
[ -γ v u_x  1+(γ-1)u_x^2     (γ-1) u_x u_y    (γ-1)u_x u_z ]
[ -γ v u_y  (γ-1) u_y u_x    1+(γ-1) u_y^2    (γ-1)u_y u_z ]
[ -γ v u_z  (γ-1) u_z u_x    (γ-1)u_z u_y     1+(γ-1)u_z^2 ]
with u=(u_x,u_y,u_z) a normalized spatial vector (u_x^2+u_y^2+u_z^2=1) and γ=1/√(1-v^2). Using the form of the spin tensor (<ref>), Eq. (<ref>) becomes 1/2S_μνΩ^μν=-𝒮Λ1AΛ^A 2λ. Plugging the explicit Lorentz transformation (<ref>) into this expression leads to 1/2S_μνΩ^μν=𝒮[α̇+(cosϑ-1)/√(1-u^2) ϕ̇+(1/√(1-u^2)-1)ψ̇]. The first term being a total derivative, it can be dropped from the action. By inspection of this expression, we therefore find that pairs of canonical coordinates covering the spin sector are given by (ϕ,A) and (ψ,B), where the conjugate momenta A and B are respectively given by A =𝒮(cosϑ-1)/√(1-u^2), B=𝒮(1/√(1-u^2)-1). As expected, the expressions above do not depend upon the gauge degrees of freedom α and v_z. In terms of the canonical coordinates, the components of the spin tensor in the background tetrad S_A B=ΛAAΛBB S_AB read S_0 1 =-𝒟[Acos(2ϕ-ψ)+(A+2B+2𝒮)cosψ], S_0 2 =𝒟[Asin(2ϕ-ψ)+(A+2B+2𝒮)sinψ], S_0 3 =2𝒟ℰcos(ϕ-ψ), S_1 2 =A+B+𝒮, S_1 3 =ℰsinϕ, S_2 3 =ℰcosϕ (spin tensor in Witzany's symplectic coordinates), where 𝒟≜-√(B(B+2𝒮))/2(B+𝒮), ℰ≜√(-A(A+2B+2𝒮)). The relations (<ref>) can be inverted to obtain A, B, ϕ and ψ as functions of the background tetrad components of the spin tensor.
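As a quick numerical sanity check of the explicit matrices displayed above (a minimal sketch, independent of the thesis notebooks), one can verify that R(ϑ,u) and B(v,u), and hence any product of them, satisfy the defining Lorentz property Λ^T η Λ = η:

```mathematica
(* Minimal check that the rotation and boost matrices above are Lorentz matrices. *)
eta = DiagonalMatrix[{-1, 1, 1, 1}];
rot[th_, {ux_, uy_, uz_}] := With[{c = Cos[th], s = Sin[th]},
   {{1, 0, 0, 0},
    {0, c + ux^2 (1 - c), ux uy (1 - c) - uz s, ux uz (1 - c) + uy s},
    {0, uy ux (1 - c) + uz s, c + uy^2 (1 - c), uy uz (1 - c) - ux s},
    {0, uz ux (1 - c) - uy s, uz uy (1 - c) + ux s, c + uz^2 (1 - c)}}];
boost[v_, {ux_, uy_, uz_}] := With[{g = 1/Sqrt[1 - v^2]},
   {{g, -g v ux, -g v uy, -g v uz},
    {-g v ux, 1 + (g - 1) ux^2, (g - 1) ux uy, (g - 1) ux uz},
    {-g v uy, (g - 1) uy ux, 1 + (g - 1) uy^2, (g - 1) uy uz},
    {-g v uz, (g - 1) uz ux, (g - 1) uz uy, 1 + (g - 1) uz^2}}];
u = Normalize[{0.3, -0.7, 0.5}];
lam = rot[0.8, u].boost[0.6, u];    (* any product of such factors *)
Chop[Transpose[lam].eta.lam - eta]  (* -> 4 x 4 zero matrix *)
```

The same check applied to the full decomposition Λ=R(α,n_z)B(v_z,n_z)B(u,n_ψ)R(-ϑ,n_ϕ) works identically, since the Lorentz property is preserved under composition.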
Using these relations, one can check explicitly that we have indeed built canonical coordinates, since the only non-vanishing Poisson brackets are x^μP_ν=δ^μ_ν, ϕA=ψB=1. Notice that this parametrization can be checked to consistently imply 𝒮^2_*=𝐒·𝐃=0, which must be obeyed since a covariant SSC is assumed to hold.
§.§ Ramond's coordinates
In 2022, P. Ramond showed <cit.> that the spin tensor can be replaced (without any assumption regarding the existence of a spin supplementary condition) by the two Casimir invariants (<ref>) together with two pairs of symplectic coordinates (σ,π_σ) and (ζ,π_ζ). In terms of these coordinates, the background tetrad components of the spin tensor are <cit.> S_0 1 =Yπ_σsinζcosσ+Yπ_ζcosζsinσ+XZcosσ, S_0 2 =Yπ_σsinζsinσ-Yπ_ζcosζcosσ+XZsinσ, S_0 3 =Zπ_σ-XYsinζ, S_1 2 =π_σ, S_1 3 =Xsinσ, S_2 3 =Xcosσ (spin tensor in Ramond's symplectic coordinates), where X≜√(π_ζ^2-π_σ^2), Y≜√(1-𝒮^2/π_ζ^2-𝒮^4_*/π_ζ^2), Z≜𝒮^2_*/π^2_ζ. They are related to Witzany's coordinates by a simple affine map (σ,π_σ)=(ϕ,A+B+𝒮), (ζ,π_ζ)=(ψ-ϕ+π/2,B+𝒮) and carry a nice physical interpretation in terms of the respective orientations of the 3-vectors 𝐒 and 𝐃 and the spatial legs of the background tetrad frame; see <cit.> for explicit details.
§ CONSTRAINTS
This section will briefly discuss the constraints that must be taken into account when turning to the Hamiltonian description, and how they can be implemented at the level of the action. This is a rather technical subject, based upon results and ideas introduced in the context of geodesic motion in Chapter <ref>. We only review here the main ideas discussed in the literature. A technical exposition of the subject may be found in <cit.>. Two kinds of constraints must be enforced to ensure the independence of the conjugate momenta of the Hamiltonian formulation: the first one comes from the existence of redundant degrees of freedom in the spin sector and is directly related to spin supplementary conditions, as discussed in Section <ref>. It was first introduced in the case of a flat background in <cit.>, and subsequently extended to a generic curved background, see <cit.>. Moreover, the fact that the generic action described by Eq. (<ref>) is reparametrization invariant leads to the existence of a mass-shell constraint of the same type as in the geodesic case, as already pointed out in <cit.>. Altogether, these constraints can be implemented at the level of the action by considering S= ∫dλ( p_μ v^μ+1/2 S_μνΩ^μν- H_D) with H_D=χ^μ𝒞_μ +λ/2ℋ and 𝒞_μ ≜ S_μν(p̂^ν+Λ0ν), ℋ≜ p^2+μ^2. Here, χ^μ and λ play the role of Lagrange multipliers enforcing the constraints 𝒞_μ≈0 and ℋ≈ 0 on-shell. An important point to notice is that 𝒞_μ≈ 0 is not the realization of any specific SSC. Actually, Λ0A plays the role of a gauge field, whose fixing corresponds to a specific choice of a particular covariant SSC. Choosing a specific SSC thus corresponds to a gauge fixing. Examples of values of Λ0A leading to some of the most common SSCs are given in Table <ref>. When going beyond linear order in the spin magnitude 𝒮, one of the main differences arising with respect to geodesic motion is that the mass μ^2 appearing in the mass-shell constraint ℋ≈ 0 is no longer a constant, as was discussed in Section <ref>. For μ^2 depending on x^μ only through the metric and the Riemann tensor, we get back the quadrupole approximation of Chapter <ref>, with this time <cit.> J^μνρσ=3 p_α v^α/p^2μ^2R_μνρσ.
In the case of the spin-induced quadrupole together with the TD SSC, this mass-shell constraint can be rewritten in a more enlightening way. As discussed in Section <ref> of the present thesis, there exists a mass-like quantity μ̃ which is conserved in that case. From its definition Eq. (<ref>), it is straightforward to see that one can perform the rewriting μ̃^2 =-(g^μν-κ/μ^2Θ^αβ Rμαβν)p_μ p_ν+𝒪(𝒮^3). Therefore, at 2, one can replace the mass-shell condition defined above by g̃^μνp_μ p_ν≈-μ̃^2, where we defined g̃_μν≜ g_μν-κ/μ^2Θ^αβ R_μαβν. Several formal developments regarding the constrained formulation have been worked out in the literature. In the case of a flat background, J. Steinhoff showed that the constraints (<ref>) were first class among themselves for a wide class of functional dependence of μ^2 <cit.>. The relation between shifts of the representative worldline and the enforcement of covariant spin supplementary conditions viewed as gauge fixations was explored in <cit.>. More recently, P. Ramond discussed in quite details the first class nature of TD condition for a generic curved background up to linear order in the spin expansion <cit.>. In the following, we will remain at a more pedestrian level. The next chapter will be devoted to the construction of covariant Hamiltonians for extended test bodies endowed with a spin-induced quadrupole. We will work under the TD condition and explicitly check that the proposed Hamiltonians will preserve it under time evolution. A more careful analysis of the exact status of constraints at second order in the spin expansion remains to be performed. This task will be left for subsequent works. CHAPTER: GENERALLY COVARIANT HAMILTONIANS This chapter will be devoted to the construction of covariant Hamiltonians describing the motion of extended test bodies over the 14-dimensional phase space introduced in the previous chapter. We will work up to second order in the spin magnitude included and restricting to the spin-induced quadrupole term, under the Tulczyjew-Dixon spin supplementary condition. The Hamiltonians that we will derive will therefore drive the motion only on the constraint surface where this condition is enforced. The most rigorous approach for deriving an Hamiltonian would have been to apply a Legendre transformation to the Lagrangian Eq. (<ref>), while taking care of the associated constraints. Since this Lagrangian is reparametrization invariant, one can convince ourselves that we would have ended up with an identically vanishing Hamiltonian as it was the case for the reparametrization invariant formulation of geodesic motion. The associated Hamiltonian action principle would have therefore be only composed of the constraints, which would have been rather not well-suited to our subsequent purposes. The construction of the Hamiltonians presented in this chapter will therefore follow another path, based on the more heuristic approach of W. Witzany et al. <cit.>. The original derivation was only aimed at reproducing pole-dipole MPD equations, including all orders in the spin magnitude. However, as argued in <cit.>, such an exact treatment of the pole-dipole is physically inconsistent with respect to the multipole expansion of the test body structure, since the equations of motion expanded at order n must account for the presence of the 2^n-pole moment. The treatment of <cit.> is therefore physically consistent when linearized in the spin magnitude. 
Otherwise, the Hamiltonian must account for the presence of higher order multipole moments. The treatment of the spin-induced quadrupole will be the task that will be carried out in this chapter. The basic idea of Witzany et al. is to notice that the TD momentum-velocity relation Eq. (<ref>) was actually providing us with the partial derivative of the (unknown) Hamiltonian with respect to the linear momentum. If one manage to integrate this expression, we end up with a candidate expression for being the Hamiltonian driving the motion on the constraint surface. However, there is a priori no guarantee that the guessed Hamiltonian will reproduce correctly the MPD equations and preserves the SSC. This shall be checked a posteriori. Section <ref> will derive computationally practical sufficient conditions that shall be obeyed for the validity of the Hamiltonian to be granted. Section <ref> will be devoted to the derivation of a candidate Hamiltonian, whose validity will be checked in Section <ref>. The linearized Hamiltonian is finally discussed in Section <ref>, and we end the discussion with a couple of remarks. Notice that the procedure used in this chapter is quite generic, and can be extended to derive covariant Hamiltonians valid for other covariant spin supplementary conditions and/or including higher order multipole moments. § HAMILTON AND MPD EQUATIONS The Hamilton equations for some generic Hamiltonian H(x^μ,p_μ,S^μν) are directly obtained from the Poisson brackets algebra Eq. (<ref>), and are given by Eq. (36) of <cit.>: x^μλ=Hp_μ, p_νλ+Hx^ν-HS^μκ(Γ^μ_νγS^γκ+Γ^κ_νγS^μγ)=-1/2R_νωλχHp_ωS^λχ, S^γκλ+Γ^γ_νλS^λκ+Hp_ν(Γ^κ_νλS^γλ) =HS^μν(g^γμS^κν-g^γνS^κμ+g^κνS^γμ-g^κμS^γν). Here, λ stands for an arbitrary time parameter. A natural question to be asked is then: for which choice(s) of the Hamiltonian H will these equation reproduce the MPD equations? Comparing Hamilton equations (<ref>) with MPD equations (<ref>) and (<ref>), one finds that the former will reduce to the latter provided that the following conditions hold: Hx^ν-HS^μκ(Γ^μ_νγS^γκ+Γ^κ_νγS^μγ)≈ -Γ^α_βνHp_βp_α-ℱ^ν, HS^μν(g^γμS^κν-g^γνS^κμ+g^κνS^γμ-g^κμS^γν)≈ p^γHp_κ-p^κHp_γ+ℒ^γκ, S^μνp_νH≈ 0. The last condition originates from the fact that we will seek for an Hamiltonian valid for the TD SSC, which must be preserved under time evolution of the system. Recall that the weak-equality symbol “≈” present in the three equalities above is defined as follows: Two quantities F and G valued on the phase space are said to be equal “on-shell” (which is denoted by the weak equality F≈ G) when they are equal on the constraint surface, i.e. the submanifold of the phase space where the constraints present in the action principle have been enforced. In the present context, * the TD spin supplementary condition S^μνp_ν≈0 holds; * the mass shell condition p_μ p_ν g^μν+μ^2≈ 0 (or equivalently p_μ p_νg̃^μν+μ̃^2≈ 0) is enforced. Notice that here, we do not require μ to be a constant. As discussed in the previous part of this thesis, the invariant mass μ is conserved for the pole-dipole linearized MPD equation, but not for the spin-induced quadrupole MPD equations. In both cases, it amounts to enforce an additional algebraic constraint between the dynamical variables. § CONSTRUCTION OF THE HAMILTONIAN We will now build an Hamiltonian generating the MPD equations and including the spin-induced quadrupole term. From now on, we set the time parameter λ to the proper time of the body τ. 
Our starting point is the relation between the four-velocity and the linear momentum for the TD condition: v^μ≈1/μ[p^μ+(D^μν-1/μℒ^μν)p_ν]+𝒪(𝒮^3). We recall the identities Dμν ≈1/2μ^2S^μλR_λνρσS^ρσ, ℒ^μνp_ν ≈κ/μΠ^μνR_ναβγΘ^αβp^γ+𝒪(𝒮^4). Using Eq. (<ref>) and the relations above, the relation (<ref>) takes the explicit form H_0p_μ ≈1/μp^μ+1/2μ^3S^μλR_λναβS^αβp^ν -κ/μ^3RμαβγΘ^αβ p^γ-κ/μ^5p^μ p^ν R_ναβγΘ^αβp^γ +𝒪(𝒮^3). We have here denoted the Hamiltonian H_0 because, as we will see later, this construction will lead to an Hamiltonian that is vanishing on-shell. Now, still following <cit.>, for any arbitrary functions G^μ and F of the dynamical variables x^μ, p_μ and S^μν, the following relations hold on-shell: p_μ[1/2(g^αβp_α p_β+μ^2)F] ≈ F p^μ, p_μ[G_α p_β S^αβ] ≈ G_α S^αμ, p_μ[S^νλR_λραβS^αβp^ρ] =S^νλRλμαβS^αβ, p_μ[1/2p_ν p_ρ RναβρΘ^αβ] ≈ RμαβρΘ^αβp^ρ. Making use of these identities and treating μ as being independent of the dynamical variables, one can infer the following form for the Hamiltonian: H_0 =1/2μ[(g^μν+2D^μν-κ/μ^2(g^ρσp_ρ p_σ/μ^2+2)RμαβνΘ^αβ)p_μ p_ν+μ^2] +𝒪(𝒮^3). Actually, this expression shall in principle be supplemented by an integration constant depending upon the variables x^μ and S^μν. However, this constant can be set to zero since the Hamiltonian (<ref>) already satisfies the conditions for generating the MPD equations, as will be checked in the next section. The final step is to rewrite our candidate Hamiltonian in a way that allows to replace the dependence in the non-constant dynamical mass μ by the constant shifted mass-like quantity μ̃. This will allow huge simplifications while checking the equations of motion. A conventional Taylor expansion can be used to show that 1/μ=1/μ̃(1+κ/2μ̃^2p^αΘ^βγp^δ R_αβγδ)+𝒪(𝒮^3). Using this expression as well as Eq. (<ref>), we end up with H_0 =1/2μ̃[(g̃^μν+2D^μν-κ/2μ̃^2(g^ρσ p_ρ p_σ/μ̃^2+1)RμαβνΘ^αβ)p_μ p_ν+μ̃^2] +𝒪(𝒮^3). Recalling that p_μ D^μν≈ 0, we directly see that this expression does vanish on-shell, H_0≈ 0+𝒪(𝒮^3). Notice that it is possible to obtain an Hamiltonian which does not vanish on-shell by shifting H_0 by a constant quantity. This operation leaves the dynamics of the system invariant, since Hamilton's equations only depend on the derivatives of the Hamiltonian with respect to the dynamical variables. Choosing the value of the shift to be equal to -μ̃/2, we end up with a new Hamiltonian H≜ H_0-μ̃/2 equal to H=1/2μ̃[(g̃^μν+2D^μν-κ/2μ̃^2(g^ρσ p_ρ p_σ/μ̃^2+1)RμαβνΘ^αβ)p_μ p_ν]+𝒪(𝒮^3). On-shell, this expression simply reduces to H≈1/2μ̃g̃^μνp_μ p_ν≈-μ̃/2. The Hamiltonian then reduces to (minus one half of) the conserved mass μ̃, which is the exact same situation than in the geodesic case (where we had H̅≈-μ/2, μ being conserved along the motion), as was discussed in Chapter <ref>. § CHECK OF HAMILTON EQUATIONS We will now plug our candidate Hamiltonians in Eqs. (<ref>), which will enable to prove that they will indeed generate the MPD equations. Actually, since the terms of the equations that are (in)dependent of κ shall vanish independently, one can split these equations in two parts, one containing the terms proportional to κ (originating from the quadrupole terms of the EOMs) and the other the ones that are independent of it (originating from the pole dipole part of the EOMs). It is therefore relevant to split the Hamiltonian as H=H_PD+κ H_Q, with H_PD=H_κ=0 and H_Q=κ H_κ=0. We find H_PD =1/2μ̃(g^μν+2D^μν)p_μp_ν, H_Q =-1/4μ̃^3(g^ρσp_ρ p_σ/μ̃^2+3)RμαβνΘ^αβp_μ p_ν and the verification of Eqs. 
(<ref>) will be valid for both Hamiltonians. Using decomposition (<ref>), Eqs. (<ref>) split in two sets of three equations, one corresponding to the pole-dipole part and one to the quadrupole part: H_PDx^ν+2H_PDS^μρΓ^[μ_νλS^ρ]λ≈-Γ^μ_νρH_PDp_ρp_μ, H_PDS^μν[g^ρμS^σν+(usual permutations)]≈ p^ρH_PDp_σ-(ρ↔σ), 2S^μ[ρp^σ]H_PDS^ρσ-Γ^ν_λκS^μλH_PDp_κp_ν-S^μνH_PDx^ν-μ̃^2 DμνH_PDp_ν≈ 0, H_Qx^ν+2H_QS^μρΓ^[μ_νλS^ρ]λ≈-Γ^μ_νρH_Qp_ρp_μ-ℱ_ν, H_QS^μν[g^ρμS^σν+(usual permutations)]≈[p^ρH_Qp_σ-(ρ↔σ)]+ℒ^ρσ, 2S^μ[ρp^σ]H_QS^ρσ-Γ^ν_λκS^μλH_Qp_κp_ν-S^μνH_Qx^ν-μ̃^2 DμνH_Qp_ν≈ 0. We will now check these equations one by one. Pole-dipole equations. Let us begin with the pole dipole equations, and compute the relevant derivatives of H_PD. Two useful preliminary identities are ∂_ν g_αβ=2Γ^λ_ν(αg_β)λ, ∂_ν g^αβ=-2Γ^(α_νλg^β)λ. These equations are straightforward consequences of the metric compatibility condition ∇_ν g_αβ=0. Consequently, 2μ̃H_PDx^ν =(∂_ν g^αβ+1/μ̃^2S^αλ∂_ν RλβγδS^γδ)p_α p_β ≈∂_ν g^αβp_α p_β =-2Γ^α_νβp_α p^β, 2μ̃H_PDp_ρ =2(g^ρνp_ν+D^ρνp_ν+p_μ D^μρ) ≈ 2(p^ρ+D^ρνp_ν), 2μ̃H_PDS^μρ ≈1/μ̃^2p_μ R_ρσγδS^γδp^σ. These identities can now be used to check Eqs. (<ref>)-(<ref>): * Eq. (<ref>): one has 2μ̃ (LHS) ≈-2Γ^α_νβp_α p^β -2p_μΓ^μ_νλD^λσp_σ≈ 2μ̃ (RHS); * Eq. (<ref>): we obtain 2μ̃ (LHS) ≈ p^[ρD^σ]λp_λ≈ 2μ̃ (RHS); * Eq. (<ref>): a similar computation shows that 2μ̃ (LHS) ≈ 0 ≈ 2μ̃ (RHS). Quadrupole equations. In a similar fashion, we show that -4μ̃^3H_Qx^ν ≈ -2/μ̃^2Γ^ρ_νλp_ρ p^λ RαβγδΘ^βγ p_α p_δ+∂_ν RαβγδΘ^βγ p_α p_δ +2Γ^κ_νλS^βλSγκRαβγδΘ^βγ p_α p_δ, -4μ̃^3H_Qp_ρ ≈2/μ̃^2p^ρ RαβγδΘ^βγ p_α p_δ+2RρβγδΘ^βγ p_δ, -4μ̃^3H_QS^μν ≈ 2 RαμγδSγν p_α p_δ. These relations can be directly used to verify equations (<ref>)-(<ref>). The computation is quite straightforward, and we will not reproduce it here because it is not particularly enlightening. The only technical point involved shows up while checking (<ref>). It consists in noticing that ∂_ν RαβγδΘ^βγ p_α p_δ =(∇_ν Rαβγδ -2Γ^α_νλRλβγδ+2Γ^λ_νβRαλγδ)Θ^βγ p_α p_δ. The first term of the RHS of this equation will cancel with the force term ℱ_ν, whereas the two other will respectively cancel with other terms involving Christoffel symbols. Our candidate Hamiltonian therefore reproduces correctly the MPD equations endowed with the spin-induced quadrupole term and preserves the TD spin supplementary condition along time evolution. § LINEARIZED HAMILTONIAN When discussing the integrability of MPD equations in next chapter, it will be of prime importance to know which Hamiltonian can be used to generate the linearized pole-dipole motion. It is given by the linearized version of (<ref>), which is simply H_lin=1/2μg^μνp_μ p_ν≈-μ/2. Therefore, when expressed in the non-symplectic coordinates, the linearized MPD equations are generated by the same Hamiltonian than the geodesic motion, the only (but huge) difference lying in the phase space and the Poisson brackets algebra being considered. This difference can be made manifest in the Hamiltonian by turning to quasi-symplectic coordinates. In terms of these variables, the linearized Hamiltonian takes the form H_lin=1/2μ(g^μν P_μ P_ν+P^μeAλe_Bλ;μS^AB). This linearized Hamiltonian is standard and has already been extensively used in the literature, see <cit.> and references therein. § CONCLUDING REMARKS We are now in possession of a covariant Hamiltonian generating the MPD equations under the TD condition in the spin-induced quadrupole approximation. The Hamiltonian valid at quadrupole order is given in Eq. 
(<ref>), whereas its pole-dipole linearized version is provided in Eq. (<ref>). These Hamiltonians will be at the heart of the two applications discussed in the last chapters of this thesis: the discussion of the non-integrability of the MPD equations in Kerr spacetime and the investigation of the associated Hamilton-Jacobi equation.
CHAPTER: NON-INTEGRABILITY OF THE MPD EQUATIONS IN KERR SPACETIME
In this chapter, we will explore the first application of the covariant Hamiltonian formalism for extended bodies in Kerr spacetime: the discussion of (Liouville) integrability of the motion. Integrability of the MPD equations in Kerr and Schwarzschild spacetimes can be summarized by the following table <cit.>: The status of the proofs of the statements formulated in Table <ref> differs between the different cells: in Schwarzschild at linear order in 𝒮, the proof of integrability of the MPD equations has been provided in several works, both by using non-covariant Hamiltonians <cit.> and covariant ones <cit.>. This is the only case where Liouville integrability still holds when departing from geodesic motion. Still in Schwarzschild, but at quadratic order in 𝒮, the breaking of integrability is suggested by numerical studies, see <cit.> and references therein. However, an analytical study of the breaking of integrability in this case is still missing. Turning to Kerr spacetime, the breaking of integrability at linear order in the spin magnitude 𝒮 has been demonstrated both from a numerical perspective <cit.> and from an analytical one <cit.>. The status at second order simply follows from the non-integrability already present at first order. In this chapter, we will review the analytical argument leading to the conclusion that the linearized MPD equations in Kerr spacetime do not form an integrable system. In Part <ref> of this thesis, we have shown that – up to 𝒪(𝒮^2) corrections – the only non-trivial, polynomial quantities conserved for the linearized MPD equations were the dynamical mass μ and the four invariants ℐ_A =(ℰ, ℒ, 𝒬_Y, 𝒬_R). Using the symplectic structure introduced in Chapter <ref>, a direct computation of the Poisson brackets between these quantities shows that they fail to be in involution, since 𝒬_Y𝒬_R=1. This is the only non-vanishing bracket at linear order in 𝒮, and one can convince oneself that there is no hope of finding another combination of conserved quantities which are Poisson-commuting among themselves, thus suggesting that the linearized MPD equations are not integrable in Kerr spacetime. This chapter is organized as follows: Section <ref> will discuss the computation of the Poisson brackets between the conserved quantities ℐ_A, while Section <ref> briefly explores the consequences of perturbative non-integrability of a dynamical system.
§ NON-INTEGRABILITY OF THE LINEARIZED MPD EQUATIONS IN KERR SPACETIME
From the discussion of Chapters <ref> and <ref>, the linearized MPD equations endowed with the Tulczyjew-Dixon spin supplementary condition can be described by an N=5 Hamiltonian system, whose evolution is driven by the Hamiltonian H_lin=1/2μ(g^μν P_μ P_ν+P^μeAλe_Bλ;μS^AB)≈-μ/2. The counting of the independent degrees of freedom is advantageously performed using the symplectic coordinates introduced in Chapter <ref>. Without loss of generality, we will choose Witzany's coordinates for the discussion. We start from the full 14-dimensional phase space (N=7 Hamiltonian system).
Turning to symplectic coordinates shows that the motion can be parametrized by only 12 variables x^μ,P_μ,ϕ,A,ψ,B, two two extra degrees of freedom corresponding to the Casimir invariants 𝒮^2 and 𝒮^2_*. We are then left with a 12-dimensional phase space (N=6 Hamiltonian system). Moreover, expressing the TD spin supplementary condition in an adapted background tetrad frame (see Section <ref>) allows to rewrite it as S^0A=0 in that frame. Comparing this requirement with Eq. (<ref>) implies that, under the TD SSC, the Hamiltonian will not depend upon the coordinate ψ, which is thus cyclical. Its conjugated moment B is therefore conserved, and we end up with a final 10-dimensional phase space (N=5 Hamiltonian system), as announced. We now stand in a comfortable position for discussing the integrability of the linearized MPD equations in Kerr spacetime. Since the Hamiltonian H_lin≈-μ/2 is itself a constant of the motion, the system will be integrable only if the four independent constants of the motion given in Eq. (<ref>) are in involution: ℐ_Aℐ_B=0, ∀ A,B∈ 1,…,N. The results of the computations of these brackets are summarized in Table <ref>. We begin by deriving an useful result about Poisson brackets: let f(X) be an analytic function of some dynamical quantity X such that the coefficients f_n of the Taylor expansion f(X)=∑_n=0^+∞f_nX^n/n! are constant, and let Y be a dynamical quantity such that XY≠ 0. Then, from the Leibniz rule for Poisson brackets we have X^nY=nX^n-1XY and one has the chain rule property f(X)Y =∑_n=0^+∞f_n/n!X^nY =∑_n=1^+∞f_nX^n-1/(n-1)!XY =fXXY. This relation is easily generalized to any analytic function of n variables X^α (α=1,…,n): f(X^α)Y =fX^λX^λY. §.§ ℐ_Aℰ-type brackets §.§.§ ℐ_𝐀=ℒ Using the identities ∇_αξ^μ=Γ^μ_α t and ∇_αη^μ=Γ^μ_αφ, the bracket reads ℒℰ =p_tp_φ+1/2∇_αξ_βS^αβp_φ -1/2∇_αη_βS^αβp_t-1/4∇_αη_β∇_γξ_δS^αβS^γδ =-1/2R_tφαβS^αβ+(∇_αη_βΓ ^α_λ t-∇_αξ_βΓ ^α_λφ)S^λβ+∇_αη^λ∇_λξ_β S^αβ =-1/2R_tφαβS^αβ-∇_αη^λ∇_λξ_β S^αβ =0. The last equality follows from the fact that the axisymmetry of Kerr spacetime together with the definition of the Riemann tensor enforce the relation R_tφαβS^αβ =2Γ^α_tλΓ^λ_φβSαβ=-2∇_αη^λ∇_λξ_β S^αβ to hold. §.§.§ ℐ_A=𝒬_Y The axisymmetric character of Kerr spacetime allows to consider any background tetrad eAμ to be independent of t and φ. This yields p_t,φeAμ=-∂_t,φeAμ=0. Consequently, p_t,φS^AB=p_t,φS^μνeAμeBν. Using this last equation and the fact that 𝒬_Y commute with x^μ, we get 𝒬_Yℰ =p_t𝒬_Y+1/2∇_A ξ_BS^AB𝒬_Y (<ref>)=-4[r(Γ^[1_At-∇_A ξ^[1)S^0]A+acosθ(Γ^[3_At-∇_Aξ^[3)S^2]A] =0. §.§.§ ℐ_Â=𝒬_R Using the identity ℰ_0ℰ =1/2∇_αξ_βΓ^α_λ tS^λβ=-1/2∇_βξ_α∇_λξ^α S^λβ=0 and the chain rule (<ref>), one has 𝒬_Rℰ =2p_μ K^μνp_νℰ-2μS^α∂_α𝒵ℰ =-p_μ K^μν(2p_νp_t+p_ν∇_αξ_β S^αβ) +μ(2S^α∂_α𝒵p_t+S^α∂_α𝒵∇_μξ_ν S^μν). Finally, making use of the identity R_tβμνS^μν =∂_β(∇_μξ_ν)S^μν+2∇_αξ_νΓ^α_βλS^νλ, the two first Poisson brackets of this expression can be shown to cancel mutually, and we are left with 𝒬_Rℰ =1/2ϵ^αβγδS_γδ ∂_α𝒵[R_tβμν-∂_β(∇_μξ_ν)]S^μν =ϵ^αβγδ∂_α𝒵 S_γδ∇_ρξ_νΓ^ρ_λβS^νλ =𝒪(𝒮^2). §.§ ℐ_Aℒ-type brackets The computations are identical to the ℐ_Aℰ-type case, but with ξ^α→η^α. We consequently find 𝒬_Yℒ =0, 𝒬_Rℒ=𝒪(𝒮^2). §.§ ℐ_A𝒬_Y-type brackets The final bracket to be computed takes the form 𝒬_R𝒬_Y =2p_μ K^μνp_ν𝒬_Y-2μ ∂_α𝒵S^α𝒬_Y =-2p_μ K^μν(∂_ν Y^*_αβ S^αβ+2Γ^α_ρνS^βρY^*_αβ) -4ϵαβγδ∂_α𝒵 p_β Y*γνS^δν+𝒪(𝒮^2) =-2p_μ K^μν∇_ν Y^*_αβS^αβ-2ϵ^αβγδ∂_α𝒵 p_βϵ_γνρσY^ρσSδν+𝒪(𝒮^2) =4p_μ S^αβ(Kμαξ_β+YμαYλβξ_λ)+𝒪(𝒮^2). 
The bracket 𝒬_R𝒬_Y computed above is generally non-vanishing at order 𝒪(𝒮). After analysis, it is thus found that only the Poisson bracket 𝒬_Y𝒬_R is non-vanishing at order 𝒪(𝒮), as displayed in Table <ref>. The four linearly independent first integrals ℐ_A are consequently not in involution at the linear level, and the linearized MPTD equations do not form an integrable system in the sense of Liouville.
§ NON-INTEGRABILITY: IS THAT SO BAD? Contrary to geodesic motion, extended test body motion is no longer integrable in Kerr spacetime, even at linear order in 𝒮. As discussed in the introduction of this thesis, one of the main benefits of the integrability of geodesic motion was that it allowed one to turn to an action-angle formulation (see Chapter <ref>), thus making explicit the fundamental frequencies of the bounded motion, which can subsequently be used for building models of inspiralling self-forced motion <cit.>. Actually, since the integrability is only broken by a small non-integrable perturbation, we can expect that not so much has been lost! In particular, one should expect the very powerful result known as the Kolmogorov–Arnold–Moser (KAM) theorem to apply. The exact statement of this theorem is rather involved <cit.> and its description goes beyond the scope of this thesis. An intuitive formulation can be found in <cit.>, and takes the following form: If the bounded motion of an integrable Hamiltonian H_0 is disturbed by a small perturbation Δ H that makes the total Hamiltonian H=H_0+Δ H non-integrable and provided that: * the perturbation Δ H is small, and * the fundamental frequencies of H_0 are incommensurate, then the motion remains confined on an N-torus, except for a negligible set of initial conditions. In our case, H_0 would be the geodesic Hamiltonian while Δ H is the first-order correction induced by the spin. There is no rigorous proof that the KAM theorem applies to the system of MPD equations in Kerr, but typical features of KAM behaviour were numerically observed in Schwarzschild spacetime <cit.>. Such features are also expected to show up in Kerr spacetime. The main consequence of the KAM theorem is that the appearance of chaotic motion is confined to a negligible part of phase space, namely the one whose initial conditions lead some of the geodesic fundamental frequencies to be commensurate with one another (resonances). In the remaining part of phase space, the motion still exhibits a periodic-like behaviour, and one expects to still be able to compute the associated fundamental frequencies. Still in a perturbative spirit, these frequencies may be thought of as perturbations of the geodesic fundamental frequencies discussed in Chapter <ref>. Actually, the spin-induced corrections to the fundamental frequencies of the motion were computed by W. Witzany at first order in 𝒮 <cit.> by studying the Hamilton-Jacobi formulation of the spinning test body problem in Kerr spacetime, to which the next chapter is devoted. In principle, the knowledge of these frequency shifts would allow one to account for the finite-size nature of test bodies in evolution schemes describing self-forced EMRI motion, such as the two-timescale expansion <cit.> (see also <cit.>).
CHAPTER: HAMILTON-JACOBI EQUATION In this final chapter, we analyze the Hamilton-Jacobi equation for extended test bodies in Kerr spacetime and its relation to the constants of motion discussed in Part <ref> of the present thesis. As before, we restrict to the spin-induced quadrupole approximation and work under the TD spin supplementary condition.
Contrary to the geodesic case, the Hamilton-Jacobi equation for spinning test bodies is not separable in Kerr spacetime. Actually, W. Witzany showed <cit.> that, at linear order in the spin magnitude, the terms that break the separability can be neglected everywhere except near the radial and polar turning points of the associated (zeroth order in the spin) geodesic motion. In this so-called “swing region”, a perturbatively separable solution can be provided, and the separation constant arising in the analysis turns out to be precisely Rüdiger's extension of the Carter constant. Moreover, an explicit solution to the Hamilton-Jacobi equation valid both in the swing region and near the turning points can be built. This chapter is structured as follows: Section <ref> will derive the explicit form of the Hamilton-Jacobi equation. Section <ref> will discuss Witzany's solution at linear order in the spin magnitude. Section <ref> will discuss the interpretation of the separation constants in terms of the constants of motion of Part <ref>. Finally, Section <ref> will provide some preliminary results concerning the extension of the solution at second order in the spin, namely a form of an explicit solution in the swing region valid regardless of the value of the quadrupole coupling constant κ. Moreover, a conjecture concerning the expected form of the solution for the peculiar value of the coupling κ=1 will be proposed.
§ HAMILTON-JACOBI EQUATION IN THE SPIN-INDUCED QUADRUPOLE CASE The method for building the Hamilton-Jacobi equation has been reviewed in the geodesic case in Chapter <ref>. We shall first turn to symplectic coordinates. In order to stick to the original derivation <cit.>, we will choose Witzany's coordinates (x^μ,P_μ,ϕ,A,ψ,B). In terms of these variables, the Hamiltonian (<ref>) reads (again, we have dropped the underlines in the background tetrad indices) H =μ̃/2[ g^μν U_μ U_ν+U^μω_μ ABS^AB+1/4(ω_μ ABS^AB)^2 +κ(eAμs^A BR BνCDs^CD-(g^ρσU_ρ U_σ+2)RμABνθ^AB)U_μ U_ν+1] +𝒪(𝒮^3). In order to get rid of the constant μ̃ in the equations, we have introduced the rescaled quantities U_μ≜P_μ/μ̃, s^μν≜S^μν/μ̃, θ^μν≜Θ^μν/μ̃^2. We now stand in a comfortable position to write down the Hamilton-Jacobi equation. Here, the canonical coordinates are (x^μ,ϕ,ψ), and (U_μ,A,B) their respective conjugate momenta. The Hamilton-Jacobi equation is a first-order PDE for Hamilton's characteristic function W(x^μ,ϕ,ψ) (abbreviated as “the action” in the remainder of the chapter) which takes the explicit form 1+g^μνW_,μW_,ν+W^,μω_μ ABs^AB+1/4(ω_μ ABs^AB)^2 +κ[eAμs^ABRBνCDs^CD-(g^ρσW_,ρ W_,σ+2)RμABνθ^AB]W_,μ W_,ν=𝒪(𝒮^3). From here, we always understand all the occurrences of the spin tensor to be computed using Eq. (<ref>) together with the substitutions A→ W_,ϕ and B→ W_,ψ. These substitutions originate from the fact that, while deriving the Hamilton-Jacobi equation, one shall substitute the conjugate momenta with the derivatives of the action with respect to their respective coordinates. See Chapter <ref> for more details. We will now attempt to solve this equation perturbatively, order by order in the spin.
§ WITZANY'S SOLUTION AT FIRST ORDER We will now look at the Hamilton-Jacobi equation at first order in the spin, which has been worked out recently by Witzany <cit.>. At this order, Eq. (<ref>) reduces to 1+g^μνW_,μW_,ν+W^,μω_μ ABs^AB=𝒪(𝒮^2). As we will see, the status of the separability of this equation is rather subtle at this order in the perturbative development.
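At zeroth order in the spin, the equation above reduces to the geodesic mass-shell identity 1+g^μνW^(0)_,μW^(0)_,ν=0, which is solved by Carter's separated action and around which the whole perturbative construction of this chapter is built. The following minimal numerical sanity check of this zeroth-order building block is a sketch of ours: the parameter values are arbitrary illustrative choices, G=c=1, and the standard Boyer-Lindquist form of the inverse Kerr metric and of the Carter-separated momenta is assumed (written in terms of θ rather than x=cosθ).

```python
import numpy as np

# Illustrative Kerr parameters and a sample phase-space point (G = c = 1)
M, a = 1.0, 0.7
r, th = 10.0, 1.1
E, L, Q = 0.97, 3.2, 4.5                      # energy, axial angular momentum, Carter constant

Sigma = r**2 + (a * np.cos(th))**2
Delta = r**2 - 2 * M * r + a**2

# Carter-separated momenta p_mu = W^(0)_{,mu} for a timelike geodesic of unit mass
K = Q + (L - a * E)**2
P = E * (r**2 + a**2) - a * L
R = P**2 - Delta * (r**2 + K)                                         # radial potential
Th = Q - np.cos(th)**2 * (a**2 * (1 - E**2) + L**2 / np.sin(th)**2)   # polar potential
p = np.array([-E, np.sqrt(R) / Delta, np.sqrt(Th), L])                # (p_t, p_r, p_theta, p_phi)

# Inverse Kerr metric in Boyer-Lindquist coordinates (t, r, theta, phi)
A = (r**2 + a**2)**2 - a**2 * Delta * np.sin(th)**2
g_inv = np.zeros((4, 4))
g_inv[0, 0] = -A / (Sigma * Delta)
g_inv[0, 3] = g_inv[3, 0] = -2 * M * a * r / (Sigma * Delta)
g_inv[3, 3] = (Delta - a**2 * np.sin(th)**2) / (Sigma * Delta * np.sin(th)**2)
g_inv[1, 1] = Delta / Sigma
g_inv[2, 2] = 1 / Sigma

# Zeroth-order Hamilton-Jacobi (mass-shell) identity: 1 + g^{mu nu} W_{,mu} W_{,nu} = 0
print(1 + p @ g_inv @ p)        # ~0 up to round-off
```

It is this identity, shifted by the spin-connection terms, that the first-order analysis below perturbs.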
§.§ Geodesic-adapted tetrad The first step is to chose a specific form for the background tetrad eAμ. We will use Witzany's “geodesic adapted" tetrad <cit.>, which has been independently studied by Marck in 1983 <cit.>. This tetrad is constructed as follows: its timelike leg is taken tangent to a geodesic congruence of parameters (E_c,L_c,K_c) (see Section <ref>), i.e. e_0 μ=u_cμ with u_ct=-E_c, u_cr=±_r√(R(r;E_c,L_c,K_c))/Δ, u_c x=±_x√(Θ_x(x;E_c,L_c,K_c))/1-x^2, u_cφ=L_c. The three spatial legs of the tetrad are defined as e_1μ =1/N_(1)(K_μν+K_c g_μν)u_c^ν, e_2μ =1/N_(2)(K_μν-K_c^(2)/K_cg_μν)Yνλu^λ_c, e_3μ =1/√(K_c)Y_μνu^ν_c, with K_μν ≜ Y_μλYνλ, K_c^(2) ≜ K_μνKνρu_c^μ u_c^ρ, N_(1)^2≜ K_c^(2)+K_c^2, K_c^(3) ≜ K_μνKνρKρσu_c^μ u_c^σ, N_(2)^2≜ K_c^(3)-(K_c^(2))^2/K_c and where Y_μν is Kerr's Killing-Yano tensor. A direct computation shows that this tetrad is indeed orthonormal. Moreover, it is defined up to the signs ±_r,x that shall be specified and depend on the specific geodesic that is followed. Finally, we notice that the tetrad is only well-defined away from the turning points of the geodesic congruence, where one has e_0 r=e_0θ=0. This observation will be essential for the following, and will enforce us to solve the problem two times, both near and away from the turning points of the congruence. A crucial property of this tetrad is the remarkable simple form of the projection of the connection 1-form onto its timelike leg. As for any orthogonal tetrad, one has of course ω_0AB≜e_AμeBμ;νe0ν=-e_BμeAμ;νe0ν=-ω_0BA, but the only non-vanishing component of this projection is given by ω_012=-ω_021=√(K_c)/Σ(P(E_c,L_c)/r^2+K_c+aL_c-aE_c(1-x^2)/K_c-a^2x^2). Recall that P(E,L)≜ E(r^2+a^2)-aL. As one can notice, except for the factor 1/Σ which is easy to get rid of, the RHS of this equation is separated with respect to the variables r and x. This fact, together with the experience we have gained while discussing the geodesic problem in Chapter <ref>, will enable us to provide efficiently a solution to the first order problem away from the radial and polar turning points of the congruence. §.§ Solution in the swing region Let us first formalize the idea of “being away from the turning points of the congruence”. Denoting respectively y_t (y=r,x) the turning points of the geodesic congruence, one has R(r)∝ (r-r_t), Θ_x(x)∝ (x-x_t) near the turning points. As detailed in next section, this will lead to a ∝(y-y_t)^-1/2 divergence in the spin connection terms involved in the Hamilton-Jacobi equation. In order to avoid this complication during a first time, we will seek a solution in the so-called swing region, defined by r-r_t≫𝒮, rx-x_t≫𝒮. In this region, the only terms of the Hamilton-Jacobi equation that will scale as powers of the spin parameters are those that involve explicitly occurrences of the spin tensor. We will look for a solution to the Hamilton-Jacobi equation that takes the form W^(1)=W^(0)+𝒪(𝒮). At first order, Eq. (<ref>) becomes simply 1+g^μνW^(1)_,μW^(1)_,ν+W^(0),μω_μ ABs^AB=𝒪(𝒮^2). To go further on, we set the constants of motion of the geodesic congruence to be close to the ones of the associated zeroth order geodesic motion, E_c-E_0∼ L_c-L_0∼ K_c-K_0∼𝒪(𝒮) and we choose the same signs ±_r,x for both the zeroth order action and the geodesic congruence. It allows to write W^(0)_,μ=u_cμ+𝒪(𝒮)=e_0μ+𝒪(𝒮). Using Eq. (<ref>) allows to rewrite the spin connection term as W^(0),μω_μ ABs^AB=2√(K_c)/Σ(P(E_c,L_c)/r^2+K_c+aL_c-aE_c(1-x^2)/K_c-a^2x^2) ×(W^(1sw)_,ϕ+W^(1sw)_,ψ+s)+𝒪(𝒮). 
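The consistency of the geodesic-adapted tetrad introduced above rests on two properties of the vector Y_μν u_c^ν entering e_3: it is orthogonal to u_c by antisymmetry of the Killing-Yano tensor, and its squared norm equals the Carter constant K_c of the congruence, so that e_3 is indeed a unit spacelike leg orthogonal to e_0. A quick numerical sanity check of these two facts is sketched below; the Boyer-Lindquist components adopted for Y_μν correspond to one standard convention and, like the parameter values, are illustrative assumptions of ours.

```python
import numpy as np

# Same illustrative setup as in the previous sketch (G = c = 1, Boyer-Lindquist coordinates)
M, a = 1.0, 0.7
r, th = 10.0, 1.1
E, L, Q = 0.97, 3.2, 4.5
Kc = Q + (L - a * E)**2                                   # Carter constant of the congruence

Sigma = r**2 + (a * np.cos(th))**2
Delta = r**2 - 2 * M * r + a**2
A = (r**2 + a**2)**2 - a**2 * Delta * np.sin(th)**2

g_inv = np.zeros((4, 4))                                  # inverse Kerr metric
g_inv[0, 0] = -A / (Sigma * Delta)
g_inv[0, 3] = g_inv[3, 0] = -2 * M * a * r / (Sigma * Delta)
g_inv[3, 3] = (Delta - a**2 * np.sin(th)**2) / (Sigma * Delta * np.sin(th)**2)
g_inv[1, 1] = Delta / Sigma
g_inv[2, 2] = 1 / Sigma

P = E * (r**2 + a**2) - a * L
R = P**2 - Delta * (r**2 + Kc)
Th = Q - np.cos(th)**2 * (a**2 * (1 - E**2) + L**2 / np.sin(th)**2)
u_low = np.array([-E, np.sqrt(R) / Delta, np.sqrt(Th), L])      # u_{c mu}
u_up = g_inv @ u_low                                            # u_c^mu

# Killing-Yano 2-form Y_{mu nu} in one standard Boyer-Lindquist convention
Y = np.zeros((4, 4))
Y[1, 0] = a * np.cos(th);                      Y[0, 1] = -Y[1, 0]
Y[1, 3] = -a**2 * np.cos(th) * np.sin(th)**2;  Y[3, 1] = -Y[1, 3]
Y[2, 0] = -a * r * np.sin(th);                 Y[0, 2] = -Y[2, 0]
Y[2, 3] = r * (r**2 + a**2) * np.sin(th);      Y[3, 2] = -Y[2, 3]

ell_low = Y @ u_up                       # ell_mu = Y_{mu nu} u_c^nu
ell_up = g_inv @ ell_low                 # ell^mu

print(ell_low @ u_up)                    # ~0 : orthogonality to e_0 = u_c
print(ell_low @ ell_up - Kc)             # ~0 : squared norm is K_c, so e_3 = ell / sqrt(K_c) is unit
```

The same vector will reappear below as the specific angular momentum entering the interpretation of the separation constant s_∥.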
Moreover, since we assume that the TD condition holds and given the choice of tetrad and the discussion of Section <ref> about SSCs in adapted tetrads, it is rather easy to convince ourselves that S^0A=𝒪(𝒮^2). Comparing this results with Eq. (<ref>), one notices that the explicit form of the spin tensor is independent of ψ, which is consequently a cyclic coordinate. In particular, one has W^(1)_,ψ=𝒪(𝒮^2), and the action is therefore independent of ψ at this order of the perturbative expansion. Mimicking Carter's procedure for the geodesic case, we therefore make the following Ansatz for the swing region action: W^(1)=-E_ t+L_φ+(s_∥-s) ϕ+w_1r(r)+w_1x(x). The separation constants are chosen to be labelled with subscripts `so' (spin-orbit). Plugging this Anstaz into the Hamilton-Jacobi equation yields aΣ(1+g^μνW^(1)_,μW^(1)_,ν) +2s_∥√(K_c)(P(E_c,L_c)/r^2+K_c+aL_c-aE_c(1-x^2)/K_c-a^2x^2)=𝒪(𝒮^2). A direct comparison of this equation with the one arising in the zeroth order computation allows us to use the very same procedure for the separation, with one additional term arising from the connection appearing in each separated part: (1-x^2)(w'_1x)^2 =Q_-x^2[a^2(1-E_^2)+L_^2/1-x^2] -2as_∥√(K_c)L_c-aE_c(1-x^2)/K_c-a^2x^2, Δ^2(w'_1r)^2 =-(K_+r^2)Δ+P^2(E_,L_) -2s_∥Δ√(K_c)P(E_c,L_c)/K_c+r^2. This provides the full first order solution in the swing region. For now, E_, L_, K_ and s_∥ are just some separation constants, and Q_ is defined in the same way than Q_0. Their physical meaning will be investigated in the last section of the present chapter. §.§ Solution near the turning points Near the turning points y_t of the geodesic congruence, the geodesic-adapted tetrad becomes ill-defined since some of its legs vanish. Nevertheless, this problem can be overcome in the following way: first, notice that the swing region solution remains valid even if we shift the constants E_c, L_c and K_c by an 𝒪(𝒮) quantity. Therefore, one can always choose these constants to avoid the turning points by an 𝒪(𝒮) distance. Nevertheless, the price to pay is that some other quantities in the Hamilton-Jacobi equation will scale with the spin parameter near the turning points. The various terms that acquire such a scaling are listed in Table <ref>. The main consequence of this new scaling behaviour is that Hamilton-Jacobi equation will now include some new terms. These corrections will break its separability. Nevertheless, a solution can still be provided, as we will demonstrate now. Explicitly, the equation to be solved becomes <cit.> 1+g^μνW^(1)_,μW^(1)_,ν+ω_0ABs^AB +∑_y=r,x[(W^(1)_,y-e_0 y)ω_y AB s^AB+1/4(ω_y ABs^AB)^2]g^yy=𝒪(𝒮^2). Since the first line of this equation is already solved by the swing region solution, it is reasonable to make the following Ansatz near the turning points: * The action behaves as in the swing region with respect to ϕ and ψ. Therefore, one can replace the occurrences of the spin tensor s^AB in the equations with s̃^AB≜ s^AB(A= s_1∥-s,B= 0). The only non-vanishing components up to 𝒪(𝒮^2) are s̃^12 =-s̃^21=s_1∥, s̃^13 =-s̃^31=√(s^2-s_1∥^2)sinϕ, s̃^23 =-s̃^32=√(s^2-s^2_1∥)cosϕ. * The action valid both near and far away from the turning points is assumed to take the form W^(1)=W^(1)+∑_y=r,xδ_y W^(1). Here, the corrections δ_y W^(1) are assumed to scale as 𝒮^2/√(y-y_t). They are therefore relevant in the turning points region, whereas they are only contained in higher-order terms in the swing regions (see Table <ref>). 
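Note, as an elementary consistency check of this Ansatz (a remark of ours), that the replacement s^AB→s̃^AB preserves the spin magnitude identically: since s̃^0A=0 up to 𝒪(𝒮^2), one has 1/2 s̃_AB s̃^AB=(s̃^12)^2+(s̃^13)^2+(s̃^23)^2=s_∥^2+(s^2-s_∥^2)(sin^2ϕ+cos^2ϕ)=s^2, independently of the value of the precession angle ϕ, as required for the substitution to be admissible at this order.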
Plugging this Ansatz into the Hamilton-Jacobi equation allows to reduce our problem to the following sets of two equations (δ_y W^(1)_,y)^2+2 S'_1yδ_y W^(1)_,y+ω_yAB s^AB(S'_1y+δ_y W^(1)_,y-e_0 y) +1/4(ω_yABs̃^AB)^2=0, for y=r,x. These equations are easily solved by quadrature for δ_y W^(1t)_,y. When the dust settles, we obtain δ_y W^(1)_,y=-w'_1y-1/2ω_yABs̃^AB±_y√((w_1y')^2+e_0 yω_yABs̃^AB). This solution can be integrated as δ_y W^(1)=-w_1y+∫(-1/2ω_yABs̃^AB±_y√((w_1y')^2+e_0 yω_yABs̃^AB)) y+C_y, where C_r=C_r(ϕ,x) and C_x=C_x(ϕ,r) are integration constant that will be set to zero in the continuation of this work. At the end of the day, we obtain the following form of the action at first order in the spin, which is valid both near and far away of the turning points of the congruence: W^(1) =-E_t+L_φ+(s_∥-s)ϕ + ∑_y=r,x∫(-1/2ω_yAB s^AB±_y√((w_1y')^2+e_0 yω_yAB s̃^AB))y Hamilton-Jacobi equation solution at first order in the spin § THE SEPARATION CONSTANTS We will now discuss the physical interpretation of the separation spin-orbit separation constants E_, L_, K_ and s_∥ that showed up in our discussion. As we will show, these constants are directly related to the constants of motion arising from the symmetries of the background geometry that were studied in Part <ref> of the present text. §.§ Spin-orbit energy and angular momentum First notice that, since W_,μ=U_μ=p̂_μ-1/2ω_μ ABs̃^AB, one has the cornerstone identity p̂_μ=W_,μ+1/2ω_μ ABs̃^AB. Using Eq. (<ref>) to express the quantities E_0 and L_0, we directly get the following relation for the spin-orbit energy and angular momentum: E_ =E_0+1/2ω_t ABs̃^AB, L_ =L_0-1/2ω_φ ABs̃^AB. A simple computation can show that these quantities are precisely equal to the conserved quantities for MPD equations related to the existence of Killing vector fields. Recall that, for any Killing vector ξ^μ of the background, the quantity 𝒞_ξ≜ξ_μp̂^μ+1/2∇_μξ_ν s^μν is conserved at any order of the multipole expansion. Let us now particularize this result to Kerr's timelike and axial Killing vectors ξ=∂_t,∂_φ. In this case, one has ∇_μξ_ν =g_νρ(∂_μξ^ρ+Γ^ρ_μσξ^σ) =Γ_νμσξ^σ, where the last equality follows from the fact that the components (∂_t)^μ=(1,0,0,0) and (∂_φ)^μ=(0,0,0,1) are constant in Boyer-Lindquist coordinates. Making use of this result and of identity (7.142) of <cit.> to write the Christoffel symbols in terms of the connection 1-forms, we get the final expression 𝒞_ξ=ξ_μp̂^μ-1/2ω_μ ABξ^μs̃^AB. Using this result to compare E=-C_∂_t and L=C_∂_φ with the spin-orbit separation constants (<ref>), we directly find that E=E_, L=L_. §.§ Spin projection s_∥ We now turn to the spin separation constant s_∥. One the one side, one can write s_∥ =s̃^12 =1/2ϵ^30ABs̃_AB =1/2e_3μe_0νϵ^μνρσs̃_ρσ =1/2√(K_c)Y_μλ u_c^λ u_cνϵ^μνρσs̃_ρσ. On the other side, Y_μλ u_c^λ u_cνϵ^μνρσs̃_ρσ =-Y^**_μλ u_c^λ u_cνϵ^μνρσs̃_ρσ =-1/2ϵ_μλαβϵ^μνγδY^*αβu_c^λ u_cνs̃_γδ =-Y^*_μνs̃^μν =-𝒬_Y, where 𝒬_Y is the Rüdiger's linear invariant discussed in Part <ref>. Comparing the two previous equations and still choosing K_c=K_0+𝒪(𝒮^2), we get s_∥=-1/2𝒬_Y/√(K_0)+𝒪(𝒮^3). Two remarks can be made before closing this discussion. First, let us define the specific angular momentum vector ℓ^μ≜ Y^μνu_cν. The terminology makes sense, since Carter's constant obeys K_0=ℓ_αℓ^α and can be roughly interpreted as some sort of “orbital angular momentum squared") (see e.g. <cit.>). Moreover, since 𝒬_Y=-2ℓ_α s^α. We get the enlightening expression s_∥=ℓ_αs̃^α/√(ℓ_αℓ^α)+𝒪(𝒮^2). 
The separation constant s_∥ can thus be understood as the projection of the spin vector s̃^μ onto the direction of the specific orbital angular momentum vector ℓ^μ. Second, a simple computation allows to show that the background tetrad components of the normalized spin vector are s̃^A=(0, √(s^2-s_∥^2)cosϕ, -√(s^2-s^2_∥)sinϕ, s_∥)+𝒪(𝒮^2) We therefore see that the cases s_∥=± s correspond to the spin being totally (anti-)aligned with the angular orbital momentum, since e3μ∝ℓ_μ. Moreover, the canonical coordinate ϕ can be viewed as a precession angle of the spin around this axis. §.§ Generalized Carter constant K_ The evaluation of the separation constant K_ is the most tedious, and will not be discussed in full details here, since the computations are not particularly enlightening. The basic idea consists, on the one side, to make use of Eqs. (<ref>) and (<ref>) to write a closed form expression for K_ in terms of r, x, p_r, p_x and of the constants of motion. On the other side, Rüdiger's quadratic invariant can be written 𝒬_R=K_0+2 s^α∂_α𝒵+E_0𝒬_Y. Computing explicitly the second term of this expression allow to show that the two constants do coincide <cit.>, K_=𝒬_R. The separation constant of the Hamilton-Jacobi equation at first order in the spin expansion is therefore precisely Rüdiger's quadratic invariant. To summarize the discussion, all the constants appearing in the swing region separation of Hamilton-Jacobi equation at linear order in the spin magnitude 𝒮 are in direct correspondence with the conserved quantities found by explicitly solving the conservation constraint equations of Part <ref>. § SOLUTION AT SECOND ORDER IN THE SWING REGION Given the discussion of last section, it is natural to wonder what happens at second order in the spin. This section will provide partial results at this order, and we will end by conjecturing the expected relation between Hamilton-Jacobi formulation and conserved quantities of Part <ref>. At second order in 𝒮, Hamilton-Jacobi equation is given by the following mess 1+g^μνW_,μ^(2)W_,ν^(2)+W^(1),μω_μ ABs^AB+1/4(ω_μ ABs^AB)^2 +κ[eAμs^ABRBνCDs^CD-(g^ρσW^(0)_,ρ W^(0)_,σ+2)RμABνθ^AB]W^(0)_,μ W^(0)_,ν =𝒪(𝒮^3). The second line contains only terms involving occurrences of the Riemann tensor. Using the TD SSC s^0A=𝒪(𝒮^2), it can be simplified as follows: [eAμs^ABRBνCDs^CD-(g^ρσW^(0)_,ρ W^(0)_,σ+2)RμABνθ^AB]W^(0)_,μ W^(0)_,ν =[eAμs^ABRBνCDs^CD-(g^ρσe_0ρe_0σ+2)RμABνθ^AB]e_0μe_0ν+𝒪(𝒮^3) =R_0A0Bθ^AB+𝒪(𝒮^3). In order to solve this equation in the swing region, we mimic the first order turning region procedure. We make the following Ansatz W^(2)=W^(1)+∑_y=r,xδ_yW^(2) and still consider the replacement of the spin tensor by its tilded version. Moreover, we choose the congruence constants as E_c-E_0∼ L_c-L_0∼ K_c-K_0∼𝒪(𝒮^2) ⇔ W^(0)_,μ=e_0μ+𝒪(𝒮^2). Hamilton-Jacobi equation then becomes 1+g^μνW^(1)_,μW^(1)_,ν+W^(0),μω_μ ABs̃^AB +∑_y=r,x[(δ_y W^(2)_,y)^2+2w'_1yδ_y W^(2)_,y+F^(2)_y]g^yy=𝒪(𝒮^3). Here, F^(2)_y=F^(2)_y(ϕ,r,x) denote two functions that satisfy to the algebraic equation ∑_y=rxg^yyF^(2)_y=1/4(ω_μ ABs̃^AB)^2 +(W^(1),μ -W^(0),μ)ω_μ ABs̃^A B+κ R_0A0Bθ̃^AB. This equation is always solvable, and its simplest solution reads F^(2)_y=g_yy/2 (RHS), where `RHS' denotes the RHS of Eq. (<ref>). Nevertheless, this splitting is in principle tunable to improve the properties of the solution. Eq. (<ref>) is satisfied provided that, for both y=r and x: (δ_y W^(2)_,y)^2+2w'_1yδ_y W^(2)_,y+F_y^(2)=0. 
The solution of this second order algebraic equation is simply δ_y W^(2)_,y=-w'_1y±_y√((w'_1y)^2-F_y^(2)(ϕ,r,x)), which integrates as δ_y W^(2)=-w_1y±_y∫√((w'_1y)^2-F_y^(2)(ϕ,r,x)) y. Here, we have set the possible integration constants to zero. The full second order swing region solution is W^(2) =-E_ t+L_φ+(s_∥-s)ϕ +∑_y=r,x∫±_y√((w'_1y)^2-F_y^(2)(ϕ,r,x)) y. This solution is no more separated for generic values of κ. However, for the specific case κ=1, the existence of a deformation of Rüdiger's constant 𝒬^(2)_BH and the fact that – both at zeroth and first order – the constants K_0 and K_ play the role of separation constants of the associated Hamilton-Jacobi equations lead us to propose the following conjecture: In Kerr spacetime, at quadratic order in the spin parameter 𝒮, Hamilton-Jacobi equation Eq. (<ref>) endowed with TD spin supplementary condition and spin-induced quadrupole moment is separable in the swing region of phase space. The separated solution has the same form as at linear order, but the separation constant is now the quasi invariant 𝒬^(2)_BH given in Eq. (<ref>). All the others constants appearing in the solution (E_, L_ and s_∥) are left unchanged. On the technical level, the proof of this conjecture shall consist in the possibility of finding separated forms for the source functions of Eq. (<ref>), namely F^(2)_y(ϕ,r,x)=F^(2)_y(y). Identities such as (3.4.20) of <cit.> allow in principle to express the Riemann tensor only in terms of quadratic combinations of the connection 1-form and of its first covariant derivative. We believe that this fact will allow to dramatically simplify the RHS of Eq. (<ref>), the Riemann term then having the right κ=1 coefficient for simplifying with the quadratic connection term, yielding the separability property announced. This conjecture remains to be investigated in full details. Finally, notice that the knowledge of the solution to Hamilton-Jacobi equation allows to compute the spin-induced shifts in both the fundamental frequencies and the orbital turning points of the motion. This has been discussed by W. Witzany in <cit.>, up to first order in 𝒮. The computation of the corresponding shifts at 2 would first require a more complete understanding of the solution of Hamilton-Jacobi equation at quadratic order in 𝒮. This remains to be investigated in coming works. §.§.§ Solution near the turning points Near the turning points of the congruence, we need to perform a trick similar to the one used at first order in order to obtain a valid solution. Observe that, it one performs a shift in the congruence constants of order 𝒪(𝒮^1+ζ) (ζ∈]0,1[), the solution W^(2) remains unchanged and avoids the turning points of the congruence by a distance y-y_t∼𝒪(𝒮^1+ζ). This time, the congruence terms scale as e_Aλ;yeBλ∼𝒪(𝒮^-1+ζ/2) ⇔ e_Aλ;yeBλs^A B∼𝒪(𝒮^1-ζ/2) Notice that this choice of scaling for the congruence constants is tuned to both (i) keep the swing region solution valid and let it avoid the turning points without any modification of its expression and (ii) obtain congruence terms that still scale as a positive power of the spin. This time, we have to different types of modifications arising near the turning points: * Terms changing of scaling: the quadratic connection term 1/4(e_Aλ;μeBλs̃^A B)^2 scales now as 𝒪(𝒮^1-ζ), instead of 𝒪(𝒮^2) as it was the case in the swing region. This has no influence on the resolution of our problem. 
* New correction terms: we still assume s^A B→s̃^A B and we make the following Ansatz: W^(2)=W^(2)+∑_y=r,xδ_y W^(2), where we assume that δ_y W^(2)∼𝒪(𝒮^ξ), ξ>0. The Hamilton-Jacobi equation then becomes 1+g^μνW^(2)_,μW^(2)_,ν-W^(1),μe_Aλ;μeBλs̃^A B+1/4(e_Aλ;μeBλs̃^A B)^2+R_0 A 0 Bθ̃^A B +∑_y=r,xg^yy[(δ_y W^(2)_,y)^2+2δ_y W^(2)_,y W^(2)_,y-(δ_y W^(2)_,y+δ_y W^(2)_,y)e_Aλ;yeBλs̃^A B] =𝒪(𝒮^3). The first line corresponds to the swing region second order solution, whereas the second line leads to the following constraint for the correcting pieces: (δ_y W^(2)_,y)^2+2δ_y W^(2)_,y(W^(2)_,y-1/2e_Aλ;yeBλs̃^A B)-δ_y W^(2)_,ye_Aλ;yeBλs̃^A B=0. It is solved by δ_y W^(2)_,y =-W^(2)_,y+1/2e_Aλ;yeBλs̃^A B ±_y√((W^(2)_,y-1/2e_Aλ;yeBλs̃^A B)^2+δ_y W^(2)_,ye_Aλ;yeBλs̃^A B). At the end of the day, the second order solution valid both near and far away from the turning points reads W^(2) =-E_ t+L_φ+(s-s_∥)ϕ+∑_y=r,x∫ y[1/2e_Aλ;yeBλs̃^A B ±_y√((W^(2)_,y-1/2e_Aλ;yeBλs̃^A B)^2+δ_y W^(2)_,ye_Aλ;yeBλs̃^A B)]. conclusion tocpartConclusion and Outlook PART: *Conclusion and Outlook Our journey aiming at understanding the motion of extended test bodies in Kerr spacetime is about to end. Before closing definitively the discussion, let us briefly summarize the main outcomes of this thesis and the principal directions that remain to be explored. §.§.§ Summary of the main results The first part of this text was devoted to a review of the main features of geodesic motion in Kerr spacetime, including a discussion of Hamiltonian formulation and of classification of timelike geodesics. The geodesic equations are easily derived by solving the associated Hamilton-Jacobi equation, which turns out to be separable. They form a completely integrable system of equations, and the motion is characterized by four constants: the mass of the body, its energy and angular momentum and the Carter constant, the latter playing the role of the separation constant of the Hamilton-Jacobi problem. The integrable nature of motion allows to both (i) provide an action-angle formulation for radially bounded geodesic motion, thus making explicit its tri-periodic character and (ii) solve explicitly the equations of motion by quadratures. Moreover, radial and polar parts of the motion decouple one from the other, and can be solved independently. This allowed us to provide two original results: the classification of polar timelike geodesic motion in generic Kerr spacetime, and the one of radial motion for the near-horizon geodesics of high spin black holes. The next part was aimed at pedagogically reviewing standard results of the literature. We derived the MPD equations, which describe the motion of a finite size test body in curved spacetime, by modelling it as a worldline endowed with a collection of multipole moments accounting for its internal structure. This multipole expansion is consistent with an expansion of the motion in integer powers of the spin magnitude above a spinless, geodesic motion. Moreover, for bodies spinning at astrophysically realistic rates, it can be consistently truncated at any desired order. In this thesis, it has been chosen to work up to quadrupole order included, the presence of a quadrupole moment originating only from the proper rotation of the body. The explicit form of such a spin-induced quadrupole moment was explicitly derived. Moreover, it was shown that obtaining a closed system of equations required to enforced additional algebraic constraints, known as spin supplementary conditions. 
This amounts to specifying the worldline which is chosen for defining the body's multipole structure. Several spin supplementary conditions have been discussed, and the so-called Tulczyjew-Dixon condition (TD SSC) was argued to be the best suited for the continuation of the work. Still tackling the problem in a perturbative spirit, the question of finding conserved quantities for the motion of extended test bodies was addressed both at linear and quadratic order in the (conserved) spin magnitude in the third part of the text. Energy and angular momentum can be deformed to obtain quantities which are conserved at any order of the multipole expansion. Mass is conserved at linear order, and a mass-like conserved quantity can still be defined at quadratic order. Extending the Carter constant to spinning bodies is more subtle. Its generalization at first order is due to Rüdiger, who also unveiled the existence of another quantity, homogeneously linear in the spin. In Kerr spacetime, we demonstrated the uniqueness of this construction at first order in the spin. At quadratic order and under the spin-induced quadrupole approximation, it was shown that one can still find a generalized Carter-type constant and that the homogeneously linear invariant is still conserved provided that the quadrupole coupling takes the value corresponding to the test body being itself a Kerr black hole. For other values of the coupling, we argued that it was impossible to find polynomial deformations of the known conserved quantities in Kerr spacetime. The last part of the thesis was aimed at depicting some aspects of the Hamiltonian description of test body motion in curved spacetime. We first reviewed the phase space and the different coordinate systems (symplectic or not) adapted to a covariant Hamiltonian description of the problem. Expanding on the results known at pole-dipole level in the literature, we then presented the construction of a Hamiltonian reproducing the MPD equations endowed with a spin-induced quadrupole moment under the TD spin supplementary condition, which is valid up to quadratic order in the spin magnitude. Two applications of this formalism to Kerr spacetime were finally discussed. First, we argued that the MPD equations are no longer integrable in Kerr spacetime already at linear order, contrary to the geodesic ones. Second, we reproduced Witzany's analysis, which showed that all the conserved quantities previously found are in direct one-to-one correspondence with the constants appearing in the (almost) separable solution to the Hamilton-Jacobi equation at first order in the spin magnitude.
§.§.§ Outlook Various extensions of this work would be interesting to explore. Some of the main ones will be briefly detailed below. * Conserved quantities in Kerr spacetime At linear order in the spin magnitude, the occurrence of Einstein's tensor in the central identity (<ref>) that we derived suggests that Rüdiger's quadratic invariant will admit a generalization to the Kerr-Newman spacetime, which also admits a Killing-Yano tensor, once the Einstein tensor is replaced with the electromagnetic stress-energy tensor. This remains to be investigated. Moreover, even if we used the Tulczyjew-Dixon spin supplementary condition to derive the conserved quantities, the reasoning could in principle be translated to other choices of SSCs. At linear order in 𝒮, the transposition to the MP and KS conditions is trivial, since the momentum and the four-velocity are related through Eq. (<ref>) for any of these three conditions.
At quadratic order in 𝒮, the translation of our results from TD to other SSCs would in principle be facilitated by the formalism describing transitions between SSCs derived in <cit.>. It would also be enlightening to explore the relationship between the new constant 𝒬^(2)_BH for test (that is, non-backreacting) black holes in Kerr spacetime and those for arbitrary-mass-ratio binary black holes at second-post-Newtonian order (including the spin-induced quadrupole effects) found by Tanay et al. in Ref. <cit.>, ensuring the integrability of the system at that order. Finally, the covariant building blocks formalism introduced in the derivation may be applied for solving others problems in Kerr spacetime. It is very powerful, since it turns all differential identities into algebraic ones, and consequently allows for a purely enumerative approach for solving the original problems. Moreover, if the structure of the 4-dimensional Kerr covariant building blocks can be generalized to higher dimensions and more generic spacetimes, it could bring new insights on hidden symmetries of such spacetimes <cit.>. * Higher multipoles One could wonder whether deformations of the conserved quantities that have studied in this thesis still exist at higher orders in the multipole expansion for black hole-type couplings. The structure of the spin-induced octupole moment has already been well established <cit.>, but the very form of higher order spin-induced moments as well as the corresponding equations of motion remains unclear <cit.>. The assumption of the existence of deformations of the present quadrupole invariants could potentially serves as a guide for investigating test black holes motion at higher orders. * Hamilton and Hamilton-Jacobi formulations The main direction concerning the Hamiltonian sector of this thesis would be to prove (or invalidate) Conjecture <ref>, which would enable to understand the link between the existence of the quasi-conserved quantities and the swing-region separability of the associated Hamilton-Jacobi equation at second order in the spin magnitude for black hole coupling, thus pushing the analysis of Witzany <cit.> to the next order in the multipole expansion. This would also enable to compute the corresponding shifts in the fundamental frequencies of the action-angle variables description of the finite size particle, which are of direct relevance for modeling EMRIs involving spinning secondaries. The (marginal) appearance of chaos in the motion at linear order in 𝒮 was suggested in <cit.>, but ruled out by the conclusions of <cit.>. Notice that a direct comparison between analytical results and the outcomes of numerical investigations <cit.> still deserves caution due to the actual hypotheses under which the analysis is carried out. Moreover, Witzany's solution to Hamilton-Jacobi equation <cit.> did not allow for a separation of the orbital equations of motion at linear order in 𝒮. These results are consistent with our findings that Liouville integrability does not hold due to a single non-vanishing Poisson bracket between the two Rüdiger quasi-invariants, even though the explicit relationship between these statements remains to be deepened. A more careful investigation using a symplectic Hamiltonian formulation (in the spirit of the one of P. 
Ramond for Schwarzschild spacetime <cit.>) of the motion in Kerr at linear order in 𝒮 would enable to better understand the breaking of the integrability, as well as provide more definitive answers regarding the relationships between non-integrability, structure of the Hamilton-Jacobi equation solution, and the (non-)appearance of chaos. Here ends this thesis. We have been pleased to share a part of the enthusiasm we have experienced while digging into the entangled but fascinating world of Kerr black holes, still impressed by the possibility of being able to carry out so far exact analytic computations. We sincerely hope that the attentive reader may have found, well hidden behind all the formal statements, some insightful shards of beauty. * * plain tocpartAppendices PART: *Appendices CHAPTER: ELLIPTIC INTEGRALS AND JACOBI FUNCTIONS In this appendix, we set our conventions for the elliptic integrals and Jacobi functions used in the main text, following <cit.>. The incomplete elliptic integrals of the first, second, and third kind are defined as F(φ , m) ≜∫_0^φdθ/√(1- m sin^2 θ) = ∫_0^sinφd t/√((1-t^2)(1-m t^2)), E(φ , m) ≜∫_0^φ dθ√(1- m sin^2θ)= ∫_0^sinφd t √(1-m t^2/1-t^2), Π(n, φ ,m) ≜∫_0^φ1/1- n sin^2 θdθ/√(1- m sin^2 θ) = ∫_0^sinφ1/1- n t^2d t/√((1-m t^2)(1-t^2)), respectively. We also define E'(φ, m ) = ∂_m E(φ,m) = 1/2m [E(φ,m) - F(φ , m)]. The complete elliptic integrals of the first, second, and third kind are defined as K(m) ≜ F(π/2,m), E(m) ≜ E(π/2,m), Π(n,m) ≜Π(n,π/2,m), respectively, and E'(m)=∂_m E(m). Jacobi functions are defined as the inverse of the incomplete elliptic integrals of the first kind. More precisely, one can invert u = F(φ,m) into φ =am(u,m) in the interval -K(m) ≤ u ≤ K(m). The elliptic sinus, elliptic cosinus, and delta amplitude are defined as sn(u,m) ≜sinam(u,m), cn(u,m) ≜cosam(u,m), dn(u,m) ≜√(1-m (sinam(u,m))^2), respectively. They have the periodicity (k,l ∈ℤ) sn(u + 2 k K(m)+2ilK(1-m) ,m) = (-1)^k sn(u,m) , dn(u + 2 k K(m)+2ilK(1-m) ,m) = (-1)^l sn(u,m) and obey the properties cn^2(u,m)+sn^2(u,m) =1 , dn^2(u,m)+m sn^2(u,m) =1, ∂/∂ uam(u,m) =dn(u,m), ∂/∂ usn(u,m) =cn(u,m)dn(u,m), ∂/∂ ucn(u,m) = - sn(u,m)dn(u,m), ∂/∂ udn(u,m) = - m sn(u,m)cn(u,m). CHAPTER: EXPLICIT SOLUTIONS TO KERR, NHEK AND NEAR-NHEK GEODESIC EQUATIONS: TECHNICAL DETAILS Geodesics: technical details § ELEMENTARY POLAR INTEGRALS For the pendular and equator-attractive cases, one has to compute the following integrals: Î^(0)(x) ≜∫_x_0^x t/√(Θ(t^2)), Î^(1)(x)≜∫_x_0^xt^2 t/√(Θ(t^2)) Î^(2)(x) ≜∫_x_0^x t/√(Θ(t^2))1/1-t^2 where -1 ≤ x ≤ 1. In the main text, x will be substituted by cosθ, where θ is a polar angle. §.§ ϵ_0=0 In this case, Θ(t^2)=√(z_0/Q)(z_0-t^2). Choosing x_0=0, one finds directly Î^(0)(x) =√(z_0/Q)arcsinx/√(z_0), Î^(1)(x) =1/2√(z_0/Q)[z_0arcsinx/√(z_0)-x√(z_0-x^2)], Î^(2)(x) =√(z_0/Q(1-z_0))arcsin√(x/z_01-z_0/1-x) and the particular values Î^(0)(√(z_0)) =π/2√(z_0/Q), Î^(1)(√(z_0)) =π/4√(z_0/Q), Î^(2)(√(z_0)) =π/2√(z_0/Q(1-z_0)). One inverts (<ref>) as x=√(z_0)sin(Q/z_0Î^(0)). §.§ ϵ_0≠0, z_-≠0 In this case, Θ(t^2)=ϵ_0(t^2-z_-)(z_+-t^2), where t=cosθ. Instead of solving separately the pendular and vortical cases as in Ref. <cit.>, we will introduce a formal notation enabling us to treat both cases simultaneously. Let us define q≜ sign Q, z_±1≜ z_±, m≜{[ z_+/z_-, q=+1; 1-z_-/z_+, q=-1 ]. and y(t)≜{[ t/√(z_+), q=+1; sign t √(z_+-t^2/z_+-z_-), q=-1 ]., Ψ^q(x)≜arcsiny(x). 
Pendular motion corresponds to q = 1, which has 0 ≤ t^2 ≤ z_+ <1 and ϵ_0 z_- <0, while vortical motion corresponds to q = -1, which has 0< z_- ≤ t^2 ≤ z_+ <1 and ϵ_0 >0. One can then rewrite t/√(ϵ_0(t^2-z_-)(z_+-t^2))=q/√(-qϵ_0z_-q) y/√((1-y^2)(1-my^2)). Both factors of the right side of this equation are real, either for pendular or for vortical motions. The lower bound of the integral will be chosen as x_0=0 for Q≥0 and x_0= sign x√(z_+) for Q<0. Then y(x_0)=0 for both values of q. This allows us to solve directly the integrals in terms of elliptic integrals (see Appendix <ref> for definitions and conventions used): Î^(0)(x) =q/√(-qϵ_0z_-q)F(Ψ^q(x),m), Î^(1)(x) ={[ -2z_+/√(-ϵ_0 z_-)E'(Ψ^+(x),m), q=+1; - √(z_+/ϵ_0)E(Ψ^-(x),m), q=-1 ]., Î^(2)(x) ={[ 1/√(-ϵ_0 z_-)Π(z_+,Ψ^+(x),m), q=+1,; -1/(1-z_+)√(ϵ_0 z_+)Π(z_–z_+/1-z_+,Ψ^-(x),m), q=-1 ]. . For x=√(z_q), Ψ^q(√(z_q))=π/2 for both q= ± 1 and the incomplete elliptic integrals are replaced by complete ones: Î^(0)(√(z_q)) =q/√(-qϵ_0z_-q)K(m), Î^(1)(√(z_q)) ={[ -2z_+/√(-ϵ_0 z_-)E'(m), q=+1; - √(z_+/ϵ_0)E(m), q=-1 ]., Î^(2)(√(z_q)) ={[ 1/√(-ϵ_0 z_-)Π(z_+,m), q=+1,; -1/(1-z_+)√(ϵ_0 z_+)Π(z_–z_+/1-z_+,m), q=-1 ]. . For q=-1, all integrals (<ref>), (<ref>), and (<ref>) vanish when evaluated at x=±√(z_+) because Ψ^-(√(z_+))=0. Finally, (<ref>) can be inverted as x=y^-1( sn (√(-qϵ_0z_-q)Î^(0),m)) with y^-1 the inverse function of y: y^-1(t)={[ √(z_+)t, q=+1; sign t√(z_+(1-mt^2)), q=-1 ]. leading to the explicit formula x={[ √(z_+) sn (√(-ϵ_0z_-)Î^(0),m), q=+1; sign x √(z_+) dn (√(ϵ_0z_+)Î^(0),m), q=-1. ]. In the context of this paper, we will always have x=cosθ. §.§ ϵ_0>0, z_-=0 This case is relevant for equator-attractive orbits. In this case, the potential reduces to Θ(t^2)=ϵ_0 t^2(z_+-t^2). One needs the following integrals (see also Ref. <cit.>): ℐ̂^(0) ≜∫_x^√(z_+) t/√(Θ(t^2))=1/√(ϵ_0z_+) arctanh √(1-x^2/z_+), ℐ̂^(1) ≜∫_x^√(z_+)t^2 t/√(Θ(t^2))=√(z_+-x^2/ϵ_0), ℐ̂^(2) ≜∫_x^√(z_+) t/√(Θ(t^2))(1/1-t^2-1)=1/√(ϵ_0(1-z_+))arctan√(z_+-x^2/1-z_+) for any x≤√(z_+). All such integrals are obviously vanishing when evaluated at x=√(z_+). Note that ℐ̂^(2) contains a -1 integrand for simplicity of the final answer. § EXPLICIT FORM OF PROGRADE NHEK GEODESICS We provide here the explicit motion in Mino time and the parametrized form of all classes of future-oriented geodesics in NHEK in the region outside the horizon R > 0. Without loss of generality, we can use the existence of the ⇄ and [origin=c]-45⇄-flips to restrict ourselves to the subclasses of NHEK future-oriented geodesics, determined as * Prograde (ℓ≥ 0) geodesics; * Partially ingoing geodesics, i.e., trajectories whose radial coordinate is decreasing on at least part of the total motion. Because the near-horizon motion admits at most one turning point, we will only encounter spherical, plunging, marginal, bounded and deflecting motions. Generic integrals. One must provide explicit solutions to equations (<ref>) to (<ref>). The main point one has to deal with consists of solving the radial integrals (<ref>). Notice that the primitives 𝕋^(i)(R)=∫ R/R^i√(E^2+2Eℓ R-𝒞R^2), i=0,1,2 can be directly integrated as 𝕋^(0)(R) ={[ 1/√(-𝒞)log[Eℓ-𝒞 R+√(-𝒞)√(v_R(R))], 𝒞≠ 0; √(E^2+2Eℓ_*R)/Eℓ_*, 𝒞=0, E≠ 0, ]. 𝕋^(1)(R) =log R-log[E+ℓ R+√(v_R(R))]/E E≠ 0, 𝕋^(2)(R) =-1/E(√(v_R(R))/ER+ℓ 𝕋^(1)(R)), E≠ 0. We treat each type of motion cited above separately. We consider a geodesic path linking two events as described in the main text. We consider here a future-oriented path, Δ T>0 and Δλ>0. 
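The primitives 𝕋^(i) can be spot-checked by differentiation, since one should recover d𝕋^(i)/dR=1/(R^i√(v_R(R))) with v_R(R)=E^2+2Eℓ R-𝒞R^2 (in 𝕋^(1), the whole bracket log R-log[E+ℓ R+√(v_R(R))] is divided by E). The minimal symbolic sketch below is ours and checks this for the 𝒞≠0, E≠0 branch, writing c≜-𝒞>0; the positivity assumptions on the symbols are made only to keep all expressions real.

```python
import sympy as sp

# Spot-check d T^(i)/dR = 1 / (R**i * sqrt(v_R)) for the C != 0, E != 0 primitives,
# writing c = -C > 0. Positivity assumptions keep all logs and square roots real.
R, E, ell, c = sp.symbols('R E ell c', positive=True)
vR = E**2 + 2 * E * ell * R + c * R**2          # radial potential v_R(R) with C = -c

T0 = sp.log(E * ell + c * R + sp.sqrt(c) * sp.sqrt(vR)) / sp.sqrt(c)
T1 = (sp.log(R) - sp.log(E + ell * R + sp.sqrt(vR))) / E
T2 = -(sp.sqrt(vR) / (E * R) + ell * T1) / E

sample = {E: sp.Rational(9, 10), ell: 2, c: sp.Rational(1, 2), R: 3}   # arbitrary positive values
for i, T in enumerate((T0, T1, T2)):
    residual = sp.diff(T, R) - 1 / (R**i * sp.sqrt(vR))
    print(i, residual.subs(sample).evalf())     # ~0 up to numerical precision for each primitive
```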
Let us proceed systematically: * Spherical motion has R constant, and the radial integrals are ill defined and irrelevant. One can directly integrate the basic geodesic equations in this case. * For plunging motion, we will fix the final conditions at the final event (T_f,R_f, θ_f,Φ_f) such that R_f<0 (We will choose R_f to be a root of the radial potential in all cases where they are real. Otherwise, we will chose it for convenience.) We will drop the subscript i of the initial event. We denote here Δλ=λ_f-λ, Δ T=T_f-T(λ),…and T^(j)_R(R)=𝕋^(j)(R)-𝕋^(j)(R_f), j=0,1,2. * For bounded motion, we identify the physically relevant root of the radial potential R_turn>0 at T_turn (which is an integration constant) that represents the turning point of the motion. We consider the motion either before or after the turning point: * T(λ)>T_turn: We choose the initial event as the turning point and the final one as (T_>(λ),R_>(λ), θ_>(λ),Φ_>(λ)). This leads to T^(j)_R(R_>)=𝕋^(j)(R_turn)-𝕋^(j)(R_>), j=0,1,2. * T(λ)<T_turn: We choose the initial event as (T_<(λ),R_<(λ),θ_<(λ), Φ_<(λ)) and the final event as the turning point. This leads to T^(j)_R(R_<)=𝕋^(j)(R_turn)-𝕋^(j)(R_<), j=0,1,2. The motion being symmetric in R with respect to R_turn, one must only determine T_>(R) and Φ_>(R), because T_<(R)=-T_>(R), Φ_<(R)=-Φ_>(R). * For deflecting motion, there is also a turning point and (<ref>) remains true. This case is treated similarly to the bounded case but with a minus-sign change. We get here T_R^(j)(R_>) =𝕋^(j)(R_>)-𝕋^(j)(R_turn), T^(j)_R(R_<) =𝕋^(j)(R_<)-𝕋^(j)(R_turn). * Finally, for marginal motion, there is no turning point and the integral is immediate. In what follows, for the motions with one turning point, we will only make explicit the motion after the turning point, T>T_turn, and we will denote T=T_> and Φ=Φ_> in order to simplify the notations. Spherical_* (ISSO). The explicit form is T(λ) = T_0 + ℓ_*/R_0(λ-λ_0) , R(λ) = R_0 , Φ(λ) =Φ_0 - 3/4ℓ_* (λ-λ_0) + ℓ_* Φ_θ(λ-λ_0). The parametrized form is R = R_0 , Φ = Φ_0 -3/4R_0 (T-T_0) + ℓ_* Φ_θ( R_0/ℓ_* (T-T_0)). Marginal(ℓ). The Casimir obeys here 𝒞<0; we denote q≜√(-𝒞) and consider the initial condition R(λ_m)=R_m. The explicit form of the solution is given (as a function of the Mino time) by T(λ) = T_0+ℓ/R_m qexp(q(λ-λ_m)), R(λ) = R_m exp(-q(λ-λ_m)) , Φ(λ) = Φ_0 -3/4ℓ(λ-λ_m)+ℓΦ_θ(λ-λ_m). The parametrized form is T(R) = T_0+ℓ/q R, Φ(R) = Φ_0+3/4ℓ/qlogR/R_m+ℓΦ_θ(λ(R/R_m)). Here, the constants T_0 and Φ_0 remain arbitrary. Plunging_*(E). The energy satisfies E>0 and the initial condition is R(λ_0)=R_0=-E/2ℓ_*, leading to λ(R) -λ_0= - 1/ℓ_*√(1+2 ℓ_* R/E). The explicit form is T(λ) = 2 ℓ_*^2 (λ - λ_0)/E(1-ℓ_*^2 (λ-λ_0)^2), R(λ) = E/2ℓ_*( ℓ_*^2 (λ-λ_0)^2 - 1 ) , Φ(λ) = -3/4ℓ_* (λ-λ_0) +2 arctanh (ℓ_* (λ - λ_0)) + ℓ_* Φ_θ(λ-λ_0) . The parametrized form is T(R) = 1/R√(1+2 ℓ_* R/E), Φ(R) = 3/4√(1+2 ℓ_* R/E)-2 arctanh √(1+2 ℓ_* R/E)+ℓ_* Φ_θ (λ(R)-λ_0). Plunging(E,ℓ). The parameters obey 𝒞 < 0 and E>0. We denote q = √(-𝒞) and λ_- the value of the Mino time such that R(λ_-)= R_-. The orbit is given by λ(R) - λ_- = 1/qlog( E √(𝒞+ℓ^2)/E ℓ - 𝒞 R +q √(v_R))=i/qarccos(Eℓ-𝒞R/E√(𝒞+ℓ^2)). The last form is not explicitly real, but it allows us to find easily T(λ) = q/Esinh (q(λ-λ_-))/ℓ/√(𝒞 + ℓ^2)-cosh (q(λ-λ_-)), R(λ) = E/𝒞( ℓ - √(𝒞 + ℓ^2)cosh (q (λ - λ_-))), Φ(λ) = - 3 ℓ/4(λ-λ_-) +2 arctanh ( ℓ + √(𝒞 + ℓ^2)/qtanh (q/2(λ-λ_-))) +ℓ Φ_θ(λ-λ_-) . 
The parametrized form can be simplified as T(R) = √(v_R)/E R, Φ(R) = Φ_0-logE+ℓ R+√(v_R(R))/R  +3ℓ/4qlog(Eℓ-𝒞R+q√(v_R(R)))+ℓΦ_θ(λ(R)-λ_-) where Φ_0≜logE+ℓ R_-/R_--3ℓ/4qlog(Eℓ-𝒞 R_-). The orbit start at R=+∞ at λ=-∞ and reaches the black hole horizon R=0 at λ = λ_- - λ_H, where λ_H ≡1/qarccosh(ℓ/√(𝒞 + ℓ^2)). It never reaches R_- < 0. Bounded_<(E,ℓ). We have 𝒞 > 0 and E>0, leading to λ(R) -λ_+ = 1/qarccos( 𝒞 R - E ℓ/E √(𝒞+ℓ^2)). We normalized Mino time such that R(λ_+) = R_+ and denoted q ≜√(𝒞). The orbit starts at λ = λ_+ at the turning point R_+ and plunges inside the black hole R=0 at λ = λ_++1/qarccos(-ℓ/√(𝒞+ℓ^2)). We have T (λ) = q/Esin (q(λ-λ_+))/ℓ/√(𝒞 + ℓ^2)+cos (q(λ-λ_+)) , R(λ) = E/𝒞( ℓ + √(𝒞 + ℓ^2)cos (q (λ-λ_+) )) , Φ(λ) = - 3 ℓ/4(λ-λ_+) +2 arctanh ( ℓ - √(𝒞 + ℓ^2)/qtan (q/2(λ-λ_+))) +ℓΦ_θ(λ-λ_+). The parametrized form is given by (<ref>) and (<ref>), but with q→ iq. One can write a manifestly real form of the azimutal coordinate by shifting the initial value Φ_0, leading to Φ(R)=Φ_0'-logE+ℓ R+√(v_R(R))/R+3ℓ/4qarctanq√(v_R)/Eℓ-𝒞 R+ℓΦ_θ(λ(R)-λ_-) with Φ'_0≜logE+ℓ R_-/R_-. Deflecting(E,ℓ). We have 𝒞 < 0 and E<0, leading to (q≜√(-𝒞)) λ(R)-λ_+=-i/qarccosEℓ-𝒞R/E√(𝒞+ℓ^2). The initial condition R(λ_+)=R_+ corresponds to the minimal radius reached by the trajectory. The orbit starts and ends at R=+∞ at Mino time λ=±∞. We have T(λ) =q/Esinhq(λ-λ_+)/ℓ/√(𝒞+ℓ^2)-coshq(λ-λ_+) R(λ) =E/𝒞(ℓ-√(𝒞+ℓ^2)coshq(λ-λ_+)) Φ(λ) =-3ℓ/4(λ-λ_+)+2 arctanh ( ℓ - √(𝒞 + ℓ^2)/qtanh (q/2(λ-λ_+))) +ℓΦ_θ(λ-λ_+). We finally have the parametrized form T(R) = - √(v_R(R))/E R, Φ(R) = Φ_0+logE+ℓ R+√(v_R(R))/R  -3ℓ/4qlog(Eℓ-𝒞R+q√(v_R(R)))+ℓΦ_θ(λ(R)-λ_+) with Φ_0≜-logE+ℓ R_+/R_++3ℓ/4qlog(Eℓ-𝒞R_+). § EXPLICIT FORM OF PROGRADE NEAR-NHEK GEODESICS We follow the same procedure as the one in Appendix <ref>. As before, we only focus, without loss of generality, on future-oriented partially ingoing prograde orbits. For convenience, we also include one class of retrograde bounded geodesics. Spherical(ℓ). The explicit form reads as R =R_0=κℓ/√(-𝒞), t(λ) =t_0+ℓ/R_0(λ-λ_0), ϕ(λ) =ϕ_0-3/4ℓ(λ-λ_0)+ℓΦ_θ(λ-λ_0). The parametrized form is R =R_0=κℓ/√(-𝒞), ϕ(t) =ϕ_0-3/4R_0(t-t_0)+ℓΦ_θ(λ-λ_0). Note that R_0≥(2/√(3)-1)κ. Plunging_*. One has λ-λ_i=-R-R_i/κℓ_*. The explicit form is R(λ) =R_i-κℓ_*(λ-λ_i), t(λ) =-1/2κlog[1+κℓ_*(λ-λ_i)κℓ_*(λ-λ_i)-2R_i/R_i^2-κ^2], ϕ(λ) =-3/4ℓ_*(λ-λ_i)+1/2log1-κℓ_*(λ-λ_i)/R_i-κ/1-κℓ_*(λ-λ_i)/R_i+κ+ℓ_*Φ_θ(λ-λ_i). The parametrized form is t(R) =-1/2κlogR^2-κ^2/R_i^2-κ^2, ϕ(R) = ϕ_i+3/4κR+1/2logR-κ/R+κ+ℓ_*Φ_θ(λ(R)-λ_i) with ϕ_i≜ -3R_i/4κ-1/2logR_i-κ/R_i+κ. The geodesic starts from R=+∞ at λ=-∞ and reaches the horizon at Mino time λ_H=λ_i+1/ℓ_*(R_i/κ-1). Bounded_*(e) and Plunging_*(e). The orbital parameters satisfy 𝒞=0 and e<0 (bounded) or e>0 (plunging). The potential is simply v_R;κ(R)=e^2+κ^2ℓ_*^2+2eℓ_* R and the initial conditions are imposed at R(λ_0)=R_0, where R_0 is the (unique) root of the radial potential. One directly obtains λ(R)-λ_0=-√(v_R;κ(R))/eℓ_*, leading to R(λ) =R_0+eℓ_*/2(λ-λ_0)^2, t(λ) = -1/κ arctanh 2κ e ℓ_*^2(λ-λ_0)/κ^2ℓ_*^2+e^2[ℓ^2_*(λ-λ_0)^2-1], ϕ(λ) =-3/4ℓ_*(λ-λ_0)+ sign e arctanh 2e^2ℓ_*(λ-λ_0)/-κ^2ℓ_*^2+e^2[ℓ_*^2(λ-λ_0)^2+1]  +ℓ_*Φ_θ(λ-λ_0). The parametric form is t(R) =1/κ arccoshR+κ^2ℓ_*/e/√(R^2-κ^2), ϕ(R) =-3/4e√(v_R;κ(R))+ arctanh √(v_R;κ(R))/e+ℓ_* R+ℓ_*Φ_θ(λ(R)-λ_0). Notice that the requirements v_R;κ≥0 and R≥κ are sufficient to guarantee the reality of the inverse hyperbolic functions involved. 
The trajectory reaches the horizon at Mino time λ_H=λ_0- sign e √(2(κ-R_0)/eℓ_*), which is smaller than λ_0 for plunging motion and greater for bounded motion, as expected. (Retrograde) Bounded_>(e,ℓ). The geodesic parameters satisfy 𝒞<0, ℓ < 0, and e>0. Therefore, R_- is positive, and we choose the initial condition as R(λ_-)=R_-. Defining q≜√(-𝒞), the explicit form reads as R(λ) = 1/𝒞[eℓ-√((𝒞+ℓ^2)(e^2+κ^2𝒞))coshq(λ-λ_-)]. The parametrized form is t(R) =1/4κlogF_+(R)/F_-(R), ϕ(R) = 3ℓ/4qlog[eℓ-𝒞R+q√(v_R;κ(R))]-1/4logG_+(R)/G_-(R)+ℓΦ_θ(λ(R)-λ_-) where we define F_±(R) ≜[eR+κ(κℓ±√(v_R;κ(R)))]^2, G_±(R) ≜(e+ℓ R±√(v_R;κ(R)))^2. Note that using the identities F_+(R)F_-(R) =(e^2+κ^2𝒞)^2(R^2-κ^2)^2, G_+(R)G_-(R) =(𝒞+ℓ^2)^2(R^2-κ^2)^2, one can rewrite (<ref>) and (<ref>) as t(R) =-1/2κlogF_+(R)/(e^2+κ^2𝒞)(R^2-κ^2), ϕ(R) = 3ℓ/4qlog[eℓ-𝒞R+q√(v_R;κ(R))] -1/2logG_+(R)/(𝒞+ℓ^2)(R^2-κ^2)+ℓΦ_θ(λ(R)-λ_-). The geodesic motion starts from the past horizon, reaches R_- at Mino time λ_- and crosses the future horizon at λ_H-λ_-=1/qarccosheℓ-κ𝒞/√((𝒞+ℓ^2)(e^2+κ^2𝒞)). Bounded_<(e,ℓ). The parameters obey 𝒞>0 and e≠0; therefore, R_+ is positive and the initial condition is chosen as R(λ_+)=R_+. One defines q≜√(𝒞) and gets the explicit form: R(λ) = 1/𝒞[eℓ+√((𝒞+ℓ^2)(e^2+κ^2𝒞))cos(q(λ-λ_+))]. The parametrized form is given by (<ref>) and (<ref>), with the replacement rule q→ iq. A manifestly real form of ϕ is ϕ(R) =3ℓ/4qarctanq√(v_R;κ)/eℓ-𝒞 R-1/2logG_+(R)/(𝒞+ℓ^2)(R^2-κ^2)+ℓΦ_θ(λ(R)-λ_-). The geodesic motion starts from the white hole past horizon, reaches R_+ at Mino time λ_+ and crosses the future horizon at λ_H-λ_+=1/q arccosκ𝒞-eℓ/√((𝒞+ℓ^2)(e^2+κ^2𝒞)). Plunging(e,ℓ). The parameters satisfy 𝒞<0 and e>-κ q, where q≜√(-𝒞). The roots of the radial potential are consequently either complex or negative. In the complex case (e^2+κ^2𝒞 < 0), we define the real quantity R_f ≜1/𝒞[eℓ-√(-(𝒞+ℓ^2)(e^2+κ^2𝒞))] and impose the final condition R(λ_f)=R_f, leading to R(λ) = 1/𝒞[eℓ-√(-( 𝒞+ℓ^2)(e^2+κ^2𝒞))×(coshq(λ-λ_f)-√(2)sinhq(λ-λ_f))]. Note that lim_x→±∞cosh x -√(2)sinh x=∓∞ leads to the expected behavior. For negative roots, R=R_- can be used as the final condition. In both cases, the parametrized form is again as given in (<ref>) and (<ref>). The orbit starts from R=+∞ at λ=-∞ and reaches the horizon at Mino time λ_H-λ_f=1/q arcsinh 1+logeℓ-κ𝒞+q((e+κℓ)/√(-(𝒞+ℓ^2)(e^2+κ^2𝒞)). Deflecting(e,ℓ). One has 𝒞<0 and e<0. Choosing the initial condition as R(λ_-)=R_- and defining again q≜√(-𝒞), we get R(λ) =1/𝒞[eℓ-√((𝒞+ℓ^2)(e^2+κ^2𝒞))cosh(q(λ-λ_-))]. The parametrized form is t(R) =-1/4κlogF_+(R)/F_-(R), ϕ(R) =- 3ℓ/4qlog[eℓ-𝒞R+q√(v_R;κ(R))] +1/4logG_+(R)/G_-(R)+ℓΦ_θ(λ(R)-λ_-). The orbit starts from R=+∞ at λ=-∞, reaches its minimal radial value R_- at Mino time λ_-, and goes back to the asymptotic region at λ→+∞. CHAPTER: RESULTS FOR THE SECOND-ORDER QUADRATIC CONSERVED QUANTITIES Second-order conserved quantities § DERIVATION OF THE QUADRATIC QUANTITY CONSTRAINT The aim of this appendix is to provide a derivation of the reduced expression (<ref>) for the constraint of grading [2,3] for the quadratic conserved quantity (<ref>). Let us denote by f_(n) the terms contained in f that are homogeneous of order 𝒪(𝒮^n). We now take for granted the validity and uniqueness of the solution (<ref>). The 𝒪(𝒮^2) constraint equation takes the form 𝒬̇_(2)=𝒬̇_R_(2)+𝒬̇^quad_(2)!=0. We will compute these two contributions separately. §.§ Terms coming from 𝒬^quad Using Eqs. 
(<ref>), (<ref>) and (<ref>), the variation of 𝒬^quad along the trajectory is given by 𝒬̇^quad =D/τM_αβγδS^αβS^γδ+2DS^αβ/τM_αβγδS^γδ =v^λ(∇_λ M_αβγδ S^αβS^γδ+4M_αβγλS^αβp^γ)+𝒪(𝒮^3) =p̂^λ(∇_λ M_αβγδ S^αβS^γδ+4M_αβγλS^αβp^γ)+𝒪(𝒮^3) = ∇_λ M_αβγδp̂^λ S^αβS^γδ+𝒪(𝒮^3). Recalling that S^αβ=2s^[αp̂^β]*, we obtain the following equation in terms of the independent variables s_α and p^μ: 𝒬̇^quad_(2) =4 ∇_μανβρs_α s_βp̂^μp̂^νp̂^ρ =4 ∇_μ N_ανβρs^α s^βp̂^μp̂^νp̂^ρ. §.§ Terms coming from 𝒬_R One can perform the splitting 𝒬̇_R_(2) =𝒬̇^MD_R_(2)+𝒬̇^Q_R_(2). Here, the “monopole-dipole” terms 𝒬̇_R^MD are those that where already present in <cit.>: 𝒬̇_R^MD ≜μ^2 Dλρ∇_λ K_μνp̂^μp̂^νp̂^ρ-μ/2L_μνρS^μνRραβγp̂^α S^βγ +2μ^2 L_μλρDλνp̂^μp̂^ν p^ρ =4 μαμνβρ s_α s_βp̂^μp̂^νp̂^ρ +O(𝒮^3). where (see Eq. (73) of <cit.>) _αβγδϵ=-1/2 ^*L_αβλR*λγδϵ. The relaxed spin vector s^α is defined from S^α≜Π^α_β s^β where the part of 𝐬 aligned with 𝐩 is left arbitrary, but is assumed (without loss of generality) to be of the same order of magnitude. The tensor L_αβγ is defined in Eq. (<ref>). In order to simplify Eq. (<ref>), we have on the one hand ^*ϵ_αβλρ∇^ρ𝒵=-2g_λ[α∇_β]𝒵. On the other hand, ∇_[αK_β]*λ =2Y_λ[αξ_β]+3g_λ[αYρβξ_ρ] =3Y_λ[αξ_β]+g_λ[α∇_β]𝒵, where ξ^α=-1/3∇_λ Y^*λα is the timelike Killing vector associated with the Killing-Yano tensor. Gathering these pieces together, we obtain the reduced expression _αβγδϵs^α s^βp̂^μp̂^νp̂^ρ=-[R*λγδϵY_λ[αξ_β]+∇_[α𝒵 R^*_β]γδϵ]s^α s^βp̂^μp̂^νp̂^ρ. The monopole-dipole piece is consequently given by 𝒬̇^MD_R_(2) =( 2(Y_λμξ_α-Y_λαξ_μ)R*λνβρ-2∇_μ𝒵 R^*_ναβρ) s^α s^βp̂^μp̂^νp̂^ρ. The “quadrupolar” terms 𝒬̇_R^Q are the ones induced by the presence of the quadru-pole, namely 𝒬̇_R^Q ≜ 2μ K_μνp̂^μℱ^ν-μℒλρ∇_λ K_μνp̂^μp̂^νp̂^ρ -2μ L_μλρℒλνp̂^μp̂^νp̂^ρ+μ L_μνρℒ^μνp̂^ρ. Considering only the spin-induced quadrupole (<ref>) and making use of the identities (<ref>) to (<ref>) as well as the explicit form of the tensor L_μνρ (<ref>), we obtain the reduced expression 𝒬̇^Q_R_(2) =κ[K_μλ∇^λ R_ναβρ-∇_λ K_μν Rλαβρ -4/3∇_[αK_λ]ν Rλμβρ -8/3ϵ_αγρλ∇^λ𝒵 Rγμβν]Θ^αβp̂^μp̂^νp̂^ρ. Thanks to the orthogonality condition p_α S^α=0, we can substitute Θ^αβ in this expression with θ^αβ defined as θ^αβ = Π^αβ𝒮^2 -s^α s^β . After a few algebraic manipulations and making use of Bianchi identities, we obtain 𝒬̇^Q_R_(2)=κ[ ( ∇_ν(K_μλRλαβρ)+∇_ν K_μλRλαβρ-(ν↔α)) +4/3∇_[α K_μ ] λRλνβρ-16/3∇_λ𝒵 g_μ[ν ^*Rλα]βρ+8/3∇_μ𝒵 ^*R_ναβρ]θ^αβp̂^μp̂^νp̂^ρ. It is possible to further reduce this expression. It is useful to first derive some properties of the dualizations of Riemann tensor in Ricci-flat spacetimes. For any tensor M_abcd with the symmetries of the Riemann tensor, one has αβμν =-6δ^αβ ab_[μν cd]Mabcd. In Ricci-flat spacetimes, this yields for the Riemann tensor, _αβγδ=-R_αβγδ. Moreover, dualizing this equation one more time gives rise to the identity R^*_αβγδ=^* R_αβγδ. Using R_αβγδ=R_γδαβ, we also deduce R^*_αβγδ=R^*_γδαβ. Notice that we also have the identity ∇^λ R_λαβγ=0. Making use of the properties of the Riemann tensor and of the identity ^*R_λαβρg^αβ=0, we get the additional relations R_λαβρθ^αβp̂^ρ = -R_λαβρs^α s^βp̂^ρ, ^*R_λαβρθ^αβp̂^ρ = -^*R_λαβρs^α s^βp̂^ρ, R_λνβρθ^αβp̂^νp̂^ρ = R_λνβρ(g^αβ𝒮^2-s^α s^β) p̂^νp̂^ρ. This allows to express 𝒬̇^Q_R_(2) in terms of the independent variables as 𝒬̇^Q_R_(2) =-κ[( ∇_ν(K_μλRλαβρ)+∇_ν K_μλRλαβρ-(ν↔α)) +4/3∇_[α K_μ ] λRλνβρ+(4/3∇_σ K_μλRλνσρ+2/3∇_μ K_σλRλνσρ)g_αβ +16/3∇_λ𝒵 g_μ[ν ^*Rλα]βρ-8/3∇_μ𝒵 ^*R_ναβρ]s^α s^βp̂^μp̂^νp̂^ρ. 
The quantity into brackets appearing in the second line of this expression is actually vanishing: (4/3∇_σ K_μλRλνσρ+2/3∇_μ K_σλRλνσρ)p̂^νp̂^ρ =2/3(2∇_σ K_μλ+∇_μ K_σλ)Rλνσρp̂^νp̂^ρ =2/3(∇_σ K_μλ+∇_λ K_σμ+∇_μ K_λσ)Rλνσρp̂^νp̂^ρ=0, which follows from the definition of Killing tensors. §.§ Reduced expression We can now write down the complete expression 𝒬̇^(2)=𝒬̇_R^MD+𝒬̇_R^Q+𝒬̇^quad. Gathering all the pieces (<ref>), (<ref>), (<ref>), we get 𝒬̇^(2) =[κ∇_α(K_μλRλνβρ)+∇_ν(4 N_αμβρ -κ K_μλRλαβρ)-κ∇_μ K_νλRλαβρ +2κ/3∇_[μK_λ]αRλνβρ+2(Y_λμξ_α-Y_λαξ_μ)R*λνβρ-16κ/3∇_λ𝒵 g_μ[ν R*λα]βρ +2(4κ/3-1)∇_μ𝒵 R^*_ναβρ]s^α s^βp̂^μp̂^νp̂^ρ+𝒪( 𝒮^3). In order to further simplify this expression we first derive some additional useful identities. One has ∇_μ K_νλRλαβρp̂^μp̂^ν=Yσμ∇_λ Y_σν Rλαβρp̂^μp̂^ν. We obtain the relation ∇_μ K_νλRλαβρp̂^μp̂^ν=[(Y_λμξ_α-Y_αμξ_λ) ^*Rλνβρ-g_ανY_λμξ_κ ^*Rλκβρ]p̂^μp̂^ν. In a similar fashion, one can prove that ∇_[μK_λ]αRλνβρp̂^μp̂^ν=[3/2(Y_αμξ_λ+Y_λαξ_μ) ^*Rλνβρ-3/2g_μν Y_λαξ_κ ^*Rλκβρ +1/2g_αμ∇_λ𝒵 ^*Rλνβρ-1/2g_μν∇_λ𝒵 ^*Rλαβρ-1/2∇_μ𝒵 ^*R_ανβρ]p̂^μp̂^ν. When the dust settles, we are left with 𝒬̇^(2) =[4∇_μ N_ανβρ+2κ∇_[αℳ^(1)_|μ|ν]βρ +κ(g_αμY_λν-g_μνY_λα)ξ_κ ^*Rλκβρ +(2κ Y_αμξ_λ+(2-κ)(Y_λμξ_α+Y_αλξ_μ)+3κg_αμ∇_λ𝒵) ^*Rλνβρ -3κ g_μν∇_λ𝒵 ^*Rλαβρ+(3κ-2)∇_μ𝒵 R^*_ναβρ]s^α s^βp̂^μp̂^νp̂^ρ +𝒪( 𝒮^3), where ℳ^(1)_αβγδ≜ K_αλRλβγδ. This is precisely the constraint equation (<ref>). § REDUCING THE CONSTRAINTS WITH THE COVARIANT BUILDING BLOCKS: INTERMEDIATE ALGEBRA A bit of cumbersome (but straightforward) algebra leads to the following identities written most shortly in the α-ω formulation, Y_αμs^αp̂^μ =-α^(0,-1)_B, ∇_μ𝒵p̂^μ =-α_A^(0,-1), R^*_ναβρs^α s^βp̂^νp̂^ρ =3Mω^(0,3)_B^2+M(𝒜^2+𝒫^2𝒮^2)ω_1^(0,3), ∇_λ𝒵 R*λαβρ s^α s^βp̂^ρ = 3M/2 (E_sω^(0,2)_B-Dω^(1,3)_B) +M(𝒮^2α^(0,-1)_A-𝒜α^(0,-1)_C)ω_1^(0,3), ∇_λ𝒵 R*λνβρ s^βp̂^νp̂^ρ = 3M/2(Eω^(0,2)_B-Fω^(1,3)_B) +M(𝒜α^(0,-1)_A+𝒫^2α^(0,-1)_C)α^(0,3)_1, Y_αλR*λνβρs^α s^βp̂^νp̂^ρ =-M𝒜ω^(0,2)_B+M/2(𝒜ω^(1,3)_B̅-3Gω^(1,3)_B), Y_μλR*λνβρ s^βp̂^μp̂^νp̂^ρ =M𝒫^2ω^(0,2)_B-M/2(𝒫^2ω^(1,3)_B̅+3Hω^(1,3)_B), Y_μλR*λαβρ s^α s^βp̂^μp̂^ρ =-M𝒜ω^(0,2)_B+M/2(𝒜ω^(1,3)_B̅-3Gω^(1,3)_B), ξ_λ R*λνβρ s^βp̂^νp̂^ρ = -3Mω^(0,3)_AB+M(E_s𝒫^2+E𝒜)ω^(0,3)_1, Y_λκR*λκβρ s^βp̂^ρ = 4Mω^(0,2)_B, Y_αλξ_κ R*λκβρ s^α s^βp̂^ρ = M/2(E_Sω^(0,2)_B-3Dω^(1,3)_B) +M(𝒜ω^(0,-1)_C-𝒮^2ω^(0,-1)_A) α^(0,3)_1, Y_νλξ_κ R*λκβρ s^βp̂^νp̂^ρ =M/2(Eω^(0,2)_B-3Fω^(1,3)_B) -M(𝒜ω^(0,-1)_A+𝒫^2ω^(0,-1)_C)α_1^(0,3), ξ_λ Y_ακR*λρβκs^α s^βp̂^ρ =M(𝒮^2ω^(0,2)_A+E_sω^(1,3)_B̅-𝒜ω^(1,3)_C̅) +M/2(3 Iω^(1,3)_A+𝒮^2ω^(1,3)_A̅), ξ_λ Y_βκR*λνρκ s^βp̂^νp̂^ρ =3M/2(𝒜ω^(0,2)_A+Gω^(1,3)_A) +M(Eα^(0,-1)_B+𝒫^2α^(0,-1)_C)ω_1^(0,3). Let us turn to identities involving covariant derivatives of the Riemann tensor. Making use of the identities (<ref>) enforces the fundamental relation: R_ανβρ;μ =M∇_μ(3(ℛ N_αν)(ℛ) N_βρ-ℛ^2 G_ανβρ/ℛ^5) =-M{ℛ^-4[5N_μλξ^λ(3 N_ανN_βρ-G_ανβρ) -3(G_ανμλξ^λ N_βρ+N_ανG_βρμλξ^λ)+2 N_μλξ^λ G_ανβρ]}. It leads to the following `differential-to-algebraic' identities: ∇_μ R_ανβρs^α s^βp̂^μp̂^νp̂^ρ =3Mℛ^-4[5AB^2-2B(𝒜E+𝒫^2 E_S)+A(𝒮^2𝒫^2+𝒜^2)] , K_αλ∇_μ Rλνβρ s^α s^βp̂^μp̂^νp̂^ρ =-3M/2(ℛ^2) ℛ^-4[5AB^2-2B(𝒜E+𝒫^2 E_S)+A(𝒮^2𝒫^2+𝒜^2)] -3M/2ℛ^2{ℛ^-4[5AB^2+A(𝒫^2 I+𝒜 G)-B(GE-D𝒫^2) -B̅(𝒜 E+𝒫^2 E_s)]} , K_νλ∇_μ Rλαβρ s^α s^βp̂^μp̂^νp̂^ρ =3M/2(ℛ^2) ℛ^-4[5AB^2-2B(𝒜E+𝒫^2 E_S)+A(𝒮^2𝒫^2+𝒜^2)] +3M/2ℛ^2{ℛ^-4[5AB^2+A(𝒜 G-𝒮^2 H) +B(E_sH+𝒜 F+A̅B-AB̅)-B̅(𝒜 E+𝒫^2E_s)]} , K_μλ∇_α Rλνβρ s^α s^βp̂^μp̂^νp̂^ρ =3M/2ℛ^2{ℛ^-4[C(𝒫^2 G+𝒜 H)-B(EG+𝒜F)+B(A̅B-AB̅)]}. 
§.§.§ Ansatz terms for the quadratic invariant We provide here the explicit form of the terms constituting the Ansatz for the black hole quadratic invariant in terms of the covariant building blocks. We use the notation N^(A)≜ N_μανβ^(A)s^α s^βp̂^μp̂^ν (A=1,…,4). N^(1) =-MB^2α^(1,2)_1+M/4[3(α_B^2^(0,1)+α_B^2^(2,3)) +(𝒜^2+𝒫^2𝒮^2)(α^(0,1)_1+α^(2,3)_1)], N^(2) =-M/4[(𝒜^2+𝒫^2𝒮^2)α_1^(0,1)+α_B^2^(0,1)], N^(3) =1/4[-(𝒜^2+𝒫^2𝒮^2)+E (E 𝒮^2-E_S𝒜)-E_S(E_S𝒫^2+𝒜 E)] +M/2(𝒜^2+𝒫^2𝒮^2)α^(0,1)_1, N^(4) =1/2(𝒜^2+𝒫^2𝒮^2)(2Mα^(0,1)_1-1). §.§.§ Derivatives of the scalar basis A direct computation gives the following identities ∇̂𝒮^2 =∇̂𝒫^2=∇̂𝒜=0, ∇̂A =-i(E^2+𝒫^2ξ^2)ℛ^-1+iM/2(𝒫^2/ℛ^2+H/ℛ̅^2)-i A^2ℛ^-1, ∇̂B =i(𝒜E+𝒫^2E_s)ℛ^-1-iABℛ^-1, ∇̂C =i(𝒜ξ^2-EE_s)ℛ^-1+iM/2(G/ℛ̅^2-𝒜/ℛ^2)-iACℛ^-1, ∇̂D =2Eω^(0,1)_C̅-2ξ^2ω^(0,1)_B̅+Mω^(0,2)_B̅+2Dω_A^(0,1), ∇̂E =0, ∇̂E_s =-Mω_B^(0,2), ∇̂F =2Eω_A̅^(0,1)+2Fω_A^(0,1), ∇̂G =2Gω_A^(0,1)+2Eω_B̅^(0,1)+2𝒫^2ω_C̅^(0,1), ∇̂H =2ω_A^(0,1)H+2𝒫^2ω_A̅^(0,1) , ∇̂I =2𝒮^2ω^(0,1)_A̅+4E_sω^(0,1)_B̅-4𝒜ω^(0,1)_C̅+2Iω_A^(0,1). §.§.§ Directional derivatives of the Ansatz terms We aim to compute the contributions ∇̂N^(A). Let us proceed step by step. First, we compute N_αβγδ^(A) for each A=1,… 4. In Ricci-flat spacetimes, using the identity (<ref>), we obtain N^(1)_αβγδ=-1/2K R_αβγδ+ℳ^(1)_[αβ]γδ where K≜ Kαα. Moreover, noticing that ^*ℳ^(2)_αβγδ =Y_λ[α ^*Rλβ]σδYσγ=-R*σδ[αλY_β]λYσγ and using the symmetries of the Riemann tensor, we can write N^(2)_ανβρ =-Y_λ[αλν][βσY_ρ]σ. In Ricci-flat spacetimes, this boils down to N^(2)_ανβρ =Y_λ[α Rλν][βσY_ρ]σ. The two last computations are more straightforward and give N^(3)_αβγδ =1/2N^(4)_αβγδ-ξ_[αg_β][γξ_δ], N^(4)_αβγδ=-g_α[γg_δ]βξ^2. We shall evaluate the following covariant derivatives: ∇_μ(K R_ανβρ) s^α s^βp̂^μp̂^νp̂^ρ =(∇_μ K R_ανβρ+ K∇_μ R_ανβρ)s^α s^βp̂^μp̂^νp̂^ρ. Using (<ref>) the first term of the right-hand side of (<ref>) can be written ∇_μ K R_ανβρs^α s^βp̂^μp̂^νp̂^ρ ={ 4(ξ_λ Y_αμ+ξ_μ Y_λα-ξ_α Y_λμ)R*λνβρ +2[g_αμ(2Y_λνξ_κ-Y_λκξ_ν)+g_μν(2ξ_λ Y_κα+ξ_α Y_λκ)]R*λκβρ} s^α s^βp̂^μp̂^νp̂^ρ =4[ (ξ_λ Y_αμ+ξ_μ Y_λα-ξ_α Y_λμ)R*λνβρ +(Y_λκξ_[α+2ξ_λ Y_κ[α)g_μ]ν R*λκβρ] s^α s^βp̂^μp̂^νp̂^ρ. Gathering the two pieces above yields ∇_μ N^(1)_ανβρ s^α s^βp̂^μp̂^νp̂^ρ =[K_λ[αRλν]βρ;μ-1/2K∇_μ R_ανβρ-1/2∇_μ𝒵 R^*_ανβρ -1/2(∇_λ𝒵 g_μν+Y_λμξ_ν)R*λαβρ +1/2(∇_λ𝒵 g_μα+ 3Y_λμξ_α+Y_μαξ_λ +2Y_αλξ_μ)R*λνβρ-2(Y_λκξ_[α+ξ_λ Y_κ[α)g_μ]νR*λκβρ]s^α s^βp̂^μp̂^νp̂^ρ. On the other hand, ∇_μ N^(2)_ανβρ s^α s^βp̂^μp̂^νp̂^ρ =∇_μ(Y_λ[α Rλν][βσY_ρ]σ) s^α s^βp̂^μp̂^νp̂^ρ =∇_μ(Y_λα RλνβσY_ρσ) p̂^μ s^[αp̂^ν] s^[βp̂^ρ] =(2∇_μ Y_λαY_ρσRλνβσ+Y_λαY_ρσ∇_μ Rλνβσ)p̂^μ s^[αp̂^ν] s^[βp̂^ρ] =(2ϵ_μλακξ^κ Y_ρσRλνβσ+Y_λαY_ρσ∇_μ Rλνβσ)p̂^μ s^[αp̂^ν] s^[βp̂^ρ] =1/22ϵ_μλακξ^κ Y_ρσRλνβσ s^αp̂^μp̂^ν s^[βp̂^ρ] + Y_λ[α|∇_μ Rλ|ν][βσY_ρ]σs^α s^βp̂^μp̂^νp̂^ρ =[1/2Y_αλξ_μ R*λνβρ-1/2Y_μλξ_ν R*λαβρ-1/2(g_μνY_ακ+g_αμY_νκ)ξ_λ R*λρβκ +1/2g_μνξ_λ Y_ρκR*λαβκ+1/2g_αμξ_λ Y_βκR*λνρκ + Y_λ[α|∇_μ Rλ|ν][βσY_ρ]σ] × s^α s^βp̂^μp̂^νp̂^ρ. Finally, one has ∇_μ(ξ_(αξ_β))=2∇_μξ_(αξ_β). In Ricci-flat spacetimes, Eq. (157) of <cit.> boils down to the identity ∇_αξ_β=-1/4R^*_αβγδY^γδ. All in all, we obtain the relations ∇_μ(ξ_(αξ_β)) =1/2ξ_(αR^*_β)μγδY^γδ, ∇_μ(ξ^2)=1/2ξ_λ R*λμγδY^γδ. 
This yields ∇_μ N^(3)_ανβρ s^α s^βp̂^μp̂^νp̂^ρ =[1/2∇_μ N^(4)_ανβρ-∇_μ(ξ_[αg_ν][βξ_ρ])]s^α s^βp̂^μp̂^νp̂^ρ =[1/2∇_μ N^(4)_ανβρ+1/2ξ_[αg_ν][ρR^*_β]μγδY^γδ]s^α s^βp̂^μp̂^νp̂^ρ =[1/2∇_μ N^(4)_ανβρ+1/4ξ_[αg_μ]νR^*_βργδY^γδ]s^α s^βp̂^μp̂^νp̂^ρ as well as ∇_μ N^(4)_ανβρ s^α s^βp̂^μp̂^νp̂^ρ = -g_α[βg_ρ]ν∇_μ(ξ^2) s^α s^βp̂^μp̂^νp̂^ρ =-1/2g_α[βg_μ]νξ_λ R*λργδY^γδs^α s^βp̂^μp̂^νp̂^ρ. Let us now use the preceding relations to write down the desired contribution in the covariant building blocks language. We will demonstrate the procedure on the A=1 term, which turns out to be the most involved to compute. The computations of the others contributions will not be detailed in this text. One has (K_λ[αRλν]βρ;μ-1/2K∇_μ R_ανβρ)s^α s^βp̂^μp̂^νp̂^ρ=15M/4(ω^(0,2)_AB^2+ω^(2,4)_AB^2) +3M/4(𝒮^2𝒫^2+𝒜^2)(ω^(0,2)_A+ω^(2,4)_A) -3M/2(𝒜E+𝒫^2E_s)(ω^(0,2)_B+ω^(2,4)_B) -3M/4(𝒫^2I-𝒮^2 H+10B^2+2𝒜G)ω^(1,3)_A -3M/4(𝒫^2D-EG+E_sH+𝒜F)ω^(1,3)_B +3M/2(𝒜E+𝒫^2E_s)ω^(1,3)_B̅-3M/2(A̅B)α^(1,3)_B. Using the various identities derived above in Eq. (<ref>) allows to write ∇̂N^(1) =15M/4(ω^(0,2)_AB^2+ω^(2,4)_AB^2)-3M/2(ω^(0,3)_B^2α^(0,-1)_A+ω^(0,3)_ABα^(0,-1)_B) +M/4(𝒜^2+𝒫^2𝒮^2)(ω^(0,2)_A+2ω^(1,3)_A̅+3ω^(2,4)_A) -M/2(𝒜E+𝒫^2E_s)(5ω^(0,2)_B-2ω^(1,3)_B̅+3ω^(2,4)_B) -3M/4(10B^2+2𝒜G+𝒫^2I-𝒮^2H)ω^(1,3)_A -3M(𝒜F-EG+E_sH+𝒫^2D)ω^(1,3)_B-3M/2ω^(0,0)_A̅Bα^(1,3)_B. Making use of the relations (<ref>) allows to express DN^(1) in terms of linearly independent contributions. When the dust settles down, we are left with ∇̂N^(1) =M/4(𝒜^2+𝒫^2𝒮^2)(ω^(0,2)_A+2ω^(1,3)_A̅+3ω^(2,4)_A) -M/2(𝒜E+𝒫^2E_s)(5ω^(0,2)_B-2ω^(1,3)_B̅+3ω^(2,4)_B) -9M/2B^2ω^(1,3)_A -3M/2(𝒜F-EG+E_sH+𝒫^2D)ω^(1,3)_B +9M/4ω^(0,2)_AB^2+15M/4ω^(2,4)_AB^2. a empty tocchapterIndex of the main equations empty tocchapterBibliography unsrt empty gobble a dukeblue a < g r a p h i c s >
http://arxiv.org/abs/2307.01137v1
20230703161950
Exploring the In-context Learning Ability of Large Language Model for Biomedical Concept Linking
[ "Qinyong Wang", "Zhenxiang Gao", "Rong Xu" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Exploring the In-context Learning Ability of Large Language Model for Biomedical Concept Linking Qinyong Wang, Zhenxiang Gao, Rong Xu August 1, 2023 ============================================================================================ The biomedical field relies heavily on concept linking in various areas such as literature mining, graph alignment, information retrieval, question-answering, data, and knowledge integration. Although large language models (LLMs) have made significant strides in many natural language processing tasks, their effectiveness in biomedical concept mapping is yet to be fully explored. This research investigates a method that exploits the in-context learning (ICL) capabilities of large models for biomedical concept linking. The proposed approach adopts a two-stage retrieve-and-rank framework. Initially, biomedical concepts are embedded using language models, and embedding similarity is then used to retrieve the top candidates. These candidates' contextual information is subsequently incorporated into the prompt and processed by a large language model to re-rank the concepts. This approach achieved an accuracy of 90.1% in BC5CDR disease entity normalization and 94.7% in chemical entity normalization, exhibiting competitive performance relative to supervised learning methods. Further, it showed a significant improvement, with an over 20-point absolute increase in F1 score on an oncology matching dataset. Extensive qualitative assessments were conducted, and the benefits and potential shortcomings of using large language models within the biomedical domain are discussed. § INTRODUCTION Biomedical concept linking is a critical procedure in knowledge integration<cit.> and information retrieval<cit.>. This process identifies biomedical concepts within text and associates these concepts with matching entities in a biomedical knowledge base. It essentially forms a bridge between text and structured knowledge databases, facilitating the efficient extraction and utilization of intricate biomedical information. Concept linking is integral to diverse applications, such as literature mining, graph alignment<cit.>, and information retrieval within the biomedical domain. Moreover, the efficacy of concept linking directly influences the performance of graph-based algorithms, search algorithms, and question-answering systems. While significant progress has been made in biomedical concept linking, major challenges remain, notably the limited capacity of existing methods to handle the ambiguity and complexity characteristic of biomedical concepts<cit.>. Supervised training or fine-tuning methods typically require extensive labeled data, which is labor-intensive and expensive to compile <cit.>. The reliance on labeled data also introduces a data expiration problem, given the evolving nature of biomedical knowledge. For instance, biomedical entity linking datasets often utilize an ontology system for labeling text mentions; however, these systems change over time. The Medical Subject Headings (MeSH) system housed 28,000 concepts in 2016, but by 2023 the count had increased to 32,000 concepts<cit.>. Consequently, if we were to employ label-dependent supervised methods, models might need recurrent retraining to stay up to date. Additionally, these techniques are often task-specific, lacking the adaptability to handle different datasets or tasks without comprehensive retraining.
There is a need for a more generalized framework for biomedical concept linking that can navigate the complex landscape of biomedical text data effectively. The ideal system should be adaptable, capable of processing various datasets and tasks without the need for task-specific training data. It should also possess the robustness to handle the ambiguity and complexity of biomedical concepts. Biomedical concept linking can be described as the process of identifying concepts within a given text and associating them with corresponding concepts in a biomedical knowledge base. This task encompasses a range of specific tasks, including biomedical entity linking<cit.>, disease name normalization<cit.>, and ontology matching <cit.>. Biomedical entity linking or disease name normalization typically involves mapping unstructured text to an ontology system. On the other hand, ontology matching refers to identifying identical concepts across two distinct ontology systems and establishing a link between them. Notably, there are differences between tasks like entity normalization and ontology matching. While entity normalization operates on free text, ontology matching deals with more structured text and the related contextual information of the concepts. Furthermore, entity normalization typically encompasses a smaller percentage of an ontology's concepts, while ontology matching often covers a larger and more diverse array of concepts. Biomedical concept linking extends beyond tasks such as entity normalization<cit.> and ontology matching<cit.>. For instance, a concept linking method should be capable of matching two heterogeneous biomedical graphs at the concept level. This task presents a significant challenge for supervised training methods, as it's nearly impossible to generate labels for arbitrary heterogeneous graphs. It's important to note that concept linking does not include entity recognition<cit.>, a common procedure in text-mining tasks. Entity recognition often precedes concept linking and may not even feature in some tasks. Thus, concept linking is a distinct, wider task, facilitating effective navigation of intricate biomedical information. Traditional BERT-based methods may struggle to adapt to the diverse array of datasets and tasks associated with this field<cit.>. LLMs <cit.>, however, have recently demonstrated remarkable proficiency in the biomedical domain <cit.>. This paper aims to investigate a generalized and effective framework for biomedical concept linking, leveraging the ICL capabilities of LLMs. In-context learning forms the backbone of our proposed methodology. It operates on the principle of learning by analogy, offering a unique method for LLMs to make informed predictions<cit.>. This novel paradigm offers several compelling benefits. As the demonstration is rendered in natural language, it provides an interpretive interface for interaction with LLMs, making it significantly simpler to incorporate biomedical knowledge into LLMs by altering the demonstration and templates. Compared to supervised training, in-context learning is a training-free learning framework. This drastically cuts down the developing time required to adapt the model to new tasks. As such, it can be readily applied to real-world tasks, broadening its applicability and utility. In this research, we propose a classic yet effective, generalized methodology for biomedical concept linking that leverages the ICL capabilities of LLMs. 
Our method involves a two-stage retrieve-and-rank system: the first stage embeds biomedical concepts using language models and uses these embeddings to retrieve top candidate concepts. In the second stage, the contextual information of these candidates is incorporated into the prompt, and an LLM ranks these concepts. Our proposed method presents several advantages. Firstly, it is adaptable, requiring no task-specific training, and can be applied to different datasets and tasks. Secondly, it demonstrates competitive performance with state-of-the-art supervised learning methods<cit.>, as evidenced by our results on entity normalization datasets and ontology matching datasets. Lastly, by leveraging the ICL abilities of large models, it effectively navigates the inherent ambiguity and complexity of biomedical concepts, significantly improving the efficacy of concept linking in the biomedical domain. The objective of this paper is to delve into a more comprehensive framework utilizing LLMs for broader and more challenging tasks in the biomedical field. Our contributions are manifold: we identify a straightforward yet effective approach for tackling the complexity inherent in biomedical linking problems, and we also carry out an extensive application test to scrutinize various embedding methods and different language models. We further analyze the functioning of the large model and identify circumstances leading to its failure. A qualitative test is conducted to provide nuanced insights into the model's operation. All these explorations collectively guide the development of the next generation of accurate and trustworthy artificial intelligence solutions in the biomedical domain. § BACKGROUND Entity Linking and Entity Normalization Entity linking refers to the task of mapping mentions in free text to unique concepts in ontologies <cit.>. This can take the form of linking a certain drug to its specific drug ID or associating a disease with its corresponding disease/symptom ID. The Unified Medical Language System (UMLS) <cit.>, a representative ontology for biomedicine, is a compendium of biomedical vocabularies containing over 4 million entities. UMLS has been extensively used as a knowledge base to link biomedical entities in text to their corresponding concepts. These tasks often involve mapping free-text terms in the biomedical literature to UMLS Concept Unique Identifiers (CUIs). A notable tool in this field is the MetaMap system <cit.>. MetaMap utilizes natural language processing techniques to map biomedical text to concepts in the UMLS. Although robust, its rule-based method can struggle with semantic ambiguity, demanding supplementary solutions such as advanced deep learning models to improve its accuracy and adaptability. Traditional methods often rely on rule-based approaches or string matching<cit.>, which unfortunately prove ill-suited for concepts carrying contextual meanings and for disease subtypes. A popular approach in recent years has been to utilize BERT-based methods<cit.>, which are primarily supervised. However, these techniques encounter significant challenges due to the scarcity of annotated examples, particularly given the vast number of entities involved. Recently, the development of self-supervised methods has introduced a fresh perspective<cit.>. These methods, requiring no supervised samples, have demonstrated results comparable to supervised training methods.
Despite these advances, such techniques cannot be applied universally across an array of tasks. Additionally, the process of constructing a dataset and training a model is time-consuming. By leveraging the ICL capabilities of LLMs, the difficulties associated with constructing training corpora and executing training are significantly alleviated. By adjusting the prompts in natural language, this framework can be easily adapted to various tasks, demonstrating its potential for efficiency and versatility. Ontology Matching Ontology matching, a key area of research, is the process of identifying corresponding entities or concepts across diverse ontology systems<cit.>. This procedure is fundamental for integrating heterogeneous databases<cit.> and enhancing interoperability in the biomedical sector. Historically, ontology matching approaches primarily relied on exploiting lexical, structural, and semantic similarities<cit.>. The advent of deep learning has ushered in an era in which many current studies investigate the use of Transformers, specifically BERT<cit.>, for ontology matching. However, these BERT-based methods often grapple with the challenges previously outlined. Moreover, their performance is less than ideal when it comes to biomedical ontologies<cit.>. Biomedical ontologies often encompass a multitude of concepts that demand expert-level understanding, such as the ability to distinguish between two rare diseases that may appear similar but are caused by distinct genes. BERT-based methods frequently struggle to address these scenarios. Yet, identifying the relationship between genes and diseases is paramount to advancements in biomedical discoveries. By capitalizing on the in-context learning capabilities of large language models, we are able to more accurately differentiate between complex biomedical concepts and enhance the efficacy of ontology matching in the biomedical sector. Text Embedding The quality of text embedding plays a critical role in enhancing the recall rate of concept linking. Many text embedding methods have been proposed; lately, transformer-based models have been gaining traction due to their ability to generate context-aware embeddings. Moreover, recent studies have made significant strides in training BERT models on biomedical text<cit.> and in using innovative training techniques such as contrastive learning<cit.>. Domain-specific language models, i.e., those specifically trained on biomedical texts, have demonstrated superior performance compared to standard BERT models<cit.>. This performance boost highlights the importance of domain-specific knowledge in enhancing the accuracy of embeddings. Large Language Models LLMs have demonstrated remarkable capabilities in natural language understanding and generation. These models are trained on massive amounts of text data and can generate coherent and contextually appropriate responses <cit.>. However, they often lack domain-specific knowledge and struggle with specialized terminology, which is a crucial aspect of biomedical concept linking. LLMs are built using deep learning architectures such as Transformers<cit.> and have demonstrated remarkable proficiency in understanding and generating human-like text. Two of the most well-known LLMs are GPT-3<cit.> and GPT-4<cit.>. GPT-3, with 175 billion parameters, has displayed impressive results in a wide range of NLP tasks.
The most popular open-source LLM is LLaMa<cit.> which showed comparatively good performers with GPT 3.5, and there's a wide range of domain-specific fine-tuned llama models<cit.> from 7 billion to 65 billion parameters. However, the usage of LLMs also presents challenges. One such issue is the "hallucination" problem <cit.>, where the model generates outputs that seem plausible but are factually incorrect. Furthermore, due to their size and complexity, these models require substantial computational resources for training and deployment. Despite these challenges, LLMs have ushered in a new era in NLP and are continuously being explored for their potential in a wide range of biomedical concept linking. In-context Learning Generally, in-context learning necessitates a few examples to create a demonstration context<cit.>. These examples are typically expressed using natural language templates. Following this, a query question is concatenated with the demonstration context to generate a prompt. This prompt is then processed by the language model to predict an outcome. The definition of in-context learning is continuously evolving, in our proposed method, we not only include the conventional approach of using knowledge examples in the prompt, but we also supply more relevant information related to a given biomedical concept. By doing so, we equip the LLM with the necessary contextual information, thereby LLM learns from extra information rather than just analogy. Unlike supervised learning which necessitates a training stage involving backward gradients<cit.> for model parameter updates, ICL eschews parameter updates and makes predictions directly using pre-trained LLMs. The expectation is that the model will discern patterns hidden within the demonstration and make appropriate predictions accordingly<cit.>. § METHODOLOGY Formally, we set the objective of concept linking as the development of an algorithm : (e_source, C_source) → (e_target, C_target). This algorithm maps a source entity e_source within the context C_source to a unique target entity e_target with context C_target. The source entity could be derived from free text, a graph, or a source ontology system in ontology matching, and it's worth noting that C_source may sometimes be absent. Generally, we require the context of the target concept to be provided. A concept is more well-defined when its associated information is supplied. In the process of developing a zero-training algorithm, we operate under the assumption that no access to gold-mention examples or labels is available. Our assumption extends to the availability of a target domain ontology O_target and an unlabeled text corpus T, or a source ontology O_source. Specifically, we necessitate a concept list that provides a unique identifier, a canonical name, and a description for each concept. Our framework also has the capacity to incorporate additional knowledge present in the ontology. §.§ Text embedding The first stage in our methodology involves transforming textual data into semantic representations. The quality of these embeddings is crucial as it significantly affects downstream tasks<cit.>. To meet our objective of exploring a training-free framework, we opt for three different embedding models. Our first model of choice is SapBERT<cit.>, a Self-Aligned Pretrained BERT model specifically designed for the biomedical domain. Serving as a representation of the BERT<cit.> family of models, SapBERT has superior performance in biomedical tasks. 
Next, we leverage the LLaMa model's embeddings <cit.>. LLaMa symbolizes an open-source option for Large Language Model embeddings, boasting high applicability across diverse language tasks. Finally, we utilize GPT-3 embeddings, specifically "text-embedding-ada-002", representing one of the most powerful and proprietary embedding methodologies currently available. Considering a target ontology O_target and an embedding model f_emb(), we generate text embeddings for each entity/concept e_target. This process involves generating embeddings for both the canonical concept name string and a combined version that includes the name string and its context. The purpose of creating an 'entity-name-only' representation is to recall entities that can be easily matched with the string, serving as an efficient approach for exact or simple matches. On the other hand, generating ‘entity-name-context' embeddings targets a more complex objective. Despite entities not bearing similarity in appearance, they may be describing the same concept, and this intricate relation can be captured through context-inclusive embeddings. This dual approach caters to both explicit matches and the nuanced equivalences in the realm of biomedical concepts. As we will illustrate in the appendix, embedding with context plays a significant role in the success of our approach. §.§ Candidate generation Following the generation of contextual embeddings, we persist all embeddings from the ontology into a vector database, alternatively referred to as a long-term memory store <cit.>. This enables efficient computation of cosine similarity between any given query text embedding and all the ontology embeddings. This stored ontology is referred to as `mem`. When a query entity (e_source, C_source) is presented, we employ the same embedding process. The top k candidates are then retrieved based on the cosine similarity of their contextual embeddings. The process of memory creation and candidate generation can be outlined in Algorithm <ref>. §.§ Rank with LLM LLMs possess text comprehension capabilities and a degree of logical reasoning ability<cit.>. Consequently, our approach revolves around providing comprehensive contextual information related to the biomedical concept linking task, enabling the model to execute an extensive reading task and subsequently select the most appropriate answer from the given options. When constructing the prompt, we initially define the task and inform the model that our aim is to identify analogous concepts. We then present the candidate concepts retrieved from long-term memory. These candidates are options within the prompt. Further, we fetch the descriptions of these candidates from the ontology and associated text of the source entity. Ultimately, the prompt asks the model to select the concept that aligns best with the options; if none are suitable, the model is to select the 'None' option. The configuration of the prompt is adaptable, accommodating the unique requirements of different tasks. For instance, in entity linking tasks, we may also include related text. The BC5CDR dataset<cit.>, which extracts named entities from PubMed abstracts, would necessitate the addition of abstract tags within the prompt. Similarly, for tasks like graph alignment, we could incorporate neighborhood information into the prompt. The overall workflow of the ontology matching task is illustrated in figure <ref>. 
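To make the two-stage pipeline concrete, the sketch below illustrates the retrieve-and-rank flow described above: ontology concepts are embedded together with their descriptions, the top-k candidates for a query are retrieved by cosine similarity, and a ranking prompt with a "None" option is assembled for the LLM. This is a minimal illustration under stated assumptions, not the authors' exact implementation; the embedding backend, function names, and prompt wording are illustrative.

```python
import numpy as np

def embed(texts, model):
    # Placeholder for any sentence encoder (e.g., SapBERT pooling or an
    # OpenAI embedding endpoint); assumed to return an (n, d) float array.
    return np.asarray(model(texts), dtype=np.float32)

def build_memory(ontology, model):
    """ontology: list of dicts with 'id', 'name', 'description'.
    Stores L2-normalized 'name: description' embeddings as the memory."""
    texts = [f"{c['name']}: {c['description']}" for c in ontology]
    vecs = embed(texts, model)
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs

def retrieve(query_text, memory, ontology, model, k=10):
    q = embed([query_text], model)[0]
    q /= np.linalg.norm(q)
    sims = memory @ q                      # cosine similarity (rows are normalized)
    top = np.argsort(-sims)[:k]
    return [ontology[i] for i in top]

def build_rank_prompt(mention, context, candidates):
    """Assemble the re-ranking prompt; the wording is a hypothetical template."""
    lines = [
        "Task: pick the candidate that denotes the same concept as the query.",
        f"Query concept: {mention}",
        f"Query context: {context}",
        "Candidates:",
    ]
    for i, c in enumerate(candidates):
        lines.append(f"{i}: {c['name']} - {c['description']}")
    lines.append(f"{len(candidates)}: None of the above.")
    lines.append("Answer with the option number and a brief justification.")
    return "\n".join(lines)

# Usage sketch (llm_call is any chat-completion wrapper, e.g., GPT-3.5/4 or LLaMa):
# memory = build_memory(ontology, model)
# cands = retrieve("neonatal diabetes mellitus", memory, ontology, model, k=10)
# answer = llm_call(build_rank_prompt("neonatal diabetes mellitus", abstract_text, cands))
```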
§ EXPERIMENTS §.§ Dataset Choosing an appropriate dataset to benchmark our proposed method and probe the capabilities of LLMs poses a few challenges. Firstly, as LLMs are trained on a vast amount of text data including published papers and webpages, data leakage becomes an inevitable concern for many existing datasets. Estimating the impact of this leakage on performance is not straightforward<cit.>. Moreover, biomedical NLP datasets can be sensitive, with some prohibiting any form of redistribution. This becomes problematic when using the GPT API, as the dataset is exposed to OpenAI, potentially leading to indirect redistribution via the LLM. Another obstacle is the slow inference speed of LLMs. For datasets with over 100,000 samples, the inference could take more than ten days, and with multiple models to benchmark and various ablation tests to conduct, it necessitates a smaller dataset. Consequently, we chose the BC5CDR dataset<cit.> for benchmarking. This well-known dataset in biomedical entity normalization requires mapping named entities in PubMed<cit.> abstracts to unique MeSH IDs. It encompasses two types of entities - chemicals, and diseases. By using this dataset, we can readily compare our proposed framework with previous supervised or self-supervised training methods<cit.>. Additionally, the BC5CDR dataset is relatively small, with 4797 mentions in the test set, making it manageable given the slow inference speed of LLMs. For the ontology matching task, we selected the Machine Learning-Friendly Biomedical Datasets for Equivalence and Subsumption Ontology Matching <cit.>, published in 2022. Being recently published, unlikely to be accessed by Llama, GPT3, or GPT4. We focused on two challenging sub-tasks from this dataset: OMIM-ORDO and SNOMED-NCIT Neoplas. OMIM (Online Mendelian Inheritance in Man) <cit.> OMIM provides extensive data on genes and genetic phenotypes and their relationships, curated meticulously from biomedical literature. ORDO (Orphanet Rare Disease Ontology)<cit.>encompasses a classification of rare diseases and establishes relationships between diseases, genes, and epidemiological features. Given that many rare diseases are genetic disorders, ORDO and OMIM share considerable overlap. However, linking these rare disease names poses a significant challenge. Such diseases are typically unfamiliar to individuals without specialized medical knowledge, and their mentions in literature are often infrequent. We selected SNOMED-NCIT Neoplas <cit.> ontology matching, as differentiating neoplasm names is challenging for Bert-based methods. The test sets for OMIM-ORDO and SNOMED-NCIT Neoplas contain 3,721 pairs and 3,804 pairs, respectively. §.§.§ Implementation details Unlike many previous studies that utilize somewhat complex systems, such as developing corpora for fine-tuning, and incorporating synonym dictionaries, and abbreviation dictionaries<cit.>, our approach is guided by the principle of simplicity. Our goal is to establish a universal framework for biomedical concept linking, without adding complexity or tailoring our system to a specific task or dataset. The only aspect we modify is the prompt. For instance, in the BC5CDR task, we include the PubMed abstract text and insert the instruction "read the abstract" in the prompt. We perform one-shot learning in the ablation test, similar to the application of Chain of Thoughts<cit.>. However, we do not use the Self-Consistency method<cit.> in this paper. The reason is discussed in the appendix. 
We chose to include GPT-3.5-turbo (ChatGPT) in our study because it is one of the most widely known LLMs and offers the advantages of being both fast and cost-effective. We also decided to incorporate GPT-4, given its exceptional power and performance. Finally, we used a 4-bit quantization of LLama-65b (known as alpaca-lora), which is a highly popular open-source LLM that can be conveniently deployed on a standard desktop computer due to its quantization. For the LLama-65b, we utilized a desktop machine equipped with 64GB of RAM, running llama-cpp for inference. As for GPT-3.5 and GPT-4, we accessed these models through OpenAI's API, conducting our experiments on an ordinary laptop. §.§.§ Evaluation As our proposed framework does not require training, we have no need for training and development sets. We directly evaluate our framework using the test sets from our chosen datasets. As discussed in the dataset section, benchmarking larger models needs to take data leakage into account, as it may impact the quantitative results. For BC5CDR, we employ accuracy as the metric for evaluation, consistent with previous research, enabling comparison. For comparison, we choose KRISSBERT<cit.>, BERN2<cit.>, ScispaCy<cit.>, and QuickUMLS<cit.> as baselines. KRISSBERT is representative of self-supervised training methods that, like ours, do not require training and development datasets. BERN2 is a hybrid system that employs both rule-based and BERT models for named entity normalization and claims superior performance. ScispaCy is a BERT-based method.QuickUMLS is a dictionary-based method. For the ontology matching task, we make comparisons with LSMatch<cit.>, ATMatcher<cit.>, LogMap<cit.>, and BERTMap<cit.>. Among these, BERTMap is the most recent and capable contender. And we use Precision, Recall, and F1 score as our evaluation criteria. We also test the effect of using context information or one-shot learning in the prompt. Our primary objective is to delve into the capabilities of LLMs for biomedical concepts linking in with ICL. Consequently, we also undertake extensive qualitative result analysis. We will assess both false positives and false negatives, providing a more comprehensive evaluation of our model's performance. Furthermore, we ask the model to elucidate the rationale behind its concept linking decisions, a practice known as process correctness<cit.>. In the quest to build accurate and trustworthy AI in the biomedical field, achieving the correct predictions is crucial, but equally important is understanding the explanations underpinning these results. § RESULTS §.§ Quantitative results §.§.§ Main results The results of our framework on the BC5CDR dataset are presented in Table  <ref>. Utilizing GPT-4 as the ranker, our model achieved an accuracy of 90.1% on disease name entity linking and 94.7% on chemical name entity linking. We primarily compared our approach with the self-supervised KRISSBERT method and the more complex hybrid system BERN2. In terms of linking disease names, our model's results surpass KRISSBERT's and are competitive with BERN2's, with just a 3.85% difference. Notably, our results were achieved without the use of any customized rules, abbreviation dictionaries, or synonym dictionaries. For chemical name entity linking, our model's performance is approximately 2% lower than KRISSBERT's and BERN2's. Considering our framework requires no training, these results are quite promising and outperform earlier BERT-based methods like ScispaCy. 
When we switch to GPT-3 as the ranker, the performance remains reasonably good. However, with the Llama model as a ranker, performance drops significantly for chemical entity linking, even falling behind early BERT models. Table  <ref> presents the results of ontology matching between OMIM and ORDO and ontology matching between SNOMED and NCIT Neoplas. Our framework, utilizing the ICL capabilities of GPT-4, achieved over a 20 percentage point increase in F1 score in comparison to the previous best-performing method, BERTmap. When GPT-3.5 was employed as the ranker, there was still a notable increase of approximately 10 percentage points in the F1 score. These outcomes underscore the effectiveness of the ICL provided by LLMs. Meanwhile, Llama's performance just marginally surpassed BERTmap in OMIM-ORDO matching but lagged behind in the SNOMED-NCIT Neoplas matching task. Given that we used a 4-bit quantized version of Llama that was not specifically aligned to biomedical tasks, the results remain promising. However, the disparity in performance between Llama and the GPT models indicates that there's significant room for the enhancement of open-source LLMs in the future. §.§.§ Ablation test Table  <ref> presents the results of various prompting methods for rare disease concept matching in OMIM-ORDO (Disease) using GPT-4. The findings indicate that without the use of a one-shot example and no context information about these rare disease concepts, the F1-score is merely 0.698. This is approximately 19 points lower than the proposed method and shows no significant improvement compared to the previous BERTMap method. These results suggest that even the most powerful language model does not automatically perform well on certain biomedical tasks without additional context. The implementation of both one-shot learning and the addition of related concept information significantly improves performance, demonstrating the value of using the ICL ability of large language models for concept-linking tasks.It's interesting to note that adding OMIM context information provides a larger performance increase than one-shot learning without context. When combining both one-shot learning and OMIM context, the performance increase is marginal compared to just using OMIM context. This suggests that introducing the correct and relevant information for each case is more beneficial than providing an analogous example. Interestingly, it's noteworthy to observe that one-shot learning significantly enhances precision to a greater extent than it does recall. §.§ Qualitative results §.§.§ Abbreviations Abbreviations are a prevalent feature in biomedical text. Previous methods<cit.> we compared employed an abbreviation dictionary to enhance performance. But does an LLM understand biomedical abbreviations? The answer is affirmative, but LLMs tend to struggle with less common abbreviations. For familiar abbreviations such as AD (Alzheimer's Disease) or PD (Parkinson's Disease), LLMs can easily link them to the correct concept when provided with a medical context. However, for less common abbreviations like MR (Mitral Valve Insufficiency) or VT (Tachycardia, Ventricular), LLMs tend to either choose a 'None' option from the list of candidates or erroneously select an incorrect option. Detailed cases could be checked in the appendix. Therefore, we believe it's still valuable to supply LLMs with abbreviation dictionary information to improve accuracy in more infrequent cases. 
§.§.§ Disease subtypes The task of linking disease subtype concepts presents a significant challenge in the biomedical field. These disease subtypes often share a lot of similarities, particularly for rare diseases in ORPHA. Even human experts might need some time to gather information to discern the differences between these rare disease subtypes. GPT-4 is capable of understanding common disease subtypes, such as different types of diabetes, with ease. However, LLama tends to struggle with identifying these common disease subtypes. Regarding rare disease subtypes, GPT-4 can comprehend most of them when provided with appropriate descriptions. In contrast, LLama fails in most cases involving rare diseases subtypes. In situations where GPT-4 failed, rare disease subtypes constitute a significant portion. For instance, "Dentinogenesis imperfecta, shields iia 3" in OMIM corresponds to "Dentinogenesis imperfecta type 3". However, GPT chose "Dentinogenesis imperfecta type 2". Generally, GPT-4 can provide the correct answer for diseases labeled with "type n". However, it does occasionally falter in a few of these cases. §.§.§ Process Correctness The rationale behind an LLM ranking a candidate first is critical. To construct precise and trustworthy AI in the biomedical domain, we aim for both the prediction and the process to be accurate. We noticed from the LLama results that there are instances where the process was incorrect, yet the final answer was right. For example, in case 9 from the appendix, LLama provided the correct prediction, yet the reasoning appeared to be based on shared keywords between disease concept names. This is not ideal, especially for rare disease concept linking, where many concepts share keywords yet refer to different diseases. GPT-4 exhibits a more accurate and consistent reasoning process than LLaMa, which sometimes even outputs code (as seen in case 8 in the appendix), indicating that the LLama model we utilized may not be well-aligned for this task. Although we are not medical experts and cannot offer an accurate assessment of LLM's process correctness, the process correctness of GPT-4 is generally satisfactory when given the correct concept description. Most of the time, the process is associated with the context we provided, further emphasizing the importance of using LLM's ICL. By qualitatively evaluating the process correctness of LLM, we enhance the interpretability of using large models in concept linking tasks. § LIMITATIONS AND DISCUSSION While our framework holds promise, it also comes with notable limitations. Primarily, the inference speed of LLMs is exceedingly slow, making the process expensive and long. For instance, our experimental setup involving GPT4 inference on 3700 OMIM-ORDO pairs costs approximately $150 USD. When employing locally deployable LLMs, such as LLama 13B, the inference speed is roughly 103 ms/token, processing only a few words per second. Larger models like Galantica and LLama 65B are even slower, handling only about one word per second with cpu. In light of these constraints, future research could explore fine-tuning (without supervision) a LLM specifically designed for this task with low resources<cit.>. Considering the rigorous hardware requirements, our framework's accessibility is rather constrained. This is further exacerbated by GPT's closed-source nature, leading to diminished transparency. 
Moreover, even when using open-source LLaMa for inference, powerful GPUs or large amounts of RAM are required - resources that most researchers and potential users in the biomedical domain do not have readily available. Training and compressing a quantized LLM<cit.> for the biomedical domain is also a beneficial direction for future work. Furthermore, our framework sometimes exhibits unexpected failures. For instance, when two concepts share the same name, our framework may fail to provide the correct answer. This could be due to the context embedding - if the accurately labeled name is contextually farther away than other candidates, the correct option may not appear among the retrieved candidates. Moreover, the framework also exhibits frequent shortcomings in handling abbreviations. Both of these issues could potentially be mitigated by utilizing dictionaries, suggesting that a hybrid system might be an avenue worth exploring for future concept linking tasks. § CONCLUSION In conclusion, this research explores the use of the in-context learning capabilities of large language models for biomedical concept linking. Our proposed two-stage framework effectively retrieves and ranks biomedical concepts, achieving competitive results without needing any training. § APPENDIX §.§ Implementation details Techniques such as Chain of Thoughts and Self-Consistency are frequently employed in prompt engineering. We perform one-shot learning in the ablation test, similar to the application of Chain of Thoughts. However, we do not use the Self-Consistency method in this paper. Here, we would like to discuss the necessity of these techniques. The primary notion of Chain of Thoughts is that by presenting an analogy and requiring the LLM to recount the original process, more computation is used during inference, leading to better performance. Self-Consistency, on the other hand, requires even more computation, as it prompts the model to reason in various ways and checks that the results are internally consistent. In the context of our ICL definition, we already include substantial text information in the prompt (a full example can be found in the appendix), which is lengthy and necessitates significant LLM inference time. Furthermore, we consider concept linking a fundamental, high-usage application. Implementing Self-Consistency would at least triple the inference cost, and given the current high costs and slow speed of inference, and given that Self-Consistency is designed for LLM reasoning tasks, we believe it is not advantageous to use Self-Consistency for such a basic task. §.§ Ablation test Table <ref> demonstrates the performance of different embedding models in the candidate generation (retrieval) stage of OMIM-ORDO ontology matching. For all three embedding models, incorporating concept descriptions from the ontologies in the embedding process generally improves performance. It is worth noting that SapBERT's hits@10 is less than 3% below GPT-3's, which suggests that, being more cost-efficient and faster, BERT-based embedding methods still have relevance and utility in the future. On the other hand, LLaMa embeddings were significantly less effective, which might be due to the fact that the model was not specifically fine-tuned for use in embedding. SapBERT employs contrastive learning for its embedding model, and it is plausible that OpenAI employs a similar method for their models.
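For reference, the hits@10 figures discussed in this ablation can be computed with a short helper like the one below. The variable names and data layout are illustrative assumptions: a ranked candidate-ID list per query and a single gold concept ID per query.

```python
def hits_at_k(retrieved, gold, k=10):
    """retrieved: dict query_id -> ranked list of candidate concept IDs.
    gold: dict query_id -> correct concept ID.
    Returns the fraction of queries whose correct ID is in the top-k candidates."""
    hit = sum(1 for q, ids in retrieved.items() if gold[q] in ids[:k])
    return hit / max(len(retrieved), 1)

# Example: hits_at_k(retrieved, gold, k=10) -> a value between 0 and 1
```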
Future research could explore using open-source LLMs for embeddings, as this could potentially yield better performance than Bert-based models. An experiment involving OMIM context information, ORDO information, and a one-shot example was not conducted, as the resulting prompt would contain too many tokens, making the experiment both time-consuming and costly. Given the minimal improvement observed when adding one-shot learning to the OMIM context, it is likely that any further gains from this additional experiment would be minor. §.§ error cases §.§ Reasoning process cases Case1: bleeding disorder, platelet-type, 8 || Bleeding disorder due to P2Y12 defect GPT4: The same concept to "bleeding disorder, platelet-type, 8" is option 0: Bleeding disorder due to P2Y12 defect. This is because both terms refer to a bleeding disorder caused by a defect in the platelet P2Y12 receptor, which results in impaired platelet responses to adenosine diphosphate and leads to mild to moderate bleeding diathesis. LLaMa: Bleeding disorder due to P2Y12 defect (option 0) and bleeding diathesis due to integrin alpha2-beta1 deficiency (option 1) are similar concepts because they both refer to platelet function disorders that result in increased risk of bleeding. However, the underlying causes differ. In the case of P2Y12 defect, the platelets do not respond properly to the ADP released from damaged blood vessels. This leads to a failure of platelet aggregation and impaired hemostasis. In contrast, integrin alpha2-beta1 deficiency refers to a genetic disorder in which there is a defect in the expression or function of the alpha2beta1 integrin receptor on the surface of platelets. This leads to an increased risk of bleeding due to impaired platelet aggregation and adhesion. Case2: cortical dysplasia, complex, with other brain malformations 7 || Polymicrogyria due to TUBB2B mutation GPT4: The same concept to "cortical dysplasia, complex, with other brain malformations 7" is 0: Polymicrogyria due to TUBB2B mutation. This is because it is characterized by generalized or focal dysgyria (also named polymicrogryia-like cortical dysplasia) or alternatively by microlissencephaly with dysmorphic basal ganglia and dysgenesis of the corpus callosum, which are similar features to those described in CDCBM7. Additionally, the clinical manifestations, such as microcephaly, seizures, hypotonia, developmental delay, severe psychomotor delay, ataxia, spastic diplegia or tetraplegia, and ocular abnormalities, are also similar to those found in CDCBM7. LLaMa: The "cortical dysplasia, complex, with other brain malformations 7" is one of the several phenotypes of cortical dysplasias. It describes a specific set of anatomical abnormalities in the cortex and associated brain structures such as polymicrogyria, symmetric or asymmetric. The options that you listed are different conditions that may cause similar neuroanatomical patterns but do not specifically refer to the cortical dysplasia, complex, with other brain malformations 7 phenotype. Polymicrogyria due to TUBB2B mutation is a genetic condition caused by a mutation in the TUBB2B gene that leads to the formation of excessive folds on the surface of the brain (polymicrogyria). Megalencephaly-capillary malformation-polymicrogyria syndrome is a rare disorder characterized by an abnormally large brain (megalencephaly), capillary malformations, and polymicrogyria. 
Case3: ceroid lipofuscinosis, neuronal, 8, northern epilepsy variant || Progressive epilepsy-intellectual disability syndrome, Finnish type GPT4: The same concept to "ceroid lipofuscinosis, neuronal, 8, northern epilepsy variant" is option 1: Progressive epilepsy-intellectual disability syndrome, Finnish type. This is because it is also known as Northern epilepsy, which is a subtype of neuronal ceroid lipofuscinosis (NCL) characterized by seizures, progressive decline of intellectual capacities, and variable loss of vision. LLaMa: We are looking for a diagnosis of "ceroid lipofuscinosis, neuronal, 8, northern epilepsy variant." It is a rare and progressive genetic disorder characterized by worsening seizures, intellectual disability, and vision loss. The condition is caused by mutations in the CLN8 gene, which provides instructions for making a protein called Hydrolase CNPase that is essential for the breakdown of a specific type of fat (lipofuscin) in the brain and other tissues. When the protein is defective or absent, there is an accumulation of lipofuscin in the brain's neurons, which leads to their death and impairs normal brain function. One of the conditions that shares some similar features with "ceroid lipofuscinosis, neuronal, 8, northern epilepsy variant" is "progressive epilepsy-intellectual disability syndrome, Finnish type," which is also a rare and progressive genetic disorder characterized by worsening seizures, intellectual disability, and vision loss. However, this condition is caused by mutations in Progressive epilepsy-intellectual disability syndrome. Case4: neurodevelopmental disorder with microcephaly and gray sclerae || Severe growth deficiency-strabismus-extensive dermal melanocytosis-intellectual disability syndrome GPT4: The same concept to "neurodevelopmental disorder with microcephaly and gray sclerae" is option 2: Severe growth deficiency-strabismus-extensive dermal melanocytosis-intellectual disability syndrome. This is because it also involves microcephaly, intellectual disability, and gray sclerae (blue-gray sclerae) as part of its clinical features. LLaMa: This question can be answered by a medical professional, such as a doctor or nurse practitioner. It cannot be answered by an AI language model like me, Open Assistant. If you have any concerns about your health, it's always best to consult with a medical professional. Case5: foveal hypoplasia 2 || Foveal hypoplasia-optic nerve decussation defect-anterior segment dysgenesis syndrome GPT4: The same concept to "foveal hypoplasia 2" is option 0: Foveal hypoplasia-optic nerve decussation defect-anterior segment dysgenesis syndrome. This is because both descriptions mention foveal hypoplasia, optic nerve misrouting or decussation defects, and anterior segment dysgenesis, all in the absence of albinism. Additionally, both conditions present with congenital nystagmus, decreased visual acuity, and refractive errors. truth: Foveal hypoplasia-optic nerve decussation defect-anterior segment dysgenesis syndrome LLaMa: It depends on your perspective, but I would say that all of these options could be considered the same concept as "foveal hypoplasia 2" since they all have some common symptoms and characteristics. For example: Option 0 mentions "anterior segment dysgenesis," which is a type of eye abnormality that can affect the development of the cornea, iris, or lens. This is similar to "anterior segment dysgenesis" mentioned in your original description of "foveal hypoplasia 2." 
Option 1 mentions "optic nerve hypoplasia/aplasia," which means that the optic nerve is underdeveloped or missing entirely. This is similar to the "optic nerve decussation defects" mentioned in your original description of "foveal hypoplasia 2." Option 2 mentions "microphthalmia-retinitis pigmentosa-foveoschisis-optic disc drusen syndrome," which is a rare genetic disorder that can cause small eyes (microphthalmia), a breakdown of the retina (retinitis p Foveal hypoplasia-optic nerve decussation defect-anterior segment dysgenesis syndrome (cut off becasue of max output limit) Case6: hypermanganesemia with dystonia 2 || Dystonia-parkinsonism-hypermanganesemia syndrome GPT4: The same concept to "hypermanganesemia with dystonia 2" is option 2: Cirrhosis-dystonia-polycythemia-hypermanganesemia syndrome. This is because it is also a rare disorder of manganese transport characterized by the presence of dystonia and hypermanganesemia, along with other symptoms such as liver cirrhosis and polycythemia. The brain imaging findings in this syndrome are also consistent with the deposition of manganese in the basal ganglia and other regions, similar to "hypermanganesemia with dystonia 2". LLama: I'm sorry, but none of those options are exactly the same as "hypermanganesemia with dystonia 2." However, one of them does come close. Option 0, Dystonia-parkinsonism-hypermanganesemia syndrome, also involves hypermanganesemia and dystonia, although it may have other symptoms as well. The main difference between "hypermanganesemia with dystonia 2" and Dystonia-parkinsonism-hypermanganesemia syndrome is that the former seems to be a specific type of the latter. In other words, all people with "hypermanganesemia with dystonia 2" would have Dystonia-parkinsonism-hypermanganesemia syndrome, but not everyone with Dystonia-parkinsonism-hypermanganesemia syndrome would have "hypermanganesemia with dystonia 2." I hope that helps! Let me know if you have any other questions. Case7: epiphyseal dysplasia, multiple, with miniepiphyses || Multiple epiphyseal dysplasia, with miniepiphyses GPT4: The same concept to "epiphyseal dysplasia, multiple, with miniepiphyses" is option 0: Multiple epiphyseal dysplasia, with miniepiphyses. This is because it describes the same disorder characterized by strikingly small secondary ossification centers (mini-epiphyses) in all or only some joints, resulting in severe bone dysplasia of the proximal femoral heads, along with short stature, increased lumbar lordosis, genua vara, and generalized joint laxity. LLaMa: The correct answer is option number one: Multiple epiphyseal dysplasia, with miniepiphyses. Multiple epiphyseal dysplasia (MED) is a group of rare skeletal disorders characterized by abnormalities in the growth and development of the bones and joints. MED can be divided into two main types based on their genetic cause: autosomal dominant MED, which is caused by mutations in the COMP gene, and recessive MED, which is caused by mutations in one of several other genes. One of the distinguishing features of MED is the presence of miniepiphyses, which are small bony projections that develop at the ends of long bones instead of the normal single growth plate (epiphysis). These miniepiphyses can cause abnormalities in the shape and alignment of the bones, leading to joint pain, limb deformity, and early onset of osteoarthritis. 
The term "multiple epiphyseal dysplasia, with miniepiphyses" is used specifically to describe a form of Multiple epiphyseal dysplasia, with miniepiphyses(cut off becasue of max output limit) Case8: myopathy, myofibrillar, 3 || Distal myotilinopathy GPT4: The same concept to "myopathy, myofibrillar, 3" is 1: Myofibrillar myopathy. This is because both terms refer to a group of skeletal and cardiac muscle disorders characterized by the disintegration of myofibrils and aggregation of degradation products into intracellular inclusions. They also share similar clinical features, such as slowly-progressive muscle weakness that can affect proximal, distal, cardiac, and respiratory muscles. LLaMa: code Case9: 2-methylbutyryl-coa dehydrogenase deficiency || 2-methylbutyryl-CoA dehydrogenase deficiency GPT4: The same concept to "2-methylbutyryl-CoA dehydrogenase deficiency" is option 0: 2-methylbutyryl-CoA dehydrogenase deficiency. This is because it has the same name and refers to the same condition, which is characterized by impaired isoleucine degradation with increased plasma or whole blood C5 acylcarnitine levels and increased urinary excretion of N-methylbutyrylglycine. The condition is usually clinically asymptomatic, but some patients may experience muscular hypotonia, developmental delay, and seizures. LLaMa: option 0.It's because they have one or more related keywords in common. Here are the keywords of 2-methylbutyryl-CoA dehydrogenase deficiency: - "2-methylbutyryl glycinuria" - "short/branched-chain acyl-coa dehydrogenase deficiency" And here are the keywords of Isobutyryl-CoA dehydrogenase deficiency: - "2-methylbutyryl glycinuria" - "short/branched-chain acyl-coa dehydrogenase deficiency" As you can see, they have the same keywords in common. So it's reasonable to say that they are related. The other options don't have these specific keywords, so they are less likely to be related.
http://arxiv.org/abs/2307.01444v2
20230704024255
Static Background Removal in Vehicular Radar: Filtering in Azimuth-Elevation-Doppler Domain
[ "Xiangyu Gao", "Sumit Roy", "Lyutianyang Zhang" ]
eess.SP
[ "eess.SP" ]
Static Background Removal in Vehicular Radar: Filtering in Azimuth-Elevation-Doppler Domain Xiangyu Gao, Student Member, IEEE, Sumit Roy, Fellow, IEEE, Lyutianyang Zhang X. Gao, S. Roy, and L. Zhang are with the Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, 98195, USA. E-mail: [email protected], [email protected], [email protected]. August 1, 2023 ===================================================================================================================================================================================================================================================================================== Anti-collision assistance (as part of the current push towards increasing vehicular autonomy) critically depends on accurate detection/localization of moving targets in vicinity. An effective solution pathway involves removing background or static objects from the scene, so as to enhance the detection/localization of moving targets as a key component for improving overall system performance. In this paper, we present an efficient algorithm for background removal for automotive scenarios, applicable to commodity frequency-modulated continuous wave (FMCW)-based radars. Our proposed algorithm follows a three-step approach: a) preprocessing of back-scattered received radar signal for 4-dimensional (4D) point clouds generation, b) 3-dimensional (3D) radar ego-motion estimation, and c) notch filter-based background removal in the azimuth-elevation-Doppler domain. To begin, we model the received signal corresponding to multiple-input multiple-output (MIMO) FMCW transmissions and develop a signal processing framework for extracting 4D point clouds. Subsequently, we introduce a robust 3D ego-motion estimation algorithm that accurately estimates source radar velocity, accounting for measurement errors and Doppler ambiguity, by processing the point clouds. Additionally, our algorithm leverages the relationship between Doppler velocity, azimuth angle, elevation angle, and radar ego-motion velocity to identify the background clutter spectrum and employ notch filters for its removal. The performance of our algorithm is evaluated using both simulated data and experiments with real-world data. By offering a fast and computationally efficient solution, our approach contributes to a potential pathway for challenges posed by non-homogeneous environments and real-time processing requirements. background removal, MTI, automotive radar, vehicle, FMCW, MIMO, azimuth-elevation-Doppler, filtering. § INTRODUCTION Autonomous driving systems that rely on multi-sensor fusion and scene perception are key to achieving future L4 and L5-level vehicular automation <cit.>. However, image understanding in sophisticated environments such as city roads with dense multi-purpose traffic, remains a significant current bottleneck. In these scenarios which contain both moving and static objects/backgrounds, the removal of static objects is a well-known approach to enhance moving target indication (MTI) <cit.>. MTI is a mature technology derived from air-borne radar systems <cit.> that has been used to detect and track moving targets while filtering out clutter - the unwanted echoes from the background and other stationary objects. 
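As background for step a) of the pipeline summarized in the abstract, range and Doppler are conventionally obtained from FFTs over fast time (samples within a chirp) and slow time (chirps within a frame). The following is a generic range-Doppler map sketch for a single receive channel, included only as a hedged illustration of standard FMCW pre-processing rather than the specific 4D point-cloud generation developed in this paper; the windowing choice is an assumption.

```python
import numpy as np

def range_doppler_map(adc_cube, window=True):
    """adc_cube: complex ADC samples of shape (num_chirps, num_samples)
    for one RX channel and one frame. Returns a (num_chirps, num_samples)
    magnitude map with Doppler along axis 0 and range along axis 1."""
    x = adc_cube.astype(np.complex64)
    if window:
        x = x * np.hanning(x.shape[1])[None, :]   # range (fast-time) window
        x = x * np.hanning(x.shape[0])[:, None]   # Doppler (slow-time) window
    rng = np.fft.fft(x, axis=1)                                # range FFT
    dop = np.fft.fftshift(np.fft.fft(rng, axis=0), axes=0)     # Doppler FFT
    return np.abs(dop)
```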
To improve MTI in air-to-air and air-to-ground surveillance scenarios in the presence of clutter and other interference, space-time adaptive processing (STAP) is a long-established signal processing technique <cit.> that utilizes 2-dimensional joint adaptive filtering in the spatial and temporal domains to maximize the signal-to-interference ratio (SIR) <cit.>. Optimal filters for STAP-based MTI generally require prior knowledge of clutter statistics of the cell under test <cit.>. Adaptive methods such as sample matrix inversion (SMI) iteratively estimate the true interference covariance matrix based on received sample inputs <cit.>. It has been shown that SMI requires twice the degree of freedom (DOF) training samples to ensure adequate SIR and requires training samples containing only clutter that share the same statistics as the clutter in cell-under-test (i.e., implicitly assume homogeneous environments), and are independent and identically distributed (IID) <cit.>. To address the challenges of acquiring training data volume and computation complexity in computing the clutter covariance matrix for real-time operations, various pre and post-Doppler STAP methods have been developed <cit.>. These explore the trade-off between sub-optimal filtering with lower complexity and the resulting degradation of clutter suppression performance. §.§ Prior Art: STAP Approaches The displaced phase center antenna (DPCA) is a deterministic STAP method <cit.> suited for synthetic aperture radar (SAR) scenarios, that requires specific conditions to hold. It relies on identical channels between TX locations and target, with antenna phase centers displaced precisely along the flight axis, and a pulse repetition frequency that matches the platform velocity <cit.>. By utilizing a shifted array aperture to compensate for platform motion, DPCA removes clutter through the subtraction of returns corresponding to consecutive pulses. However, meeting these requirements in practice can be challenging due to channel or phase errors. In addition to DPCA, there are other STAP algorithms that aim to reduce computation by reducing the total DOF (product of spatial and temporal DOF). Among them, post-Doppler methods efficiently reduce the temporal DOF by performing Doppler processing before applying STAP <cit.>. One such method is the adjacent-bin post-Doppler STAP <cit.>, which adaptively processes the spatial returns from adjacent Doppler bins. This approach divides the N_cN_h-DOF space-time filtering problem into N_c separate N_hL-DOF space-time filtering problems, where N_c, N_h, L are dimensions of time, space, and the number of adjacent Doppler bins (L ≪ N_c), respectively [We assume Doppler processing/beamforming does not change the size of the time/space dimension in the DOF analysis.]. Another technique is joint domain localized STAP <cit.>, which incorporates additional beamforming in the spatial domain and applies STAP on adjacent Doppler and angle bins. This method divides the N_c N_h-DOF space-time filtering problem into N_cN_h separate HL-DOF space-time filtering problems, where H is the number of adjacent angle bins selected (H ≪ N_h), and we assume the beamforming does not change the spatial dimension. A systematic discussion of various STAP methods in terms of their processing procedures, assumptions, resulting DOF, and pros & cons are presented in Table. <ref>. 
The STAP methods have also found widespread application in ground-penetrating and through-the-wall radars, where background removal is commonly referred to as the process of removing ground-reflected clutter and walls respectively to reveal masked targets <cit.>. Various techniques have been employed, including coherent background subtraction, mean subtraction <cit.>, frame differencing <cit.>, and component selection using singular-value-decomposition (SVD) <cit.> and principal component analysis (PCA). Coherent background subtraction assumes knowledge of the wall characteristics and requires access to a reference scene, which can be challenging to obtain in automotive applications. On the other hand, frame differencing and mean subtraction take advantage of the clutter's time and angle invariance. These methods calculate the clutter signal by averaging over time and aspect angle to mitigate its effects. Recent advancements in deep learning (DL) have also contributed to the field, providing new ideas for radar image processing in automotive radars <cit.>, as well as clutter suppression in airborne radars <cit.>. In <cit.>, a novel DL framework is proposed to suppress clutter in the angle-Doppler spectrum, addressing the challenges of insufficient training samples and low detection probability in non-homogeneous clutter environments. §.§ Issues with Existing Methods The above `traditional' methods are not readily applicable to vehicular radars for several reasons. First, unlike the air-borne scenario, the automotive environment is non-homogeneous, meaning that the type of objects or background can vary across the field of view. This poses a challenge for STAP as it may not have enough qualified training samples that are IID (i.e., twice the system's DOF from adjacent range bins), resulting in significant performance loss. Second, the motion of the source vehicle leads to rapidly changing clutter statistics for both the ground and stationary targets (trees, buildings, etc.). Therefore, the clutter removal algorithms designed for ground-penetrating radar or through-the-wall radar are not directly applicable to vehicular radar. Third, vehicular radars require real-time processing for environment perception, implying that algorithms with heavy computational requirements, such as STAP, cannot be directly deployed in-situ on vehicles and will need modifications. §.§ Contributions Recent advancements in 77GHz FMCW radars have demonstrated highly accurate object detection and localization capabilities, regardless of environmental conditions <cit.>. In particular, these radars excel in fine-resolution Doppler velocity measurement <cit.>, presenting an opportunity for background removal in automotive scenarios. The Doppler velocities of static objects are determined by their azimuth and elevation angles as well as the radar's instantaneous velocity and direction of motion <cit.>. This allows for the localization of background objects by identifying their “specific Doppler velocity profile" and removal via filtering in the azimuth-elevation-Doppler domain. This approach offers several advantages over temporal and spatial domain filtering methods when dealing with the challenges posed by automotive scenarios. 
It does not rely on any training data and is applicable to both moving and stationary radar; finally (and perhaps critically) it operates efficiently without the need for heavy STAP-like covariance matrix computations. While our proposed method also applies Doppler processing and beamforming prior to background removal, there is a fundamental difference vis-a-vis classic STAP methods. STAP methods estimate the clutter covariance matrix from space-time samples to obtain the filter weights that maximize the SIR for specific Doppler-angle pair. The Doppler processing and beamforming steps in post-Doppler STAPs concentrate the clutter in a reduced number of Doppler/angle bins, effectively reducing the DOF for clutter covariance estimation. In contrast, our proposed method directly suppresses background clutter by identifying the Doppler frequency corresponding to each azimuth-elevation angle and applying notch filtering to remove the clutter components from the input. The fine Doppler velocity resolution of vehicular radar gives more accurate Doppler estimation than before, which makes it possible to locate and notch the clutters from the Doppler domain without applying STAP. Thus, the proposed method avoids the computation of the clutter covariance matrix, allowing for faster and more efficient processing. In this paper, the presented background removal algorithm for FMCW-based MIMO radar consists of a radar signal preprocessing step followed by 3D ego-motion estimation, and notch filter-based background removal in the azimuth-elevation-Doppler domain. The contributions of our work can be summarized as follows: * We develop a model for the received signal of FMCW MIMO radar with time-division multiplexing specifically for point targets in automotive scenarios. Using this model, we propose a signal processing framework that extracts 4D point clouds from the raw radar signal. * We propose a 3D ego-motion estimation algorithm that takes radar point clouds as input and accurately estimates the radar ego velocity along the x, y, z axes. Our algorithm incorporates the consideration of measurement error and tackles the issue of Doppler ambiguity that can arise in massive MIMO systems, offering a robust solution. * We propose a background removal algorithm that identifies Doppler frequencies corresponding to the background via the “specific Doppler velocity profile” and removes the background via notch filtering in azimuth-elevation-Doppler domain. * To evaluate algorithm performance, we conduct tests using simulated data from the MATLAB Automated Driving toolbox. Additionally, we validate our approach through extensive experiments with real-world data collected from Texas Instrument (TI)'s cascaded-chip radar board under practical driving scenarios. The paper is organized as follows: Section II reviews the related works in the field. Section III presents the signal model for FMCW MIMO radar, laying the foundation for the proposed algorithm. Section IV provides a detailed explanation of the three main steps of the algorithm. In Section V, simulation results are presented. Section VI focuses on experimental results and analysis. Section VII concludes the paper with a summary of findings. § SIGNAL MODEL In this section, we model one frame of the FMCW radar return signal for a general point target in autonomous scenarios. We assume that each radar TX transmits a sequence of N_c chirps with duration T_c in a frame. 
With carrier frequency f_c and chirp slope S_w, amplitude A_T, and initial phase ϕ_0, the transmit signal of the FMCW radar during time t within a frame is given by s_T (t)=A_Tcos(2π(f_c t+1/2S_w t^2 ) + ϕ_0 ). For target at distance r from radar, the received reflected signal s_R(t) incurs round-trip delay τ=2r/c_0, i.e., s_R(t)= A_R/A_T s_T (t-τ), where c_0 is the speed of light and A_R is the received signal amplitude. The received signal is mixed with the transmit signal at the receiver to produce the difference intermediate frequency (IF) signal s_IF(t): s_IF(t) = A_T A_R/2{cos[2 π(S_wτ t+f_cτ-1/2 S_wτ^2)] } In the global coordinate system of Fig. <ref>, the vehicle-mounted radar moves from the origin at t=0 with ego velocity 𝐯⃗_c(t)=(v_x(t), v_y(t), v_z(t)) at time t. For frame-by-frame modeling, we assume that the velocities of the radar and any targets may be assumed constant over a frame duration (typically milliseconds). Therefore, we simplify the time-dependant variables 𝐯⃗_c(t) by 𝐯⃗_c etc. in the following analysis. Fig. <ref> shows a point target located at a range r, azimuth angle θ, and elevation angle φ (corresponding to Cartesian-coordinates location (x_0, y_0, z_0)=(rcosφsinθ, rcosφcosθ, rsinφ)), moving with velocity 𝐯⃗_a=(v_a, x, v_a, y, v_a, z) in one frame. The target exhibits a relative Doppler velocity (the velocity along the radial direction) 𝐯⃗_r with respective to the radar; the amplitude of 𝐯⃗_r (denoted by v_r) is obtained by projecting the inverse platform velocity -𝐯⃗_c and target velocity 𝐯⃗_a onto the radial direction as follows: v_r = ((v_a, y-v_y)cosθ+(v_a, x-v_x)sinθ)cosφ + (v_a, z-v_z)sinφ If 𝐯⃗_a=0 (target is stationary), the relative Doppler velocity amplitude is represented by: v_r = -(v_ycosθ+v_xsinθ)cosφ -v_zsinφ FMCW radar sends a sequence of chirps within a frame, and the round-trip delay at each chirp varies (slightly) due to relative motion. By decomposing the t into fast time t_f (time within a chirp) and slow time n (the chirp index, i.e. time between chirps), i.e., t=nT_c+t_f, we can represent the round-trip delay for the target at n-th chirp τ_n using its Doppler velocity v_r and the round-trip delay for the first chirp τ_0=2√(x_0^2+y_0^2+z_0^2)/c_0: τ_n = τ_0 - 2 v_r n T_c/c_0 Substituting Eq. (<ref>) into Eq. (<ref>) and then sampling Eq. (<ref>) in fast time t_f with frequency f_s, we can get the (post analog-to-digital conversion (ADC)) sampled IF signal with chirp index n and fast-time sample index m: s_IF(m,n)=A_T A_R/2exp(j 2 π(S_wτ_n m/f_s + f_cτ_n - 1/2 S_wτ_n^2)) Under the time division multiplexed (TDM)-MIMO setup depicted in Fig. <ref>, we consider a radar system consisting of N_T transmitters (TXs) and N_R receivers (RXs), where TXs and RXs are arranged vertically and horizontally, separated by identical separation h. The TDM-MIMO scheme ensures the orthogonality of the transmit signals by sequentially transmitting chirps from each TX and allows recovery of the individual transmitted signals at the receive array. The measurements at the physical receive array corresponding to each orthogonal TX waveform can then be stacked as a matrix to effectively create a `virtual' array <cit.>. For the setup in Fig. <ref>, the measurement at the (p,q)-th virtual array location is the signal transmitted from q-th TX and received by p-th RX, where p∈{1, 2, …, N_R} and q∈{1, 2, …, N_T}. 
Thus TDM-MIMO operation results in a 2D array of N_TN_R measurements, where the corresponding virtual MIMO antenna locations result from vector superposition of the TX and RX antenna locations <cit.>. From array theory <cit.>, for a far-field target with azimuth angle θ and elevation angle φ, the IF signal at the (p,q)-th virtual array location contains the additional phase terms induced by the target's angles: 2π p hcosθ / λ and 2π q hsinφ /λ. In the TDM-MIMO setup, the duration of transmitting one chirp by N_T TXs is extended to N_T T_c (which differs from the chirp duration T_c in the single TX case), resulting in a new round-trip delay τ^'_n for the target during the n-th chirp: τ_n^' = τ_0 - 2 v_r n N_T T_c/c_0 In summary, the IF signal for chirp n, fast-time sample m, and (p,q)-th virtual antenna in TDM-MIMO setup is given by: s_IF(m, n, p, q)=A_T A_R/2exp(j 2 π(S_wτ_n^'m/f_s + f_cτ_n^' - 1/2 S_wτ_n^'^2 + p hcosθ/λ + q hsinφ/λ)) § SYSTEM DESIGN: BACKGROUND REMOVAL The designed system for background removal in radar images is operated in a frame-by-frame manner (i.e., the data for each frame is processed separately), as shown in Fig. <ref>. It starts by processing the radar I-Q data s_IF(m, n, p, q) of one frame and extracting the point clouds with (r, v_r, θ, φ) values as shown in Fig. <ref>. The extracted point clouds are used as input for 3D radar ego-motion estimation to solve the ego velocity (v_x,v_y,v_z) for that frame. To account for the presence of both stationary and moving targets in the point clouds, we incorporate the random sample consensus (RANSAC) <cit.> into the ego velocity estimation to separate the point clouds of stationary targets. Moreover, the proposed 3D ego-motion model takes into consideration the Doppler ambiguity and the measurement errors of Dopplers and angles, enhancing the robustness and accuracy of the estimation. Once the radar ego velocity (v_x,v_y,v_z) is determined, a background removal algorithm that identifies Doppler frequencies corresponding to the background via the “specific Doppler velocity profile” and removes the background via notch filtering in the azimuth-elevation-Doppler domain is implemented on the radar images. In the following, we introduce the three main components of the system design: radar point cloud extraction, 3D radar ego-motion estimation, and static background removal. §.§ Radar Point Cloud Extraction The point cloud extraction process is accomplished using basic fast Fourier transform (FFT) processing and cell-averaging constant-false-alarm-rate (CA-CFAR) detection techniques. The detailed workflow is illustrated in Fig. <ref>, where the input to the workflow is the ADC I-Q data, and the output is a collection of detections represented by their (r, v_r, θ, φ) values that constitute `point clouds'. The signal processing architecture is based on the approach presented in our prior work <cit.>, with an additional step for elevation angle estimation. The first step estimates the range and Doppler velocity via two sequential FFTs - Range FFT and Doppler FFT - as shown<cit.>. The Range FFT is performed across the samples in the fast time domain to extract the round-time delay term S_wτ^'_n /f_s in Eq. (<ref>), and the Doppler FFT processes the slow time (chirps) to extract the Doppler-induced phase term -4π f_cv_rN_TT_c/c_0 in Eq. (<ref>) for each range bin. The resulting Range-Doppler (RD) map is then processed by the CA-CFAR algorithm <cit.> to detect peaks and obtain their ranges and Doppler velocities (r, v_r). 
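To make this preprocessing chain concrete, the following minimal sketch implements a range–Doppler map and a basic 2D cell-averaging CFAR detector on synthetic IF data. The array sizes, guard/training cell counts, and the injected targets are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def range_doppler_map(adc, n_range_fft=128, n_doppler_fft=256):
    """Range FFT over fast time (axis 0), Doppler FFT over chirps (axis 1)."""
    rng_spec = np.fft.fft(adc, n=n_range_fft, axis=0)                     # range bins
    rd = np.fft.fftshift(np.fft.fft(rng_spec, n=n_doppler_fft, axis=1), axes=1)
    return np.abs(rd) ** 2                                                # power spectrum

def ca_cfar_2d(power, guard=2, train=8, pfa=1e-2):
    """Cell-averaging CFAR: compare each cell with the mean of its training cells."""
    n_r, n_d = power.shape
    n_train = (2 * (guard + train) + 1) ** 2 - (2 * guard + 1) ** 2
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)                     # threshold factor
    detections = []
    for i in range(guard + train, n_r - guard - train):
        for j in range(guard + train, n_d - guard - train):
            window = power[i - guard - train:i + guard + train + 1,
                           j - guard - train:j + guard + train + 1]
            inner = power[i - guard:i + guard + 1, j - guard:j + guard + 1]
            noise = (window.sum() - inner.sum()) / n_train
            if power[i, j] > alpha * noise:
                detections.append((i, j))                                 # (range bin, Doppler bin)
    return detections

# Synthetic IF data: 128 fast-time samples x 255 chirps with two point targets in noise.
rng = np.random.default_rng(0)
adc = 0.1 * (rng.standard_normal((128, 255)) + 1j * rng.standard_normal((128, 255)))
m, n = np.meshgrid(np.arange(128), np.arange(255), indexing="ij")
adc += np.exp(2j * np.pi * (0.20 * m + 0.10 * n))                         # target 1
adc += np.exp(2j * np.pi * (0.45 * m - 0.05 * n))                         # target 2

points = ca_cfar_2d(range_doppler_map(adc))
print(f"{len(points)} CFAR detections")
```

In the full pipeline, each detected (range bin, Doppler bin) cell would then be passed to the AoA FFTs described next to attach azimuth and elevation angles.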
During the CA-CFAR detection process, each cell in the RD map is evaluated for the presence or absence of a target using a threshold, where the threshold is set according to the noise power estimate within the training cells. Thereafter, the azimuth and elevation angles (θ, φ) are estimated for each detection at (r, v_r) via an angle of arrival (AoA) estimator. While there are many approaches to AoA estimation, we adopt the FFT-based (conventional) AoA estimator. For the specific range-Doppler cell corresponding to (r, v_r), we use FFTs to process the measurements of the virtual array (shown in Fig. <ref>) along the horizontal or vertical direction to extract the phase term 2π hcosθ/λ or 2π hsinφ/λ in Eq. (<ref>), thus obtaining estimates for (θ, φ). For non-stationary targets, the motion-induced phase errors at each virtual antenna location should be compensated for different TXs before the AoA FFT <cit.>. In general, for a uniform linear transmit array with N_T TXs, the received phases at the virtual elements corresponding to the i-th TX are corrected via rotation of (i-1) Δϕ_v / N_T, where Δϕ_v = -4π f_cv_rN_TT_c/c_0 from prior Doppler estimation <cit.>. §.§ 3D Radar Ego-motion Estimation We assume that there are N_total detections (indexed by i) in the extracted point clouds, identified by (r_i, v_r,i, θ_i, φ_i), a sub-set of which belongs to the class of stationary targets; this subset is initially unknown. Representing the stationary subset size by N, from Eq. (<ref>), the relationship between the Doppler velocities and angles of N stationary targets (i=1,2,…, N) can be expressed as follows: [ v_r,1 ⋮ v_r,N ] = -[ sinθ_1cosφ_1 cosθ_1cosφ_1 sinφ_1 ⋮ ⋮ ⋮ sinθ_Ncosφ_N cosθ_Ncosφ_N sinφ_N ] [ v_x v_y v_z ] For more accurate modeling, we assume errors are associated with the azimuth angle measurement θ_i, elevation angle measurement φ_i, and Doppler velocity measurement v_r,i. Denoting the ground-truth values for the azimuth angle, elevation angle, and Doppler velocity by Θ_i, Φ_i, and V_r,i, the measurement errors given by Θ_i-θ_i, Φ_i-φ_i and V_r,i-v_r,i, respectively, are assumed to follow zero-mean Gaussian distributions <cit.> with no mutual coupling. We next build an orthogonal distance regression (ODR) solver for Eq. (<ref>) that estimates (v_x, v_y, v_z) as well as (Θ_i, Φ_i, V_i, i=1,…, N) based on the measurement error model above. For simplicity, we denote the vector of (v_x, v_y, v_z) by 𝐯_c, the set {Θ_i, i=1, …, N} by Θ, the set {Φ_i, i=1, …, N} by Φ, and the set {V_i, i=1, …, N} by 𝐕 in the following. The objective function and constraint for the ODR estimator are given by <cit.>: 𝐯_c,Θ,Φ, 𝐕arg min ∑_i=1^N η^2_1(Θ_i-θ_i)^2 + η^2_2(Φ_i-φ_i)^2 + (V_i-v_r, i)^2 s.t. V_i= - sinΘ_icosΦ_iv_x-cosΘ_icosΦ_iv_y - sinΦ_iv_z, ∀ i=1, …, N where η^2_1, η^2_2 are the relative weights for the first two terms (assuming the weight for the third term is 1). η^2_1, η^2_2 are determined by the ratios of the variances of the Doppler velocity measurement errors (σ^2_v), azimuth angle measurement errors (σ^2_θ), and elevation angle measurement errors (σ^2_φ), as follows: η^2_1 = σ^2_v/σ^2_θ, η^2_2 = σ^2_v/σ^2_φ. In practice, we determine the variance of measurement errors based on the theoretical resolution and substitute it into Eq. (<ref>) to calculate the weights η^2_1, η^2_2. The constraint in Eq. 
(<ref>) can be utilized to eliminate the term (V_i-v_r, i)^2 from the objective function (thus also eliminating the unknowns 𝐕) by substitution, thereby transforming the original constrained optimization problem in Eq. (<ref>) into an unconstrained minimization problem <cit.>, as follows: 𝐯_c, Θ,Φarg min ∑_i=1^N η^2_1 (Θ_i-θ_i)^2 + η^2_2 (Φ_i-φ_i)^2 + [sinΘ_icosΦ_iv_x +cosΘ_icosΦ_iv_y +sinΦ_iv_z+v_r, i]^2 The new optimization problem has unknowns 𝐯_c, Θ,Φ and aims to minimize the sum of squares of the orthogonal distances of measurements to the fitting hyperplane. To solve this, we propose an algorithm consisting of three steps, summarized in Algorithm <ref> and outlined below. Step 1. Obtain an initial solution for 𝐯_c, Θ,Φ: The unknowns Θ,Φ are initialized using the measurement values {θ_i, i=1, …, N}, {φ_i, i=1, …, N}, and the initial value of 𝐯_c is determined by solving Eq. (<ref>) via the RANSAC algorithm and least-square regression (LSR). Let 𝐏 = [ sinθ_1cosφ_1 cosθ_1cosφ_1 sinφ_1 ⋮ ⋮ ⋮ sinθ_Ncosφ_N cosθ_Ncosφ_N sinφ_N ] and 𝐐 = [ v_r,1 ⋮ v_r, N ], then the least-squares solution to Eq. (<ref>) is 𝐯_c = -(𝐏^T𝐏)^-1𝐏^T 𝐐. In automotive scenarios, the elevation angle φ_i is typically small for long-range targets and, consequently, the last column of 𝐏 has values close to zero. This can lead to a near-singular matrix 𝐏^T𝐏 and numerical difficulties in computing the inverse of 𝐏^T𝐏. To address this, we adopt the Moore-Penrose pseudo-inverse, denoted by (·)^†, to obtain a more reliable solution, i.e.: 𝐯_c = -(𝐏^T𝐏)^†𝐏^T 𝐐 where the unique pseudoinverse exists for any matrix (even a singular one) <cit.>. An efficient way to compute the pseudoinverse is by using SVD <cit.>. If 𝐏^T𝐏=𝐖Σ𝐔^T is the SVD of 𝐏^T𝐏, then (𝐏^T𝐏)^†=𝐔Σ^†𝐖^T. For a rectangular diagonal matrix such as Σ, its pseudoinverse Σ^† is obtained by taking the reciprocal of each non-zero element on the diagonal, leaving the zeros in place, and transposing the matrix <cit.>. A key underlying issue when solving for (v_x, v_y, v_z) in Eq. (<ref>) is identifying or separating the subset of stationary detections from the set of all detections. For this, we use the RANSAC algorithm <cit.> on the input point clouds to find the inliers that achieve the best consensus for estimates obtained from Eq. (<ref>). Following the workflow in Fig. <ref>, RANSAC randomly selects a subset of points from the dataset and fits the model, then identifies all data points in the full set that are consistent with the model within a certain threshold - these are inliers, while the remaining points are outliers <cit.>. The above two steps are iteratively repeated for a prescribed number of trials. After all trials, the model with the highest number of inliers is selected and the corresponding inliers are output. Increasing the number of trials improves the robustness and accuracy of the identified inliers because a more accurate model estimation can be explored through random sampling. In our case, the obtained inliers represent the stationary targets, while the outliers correspond to the moving targets. The identified inliers are then plugged into Eq. (<ref>) to obtain 𝐯_c as the initial value of 𝐯_c. Step 2. Using the initial solution, we employ the modified trust-region Levenberg-Marquardt-type (MTRLM) algorithm (from the ODRPACK library <cit.>) to solve Eq. (<ref>) and obtain an improved estimation 𝐯_c^*, Θ^*,Φ^*. 
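A compact sketch of Steps 1–2 is given below, using NumPy's pseudo-inverse for the least-squares initialization, a simple RANSAC loop, and SciPy's ODRPACK wrapper (scipy.odr) as a stand-in for the MTRLM-based refinement; the synthetic detections, noise levels, and thresholds are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np
from scipy import odr

def lsr_ego_velocity(theta, phi, v_r):
    """Pseudo-inverse least-squares solution of the static-target model Q = -P v_c."""
    P = np.column_stack([np.sin(theta) * np.cos(phi),
                         np.cos(theta) * np.cos(phi),
                         np.sin(phi)])
    return np.linalg.pinv(P.T @ P) @ P.T @ (-v_r)

def ransac_ego_velocity(theta, phi, v_r, n_sample=4, thresh=0.1, n_trials=2000, seed=0):
    """Fit ego velocity on random minimal subsets; keep the model with most inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(v_r), dtype=bool)
    for _ in range(n_trials):
        idx = rng.choice(len(v_r), n_sample, replace=False)
        v_c = lsr_ego_velocity(theta[idx], phi[idx], v_r[idx])
        pred = -(np.sin(theta) * np.cos(phi) * v_c[0]
                 + np.cos(theta) * np.cos(phi) * v_c[1]
                 + np.sin(phi) * v_c[2])
        inliers = np.abs(pred - v_r) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return lsr_ego_velocity(theta[best], phi[best], v_r[best]), best

def odr_refine(theta, phi, v_r, v_init, sigma_angle=0.25, sigma_v=0.085):
    """Errors-in-variables refinement of the Doppler/angle model with scipy.odr."""
    def model(beta, x):
        th, ph = x
        return -(np.sin(th) * np.cos(ph) * beta[0]
                 + np.cos(th) * np.cos(ph) * beta[1]
                 + np.sin(ph) * beta[2])
    data = odr.RealData(np.vstack([theta, phi]), v_r, sx=sigma_angle, sy=sigma_v)
    return odr.ODR(data, odr.Model(model), beta0=v_init).run().beta

# Synthetic scene: ego velocity (vx, vy, vz) = (0.3, 8.0, -0.2) m/s, 40 static detections + 10 movers.
rng = np.random.default_rng(1)
v_true = np.array([0.3, 8.0, -0.2])
theta = rng.uniform(-np.pi / 3, np.pi / 3, 50)
phi = rng.uniform(-0.1, 0.1, 50)
v_r = -(np.sin(theta) * np.cos(phi) * v_true[0]
        + np.cos(theta) * np.cos(phi) * v_true[1]
        + np.sin(phi) * v_true[2]) + 0.02 * rng.standard_normal(50)
v_r[40:] += rng.uniform(2.0, 6.0, 10)          # moving targets violate the static model

v0, inliers = ransac_ego_velocity(theta, phi, v_r)
v_est = odr_refine(theta[inliers], phi[inliers], v_r[inliers], v0)
print("initial LSR:", v0, " refined ODR:", v_est)
```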
The MTRLM algorithm combines the Levenberg-Marquardt method with the trust-region approach to improve the efficiency and robustness of solving nonlinear least squares problems <cit.>. After initialization, the MTRLM algorithm iterates the following steps until convergence: a) compute the Jacobian and Hessian matrices of the parameters, b) determine the step size using the trust-region concept and the Jacobian and Hessian matrices, c) update the parameters of the model using the step size determined in the previous step, and d) check the convergence criteria <cit.>. By following this iterative process, the MTRLM algorithm refines the initial solution and determines the best-fit solution (𝐯_c^*, Θ^*,Φ^*) to the problem in Eq. (<ref>). Step 3. After the MTRLM algorithm converges, output the optimized radar ego velocity 𝐯_c^*. §.§ Dealing with Doppler Ambiguity: Heuristics From <cit.>, with N_T TXs and a chirp duration of T_c, the maximum unambiguously measurable Doppler velocity of the radar is given by: v_max = λ/4N_T T_c where N_T T_c is the pulse repetition interval in TDM-MIMO. The current 4D imaging radar trend <cit.> is towards increasing the density of MIMO arrays. Hence, as the number of TXs N_T increases, it leads to a proportional reduction in the maximum unambiguously measurable velocity v_max <cit.>, causing Doppler ambiguity whenever the true Doppler exceeds v_max. In the majority of urban street driving scenarios with a maximum speed limit of 25mph (i.e., 11.2m/s) <cit.> for the ego vehicle (with radar mounted), as long as the radar pulse repetition interval is smaller than 87us, Eq. (<ref>) suggests that a desired v_max (greater than 11.2m/s) is achieved, resulting in unambiguous measurements of stationary targets' Doppler velocities. This implies that by an appropriate choice of pulse repetition interval, the Doppler ambiguity issue can be managed and the 3D ego-motion estimation method in Section <ref> applied. Nonetheless, a generalization of the proposed 3D ego-motion estimation method to include Doppler ambiguity is necessary, and we propose a heuristic solution next. Since ambiguity implies that true Doppler velocities beyond the range [-v_max, v_max] will alias to an observed lower Doppler frequency within the range, we propose adding 2k v_max, where k∈ℤ is an integer, under the assumption that all Doppler measurements from static sources are aliased with the same k value. Hence we introduce an additive alias term 2kv_max1 (1 is a column vector of ones with size N) to the right-hand side of Eq. (<ref>) as follows: [ v_r,1 ⋮ v_r,N ] = -[ sinθ_1cosφ_1 cosθ_1cosφ_1 sinφ_1 ⋮ ⋮ ⋮ sinθ_Ncosφ_N cosθ_Ncosφ_N sinφ_N ] [ v_x v_y v_z ] + 2kv_max1 While k is unknown, bounds on the possible k values are readily determined, depending on v_max and the maximum vehicle driving speed. For example, if v_max=5m/s and the maximum vehicle driving speed is 11.2m/s (the speed limit for urban street driving), then k ∈ K = {-1, 0, 1}. This is because when k is -1 or 1, the maximum/minimum Doppler velocity of any stationary target can be folded into the range of [-v_max, v_max] by adding 2kv_max [k=0 represents the no-ambiguity case when the actual moving speed of the radar lies within the bound.]. We propose a heuristic approach to solve for the best k as well as the radar ego velocity (v_x, v_y, v_z), or 𝐯_c, from Eq. (<ref>). That is, for each k ∈ K, we substitute it into Eq. (<ref>) and then apply the 3D ego-motion estimation algorithm proposed in Section <ref> to solve (v_x, v_y, v_z). 
We iterate all k and determine the optimal k that yields the maximum number of inliers (from the RANSAC output in step 1 of Algorithm <ref>). In other words, we choose the k value that maximizes the number of data points that satisfies Eq. (<ref>). Consequently, the best-fit radar ego velocity 𝐯_c^* would be the one corresponding to the optimal k. §.§ Static Background Removal from Radar Image in Azimuth-Elevation-Doppler Domain After obtaining the estimated radar ego-motion velocity 𝐯_c^*, we leverage this information to remove the reflection of the static background from the radar images. Radar images are generated from FFT-based spectrograms, using the techniques described in <cit.>. Initially, we assume no Doppler ambiguity and background removal is implemented via an inverse approach to Eq. (<ref>), wherein we compute the expected Doppler velocity v_r for each feasible azimuth-elevation angle pair (θ, φ) that satisfies the relationship in Eq. (<ref>). We then utilize a 3D notch filter to eliminate the Doppler component from the radar image in the Doppler domain. Notch filtering refers to removing or zeroing out specific components in the Doppler spectrum by element-wise multiplication with the frequency response of the filter. The frequency response of 3D notch filter is obtained by combining the frequency responses of three one-dimensional (1D) filters. First, we design three separate second-order infinite impulse response (IIR) notch filters <cit.> for rejecting the frequency corresponding to azimuth angle θ, elevation angle φ, and Doppler v_r. Second, the frequency response of each 1D filter is expanded to 3D by replicating itself along the other two dimensions. Third, we take the point-wise minimum among the three expanded 3D filters to find the desired frequency response for the 3D notch filter. This methodology offers a simplified yet effective means of designing the 3D notch filter for background removal in the radar image. To remove the static background, we perform element-wise multiplication between the radar image S_input and the frequency response of the 3D notch filter H(θ_i, φ_i, v_r_i) constructed above for each angle pair (θ_i, φ_i) and corresponding Doppler velocity v_r_i. Assuming there are a total of M angle pairs covering all azimuth and elevation angles, the background removed radar image S_output can be calculated as follows: S_output = S_input⊗∏_i=1^M|H(θ_i,φ_i, v_r,i)| where ⊗ represents the element-wise multiplication. Remark 1: The computation required for the background removal step can be significantly reduced by utilizing a fixed 3D notch filter that is easily adapted to different notch frequencies (θ_i,φ_i, v_r,i) by simple frequency translation. This eliminates the need to design a new filter for each specific notch frequency. Further, the frequency response of the notch filter can be limited to a narrow region around the desired notch frequency (θ_i,φ_i, v_r,i). By focusing only on a limited area, the number of operations required for the element-wise multiplication between the filter and the radar image is significantly reduced. Remark 2: If the image is desired only for a specific elevation angle (i.e. fixed φ), it is possible to greatly reduce the number of angle pairs that need to be considered during the iteration process. Consequently, instead of employing a 3D notch filter, a more efficient 2D notch filter for background removal may be employed. 
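Before turning to the simulations, the filtering step can be illustrated with a minimal sketch restricted to the 2D azimuth–Doppler case of Remark 2. The grid sizes, the Gaussian-shaped notch used here in place of the second-order IIR notch, and the toy scene are assumptions for illustration only:

```python
import numpy as np

def static_doppler_profile(az, el, v_ego):
    """Expected Doppler of a stationary scatterer at (azimuth, elevation) for ego velocity (vx, vy, vz)."""
    vx, vy, vz = v_ego
    return -(vy * np.cos(az) + vx * np.sin(az)) * np.cos(el) - vz * np.sin(el)

def remove_static_background(image, az_axis, dop_axis, v_ego, el=0.0, width=0.3):
    """Null a narrow Doppler band around the static profile for every azimuth bin.

    image: |spectrum| over (azimuth, Doppler); width: notch half-width in m/s.
    """
    out = image.copy()
    for i, az in enumerate(az_axis):
        v_static = static_doppler_profile(az, el, v_ego)
        notch = 1.0 - np.exp(-0.5 * ((dop_axis - v_static) / width) ** 2)
        out[i, :] *= notch
    return out

# Toy azimuth-Doppler image: static clutter along the U-shaped profile plus one moving target.
az_axis = np.linspace(-np.pi / 2, np.pi / 2, 128)
dop_axis = np.linspace(-16.5, 16.5, 256)
v_ego = (0.0, 8.0, 0.0)                                   # radar moving forward at 8 m/s
image = np.zeros((128, 256))
for i, az in enumerate(az_axis):
    j = np.argmin(np.abs(dop_axis - static_doppler_profile(az, 0.0, v_ego)))
    image[i, j] = 1.0                                      # clutter ridge
image[64, 200] = 1.0                                       # moving target at roughly +9 m/s

filtered = remove_static_background(image, az_axis, dop_axis, v_ego)
print("total energy before/after:", image.sum(), round(float(filtered.sum()), 2))
```

The clutter ridge is suppressed while the moving target, whose Doppler does not match the static profile, passes through essentially unchanged, mirroring the behavior described for the 3D notch filter.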
§ SIMULATIONS In this section, we performed simulations using the MATLAB Automated Driving Toolbox to simulate a scenario involving a moving vehicle equipped with an automotive radar, in a scene comprising both background (static) and moving objects. We applied both the 3D ego-motion estimation algorithm developed here and the background removal algorithm to the simulated raw radar data in order to explore system performance. §.§ Simulated Scenario and Configuration §.§.§ Scenario We simulated a typical driving scenario involving an ego-moving vehicle along a linear road, three parked vehicles on each side of the road, and one moving vehicle ahead of the ego-moving vehicle, as illustrated in Fig. <ref>(a). The ego vehicle has a forward velocity of (8m/s, 0, -0.5m/s) and an acceleration of (2m/s^2, 1m/s^2, 0) and is equipped with a front-view FMCW radar. The other moving car has a constant velocity of (5m/s, 0, 0). The parked cars were represented by a collection of 3D reflection points within the radar's field of view, as indicated by the red markers in Fig. <ref>(b). Using the reflection points and the signal model described by Eq. (<ref>), we generated post-demodulated ADC samples for each frame at the receiver, considering the specific radar configuration described next. §.§.§ Configuration The radar configurations used for generating the ADC samples are as follows: f_c=77GHz, S_w=21.0017MHz/us, A_T=A_R=1, ϕ_0=0, f_s=4Msps, the number of samples per chirp is 128, and the number of chirps per frame sent by each TX is 255. We assume the formed virtual array is planar with 8 × 8 elements and antenna distance h=λ/2, where λ is the wavelength c_0/f_c. We consider the pulse repetition interval per TX to be 60us, corresponding to a maximum unambiguous Doppler velocity of 16.5m/s, which creates no Doppler ambiguity for most city/town street driving cases and is referred to as the `w/o Doppler ambiguity' case. For evaluation purposes, we simulated a total of 40 frames with a frame rate of 20fps. §.§ 3D Ego-motion Estimation §.§.§ Baselines The results from the ODR-based method in Sec. <ref> are referred to as `ODR' in the subsequent evaluation. For an assessment of the estimation accuracy of the proposed method, we used the output of RANSAC and LSR 𝐯_c (step 1 in Algorithm <ref>) as a baseline, referred to as `LSR' in the following. It is expected that the ODR performs better than the LSR since, in Algorithm <ref>, the LSR estimation serves as the initialization for the ODR optimization. §.§.§ Implementation We applied the ODR and LSR-based 3D radar odometry algorithms to the simulated radar ADC samples in order to estimate the ego-motion velocity for each frame, using the following parameter values: 128 range FFT points, 256 Doppler FFT points, 128 FFT points for azimuth angle estimation, 128 FFT points for elevation angle estimation, CFAR false alarm probability of 10^-2, and measurement error standard deviations of σ_θ=0.25, σ_ϕ=0.25, and σ_v=0.085. The standard deviations σ_θ, σ_ϕ, and σ_v were determined based on the theoretical resolution of azimuth angle, elevation angle, and Doppler velocity, respectively <cit.>. The 3D ego-motion estimation algorithm utilized the RANSAC algorithm with a sample size of 4, a maximum distance threshold of 0.1m/s for determining inliers, and a maximum number of trials of 2000. All methods were implemented with the same parameter values (where applicable) to ensure fair comparison. 
To evaluate the accuracy of the ego-motion velocity 𝐯_c estimation, we computed the root mean square error (RMSE) relative to known ground truth values that were set for each frame. The results are presented in Table <ref>, where the 3 dimensions of 𝐯_c - v_x, v_y, v_z - are listed separately. §.§.§ Results and Analysis For the results presented in Table <ref>, several observations can be made. First, the ODR predictions for v_x and v_y are accurate with low RMSE (0.0619m/s and 0.0389m/s, respectively), while the prediction for v_z has a larger error of 0.3245m/s. A similar trend is observed for the LSR algorithm. This discrepancy is mainly due to the fact that the column sinφ_i in matrix 𝐏 of Eq. (<ref>) is close to zero, as most of the elevation angles φ_i are small and the estimation of v_z is more sensitive to errors. Secondly, compared to LSR, ODR achieves significant improvements in the v_y and v_z directions. This is attributed to the introduction of the measurement-error model in the ODR formulation and the additional refinement process in step 2 of Algorithm <ref>. §.§ 3D Ego-motion Estimation with Doppler Ambiguity While most operational cases fall into the `w/o Doppler ambiguity' regime, we consider a scenario where the pulse interval for a TX is increased, leading to Doppler ambiguity. We consider a longer pulse repetition interval of 180us, which corresponds to a maximum unambiguous Doppler velocity of 5.5m/s. We refer to this scenario as `with Doppler ambiguity' in the subsequent evaluation. For regular urban street driving with a speed limit of 11.2m/s, we take the set of possible k values to be K={-1, 0, 1}. Since the heuristic algorithm proposed in Section <ref> only adds one step for estimating k to the algorithm in Section <ref>, we still use `ODR' to refer to it and adopt the same baseline (i.e., `LSR') and metric as in Section <ref> for comparison and evaluation. To highlight its importance, we also consider the serious impact of not tackling the Doppler ambiguity (running the algorithm assuming k=0) and refer to this baseline as `LSR (k=0)' within the description in Section <ref>. From the results shown in Table <ref>, several observations can be made. First, among all three methods, ODR continues to achieve the best ego-motion estimation performance in all three dimensions: v_x, v_y, and v_z. This outcome confirms the effectiveness of the ODR approach in handling the errors-in-variables model. Secondly, the LSR (k=0) method performs poorly compared to LSR and ODR as it fails to account for the Doppler velocity measurements that lie outside [-v_max, v_max], resulting in a poor fit to the data in Eq. (<ref>). To illustrate this, we present the regression fitting results of the three methods in Fig. <ref>. The input radar detections are shown as blue circles, the regression plane for ODR is represented in orange, the regression plane for LSR is shown in blue, and the regression plane for LSR (k=0) is depicted in yellow. From Fig. <ref>, it is evident that the LSR (k=0) method only fits a portion of the radar input, and the overall regression plane deviates significantly from the correct regression plane (e.g., the one obtained by ODR) when Doppler ambiguity exists. §.§ Background Removal Algorithm §.§.§ Implementation We applied the proposed static background removal algorithm to the simulated data under the `w/o Doppler ambiguity' scenario. The required radar ego velocity was estimated through the ODR method in Section <ref>. 
The results from post-background filtering are shown via two 2-D images, Doppler-azimuth and range-azimuth angle, to avoid plotting sophisticated 3D images. As discussed in Section <ref>, these images are obtained by performing FFT operations on the raw radar ADC data <cit.>. §.§.§ Results and Analysis The background removal results for the first frame of the simulated data are illustrated in Fig. <ref> and Fig. <ref>. Specifically, we focused on the 0° elevation plane for both the Doppler-azimuth and the range-azimuth angle image. In Fig. <ref>(a), which represents the Doppler-azimuth angle image before background removal, we observe that the main Doppler components of the static background are limited to the range [-10m/s, -7m/s]. We also observe that the relationship between the Doppler velocities of the static background and its azimuth angles follows a U-shaped pattern. Mathematically, this can be explained by Eq. (<ref>) when φ=0. Substituting φ=0 into Eq. (<ref>) gives v_r = -(v_ycosθ + v_xsinθ), which traces a U-shaped curve when plotted for θ∈ [-π/2, π/2]. After applying the background removal algorithm, the resulting Doppler-azimuth angle image is shown in Fig. <ref>(b). In this image, the mainlobe of the Doppler signal for each azimuth angle has been filtered out, leaving only a few remaining sidelobe components. Fig. <ref>(a) displays the original range-azimuth angle image, where the reflections from the moving car are mixed with clutter from stationary backgrounds (static cars). Separating these components through spatial filtering alone can be challenging. However, by utilizing the proposed background filtering method in the azimuth-elevation-Doppler domain, we obtain a significantly cleaner spatial image, as depicted in Fig. <ref>(b). This processed image effectively distinguishes the extended moving car and removes a significant portion of the clutter present in the original image. To quantitatively assess the performance of the background removal algorithm, we analyzed the range profiles before and after the background removal process. In Fig. <ref>, the range profiles are plotted along with the average amplitudes (in dB) of the target (the moving car) and the background (the static cars). These amplitudes are labelled as the `signal' and `interference', respectively, and the ratio between them (i.e., the SIR) is computed as the evaluation metric. Before background removal, the signal amplitude was measured at -3dB, while the interference amplitude was -8dB, leading to an SIR of -3-(-8)=5dB. After the background removal, the signal amplitude increased to 0dB, while the interference amplitude significantly decreased to -37dB. This results in a significantly improved SIR of 0-(-37)=37dB. The improvement in the SIR, from 5dB to 37dB, demonstrates the remarkable clutter suppression capability of the proposed algorithm. § EXPERIMENTS Besides simulations, we also gathered extensive measurement data using an off-the-shelf automotive radar testbed that allows us to assess the performance of the proposed static background removal algorithm in a real-world setting. §.§ Experimental Setup and Configuration §.§.§ Setup A testbed was constructed using a TIDEP-01012 77 GHz mmWave radar <cit.> and two cameras, as depicted in Fig. <ref>(a). The Texas Instruments radar evaluation module employs a cascade of four radar chips, enabling a greater MIMO dimension <cit.>. 
For data capture, the testbed was positioned at the front of the vehicle, enabling simultaneous collection of camera images and corresponding radar raw data (post-demodulated I-Q samples) <cit.>. The front view of the radar setup is illustrated in Fig. <ref>(b), featuring a 2D arrangement with 12 TXs and 16 RXs. By employing TDM MIMO techniques <cit.>, the orthogonal signals from different TXs result in the formation of a 2D virtual receiver array, as shown in Fig. <ref>(c), obtained through the spatial convolution of all physical TX and RX pairs <cit.>. The virtual array exhibits a sparse configuration (with 4 non-uniformly spaced elements spanning a 3λ aperture) in the vertical direction, and a large uniform array (consisting of 86 uniformly spaced elements spanning a 42.5λ aperture) in the horizontal direction. §.§.§ Configuration The specific configuration of the cascaded-chip radar used in the data capture is as follows: f_c=77GHz, S_w=45MHz/us, f_s=15Msps, N_T=12, N_R=16, T_c=20us, sweep bandwidth 384MHz, the number of chirps per TX per frame is 128, the number of samples per chirp is 128, and the frame rate is 30fps. Based on Eq. (<ref>), the maximum unambiguous Doppler velocity is 4m/s. §.§ Background Removal Algorithm §.§.§ Scenario The experimental data was collected in a practical scenario with the ego car in motion in a complex environment, as shown in Fig. <ref>. The scene consisted of various stationary objects such as fences, trees, buildings, and parked cars on the roadsides. Additionally, there was a bus moving towards the ego vehicle. The testing phase involved the analysis of 15 frames of recorded radar I-Q samples, capturing the dynamics of the scene and the interactions between the ego car and its surroundings. §.§.§ Implementation We implemented the proposed 3D radar odometry algorithm and static background removal algorithm using the following parameter settings: Range, Doppler, Azimuth Angle, and Elevation Angle FFTs with 128 points each, CFAR false alarm probability of 10^-2, ambiguity set K={-1, 0, 1}, and measurement error standard deviations σ_θ=0.023, σ_ϕ=0.5, σ_v=0.0634. For the 3D ego-motion estimation, we utilized the RANSAC algorithm with a sample size of 4, a maximum distance threshold of 0.2m/s for determining inliers, and a maximum number of trials set to 2000. The implementation was carried out using MATLAB R2019b on a computer equipped with an Intel i7-9750H CPU. Due to the limited availability of accurate radar ego-motion ground truth in real-world experiments, we were unable to perform a comprehensive evaluation of the 3D ego estimation results. §.§.§ Results and Analysis We present the qualitative results of the static background removal algorithm in Fig. <ref> and Fig. <ref>. Our focus is on the 0° elevation plane for the Doppler-azimuth angle image and the range-azimuth angle image. Fig. <ref> shows the Doppler-azimuth angle image before and after the background removal. In Fig. <ref>(a), we observe that the Doppler components of the stationary background reflections span the range [1.5m/s, 3m/s], forming a U-shaped relationship with the azimuth angle. Considering the vehicle's average speed (which exceeds 4m/s), the actual Doppler velocity for the stationary background should be a negative value calculated from the azimuth angle and radar ego velocity. The measured Doppler velocity within the range of 1.5m/s to 3m/s therefore indicates the presence of Doppler ambiguity (i.e., k ≠ 0). 
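This aliasing can be checked directly from the configuration above; the short sketch below reproduces the arithmetic (the 1.5–3m/s readings are taken from the figure, and the interpretation of the unfolded values assumes a forward-moving radar):

```python
import numpy as np

c0, f_c = 3e8, 77e9
lam = c0 / f_c                       # ~3.9 mm wavelength
N_T, T_c = 12, 20e-6                 # 12 TXs, 20 us chirp repetition per TX
v_max = lam / (4 * N_T * T_c)        # ~4.06 m/s maximum unambiguous Doppler velocity

# Observed (aliased) static-background Dopplers and the candidate alias hypotheses.
v_measured = np.array([1.5, 2.0, 2.5, 3.0])      # m/s, read from the Doppler-azimuth image
for k in (-1, 0, 1):
    v_unfolded = v_measured - 2 * k * v_max      # invert the +2*k*v_max alias term
    print(f"k={k:+d}: {np.round(v_unfolded, 2)}")
# Only k=+1 yields negative Dopplers (about -6.6 to -5.1 m/s), consistent with a
# forward-moving radar, matching the value selected by the RANSAC inlier count.
```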
By utilizing our proposed 3D ego-motion estimation algorithm, the best k value is estimated as 1, confirming our earlier assumption. In Fig. <ref>(b), we observe that after the background removal, the strong Doppler components for each azimuth angle are mostly filtered out, while the mainlobe from the moving bus and some sidelobe components from the background are still present. Furthermore, we assess the performance of the background removal algorithm on the range-azimuth angle image, as depicted in Fig. <ref>. In Fig. <ref>(a), we observe that the background components are efficiently removed, with the post-removal clutter amplitude below -10dB, while the signal reflections from moving objects exhibit minimal change. This demonstrates the effectiveness of the static background removal algorithm in suppressing clutter and preserving the desired moving targets. § DISCUSSION §.§ Sensitivity Analysis for Ego-motion Estimation The performance of proposed end-to-end background removal algorithm relies on the accuracy of 3D ego-estimation results. It is important to note that the RANSAC algorithm is not guaranteed to find the correct inliers in all cases. It is possible for the 3D ego-estimation algorithm to converge to a model that contains false inliers (i.e., data points that do not belong to the static background) or to miss some of the static elements, especially in challenging scenarios with a few static objects and a high density of moving objects. In the former cases, presence of false inliers in the subsequent processing steps will lead to inaccurate background removal. To showcase the sensitivity of the ego-motion estimation algorithm in such instances, we extend the simulation to different scenarios. The original scenario in the simulation of Section <ref> has 6 static cars off the road and 1 moving car on the road (except the ego car). We increased the number of moving cars to 3, 6, and 10 respectively, and tested the performance of ego-motion estimation. The results in Fig. <ref> show that the ego-motion estimation errors for the scenarios of 1, 3, 6, and 10 moving cars are on a similar level, which indicates that with a sufficient amount of static cars, the ego-motion estimation is robust to number of moving objects. Next, we kept 10 moving cars and reduced the number of static cars to 3 and 1. The results in Fig. <ref> show that the ego-motion estimation works well for having 3 static cars but does not work for only 1 static car. Hence without a sufficient amount of static objects relative to the number of moving objects, the ego-motion estimation algorithm cannot identify the static set correctly using the RANSAC algorithm, which impacts the subsequent steps. In summary, the above shows that ego-motion estimation can work robustly unless there are very few stationary objects (relative to moving targets) in the scene. In such cases, combining other methodologies such as sensor fusion, to improve the robustness of 3D ego-motion estimation may be a potential solution. §.§ Time Complexity Analysis To analyze the time complexity of the algorithms, let's denote the dimensions of the radar data cube as N_s (samples), N_c (chirps), N_h (horizontal antennas), and N_e (vertical antennas). The overall time complexity of the proposed algorithm can be divided into three parts: 3D ego-motion estimation, radar imaging, and background filtering. 
For the 3D ego-motion estimation, the approximate complexity is 𝒪(N_sN_cN_hN_elogN_sN_c)+𝒪(N_sN_c), which is determined by the range-Doppler FFT and CFAR detection operations. Here, we do not include the complexity of the azimuth angle FFT, elevation angle FFT, and ODR estimator since the number of CFAR detections in these steps is relatively small. Regarding radar imaging, the approximate complexity is 𝒪(N_sN_cN_hN_elogN_hN_e), assuming that we start from the range-Doppler intermediate results obtained in the previous step. For background filtering, the approximate complexity is 𝒪(N_hN_eC), where C is the computation required for every notch filtering, which can be very small based on the analysis presented in the remark. Therefore, the overall time complexity of the algorithm is 𝒪(N_sN_cN_hN_elogN_sN_cN_hN_e), after summing up the complexities mentioned above and ignoring the smaller terms. We compare the time complexity of the proposed algorithm with two best-known reduced-rank STAP baselines: adjacent-bin post-Doppler STAP <cit.> and joint domain localized STAP <cit.>. Note that two baselines do not consider the elevation domain N_e in its degree of freedom for filtering. For a fair comparison to our algorithm, we assume the two baselines repeat the filtering operation at each elevation sample and incoherently sum the results. For adjacent-bin post-Doppler STAP, the overall time complexity is determined by the range-Doppler estimation, which is 𝒪(N_sN_cN_hN_elogN_sN_c), and the space-time adaptive processing with reduced DOF N_hL, where L is the number of adjacent Doppler bins. For space-time adaptive processing, the complexity of processing one Doppler-angle position includes covariance matrix calculation 𝒪(2(N_hL)^3), covariance matrix inverse 𝒪((N_hL)^3), and weight computation and application 𝒪((N_hL)^2+(N_hL)) <cit.>. By summing up these complexities for all range bins, all Doppler-angle positions, and all elevation samples, the overall time complexity of space-time adaptive processing is 𝒪(N_sN_cN_hN_e(N_hL)^3). Therefore, the overall time complexity for adjacent-bin post-Doppler STAP is 𝒪(N_sN_cN_hN_e(logN_sN_c+(N_hL)^3)). Compared to adjacent-bin post-Doppler STAP, the joint domain localized STAP further reduces the DOF to HL through beamforming on the antenna (space) domain, where H is the number of adjacent angle bins. Similar to the previous analysis, we can break down the time complexity of joint domain localized STAP into two parts: range-Doppler-angle estimation, which has a complexity of 𝒪(N_sN_cN_hN_elogN_sN_cN_h), and space-time adaptive processing, which has a complexity of 𝒪(N_sN_cN_hN_e(HL)^3). Therefore, the overall time complexity of joint domain localized STAP is 𝒪(N_sN_cN_hN_e(logN_sN_cN_h+(HL)^3)). Based on the above analysis and the results summarized in Table. <ref>, it is evident that the proposed algorithm has a significantly lower time complexity compared to the two state-of-the-art reduced-rank STAP methods. This is particularly advantageous when the N_e dimension is typically very small in practical scenarios, resulting in logN_hN_e≪ (N_hL)^3 and logN_e≪ (HL)^3. § CONCLUSION This paper presents an efficient algorithm for background removal in automotive radar applications using FMCW radar. The algorithm encompasses radar signal preprocessing, 3D ego-motion estimation, and clutter removal in the azimuth-elevation-Doppler domain using a notch filter approach. 
Extensive evaluations and analyses have demonstrated the algorithm's effectiveness in suppressing background clutter and reducing computation. However, the proposed background removal algorithm relies heavily on the accuracy of the 3D ego-motion estimation, and its performance may degrade in real-world conditions with few stationary objects, where the ego-motion estimation works poorly. In addition, the current 3D ego-motion estimation does not consider temporal consistency across frames, which leads to a suboptimal solution. As a future direction, we plan to explore and develop robust techniques for enhancing 3D ego-motion estimation accuracy (e.g., multi-sensor fusion and incorporating multiple frames), which have the potential to address this issue and further improve the overall performance of the algorithm. IEEEtran
http://arxiv.org/abs/2307.08671v2
20230702080802
Deep Cross-Modal Steganography Using Neural Representations
[ "Gyojin Han", "Dong-Jae Lee", "Jiwan Hur", "Jaehyun Choi", "Junmo Kim" ]
cs.CR
[ "cs.CR", "cs.AI" ]
Deep Cross-Modal Steganography Using Neural Representations ============================================================ Steganography is the process of embedding secret data into another message or data, in such a way that it is not easily noticeable. With the advancement of deep learning, Deep Neural Networks (DNNs) have recently been utilized in steganography. However, existing deep steganography techniques are limited in scope, as they focus on specific data types and are not effective for cross-modal steganography. Therefore, we propose a deep cross-modal steganography framework using Implicit Neural Representations (INRs) to hide secret data of various formats in cover images. The proposed framework employs INRs to represent the secret data, which can handle data of various modalities and resolutions. Experiments on various secret datasets of diverse types demonstrate that the proposed approach is expandable and capable of accommodating different modalities. Deep Steganography, Implicit Neural Representation, Data Hiding § INTRODUCTION Steganography is a technique that involves embedding secret data, such as binary messages, images, audio, or video, into another message or data in a way that makes it difficult to detect. The objective of steganography is to add an extra layer of security to communication, making it harder for unauthorized parties to detect the presence of hidden information. Recently, there has been a surge of interest in utilizing deep neural networks (DNNs) in steganography pipelines. Many of these approaches, such as those outlined in <cit.>, use DNNs as encoders and decoders for embedding and extracting hidden information. These deep steganography techniques, which involve end-to-end training of DNNs, have demonstrated advantages over traditional steganography methods in terms of capacity and security, while also being simpler to implement. However, most of these methods are limited in scope and focus only on specific data types, so they do not provide a comprehensive solution for data hiding. The majority of deep steganography approaches concentrate on hiding binary messages <cit.> or natural images <cit.>. While some efforts have been made to use deep steganography to hide video <cit.> or audio <cit.> within the same data type, there is a shortage of research on cross-modal steganography. This type of steganography involves hiding secret data in cover data that is in a different format from the secret data. This can be particularly challenging when the secret data has higher dimensions than the cover data, such as hiding a video in an image. To address these limitations, we establish a new deep cross-modal steganography framework using Implicit Neural Representations (INRs) that can be generalized to tasks with secret data of various formats. We employed INRs to represent the secret data, as INRs have the ability to handle data of various modalities and resolutions, as seen in Fig. <ref>. Under this framework, the objective is to send secret data expressed in an INR through a container image with minimal distortion from the cover image. To achieve this goal, we make the following contributions: * We enable the sender and recipient to share a base network and transmit a portion of the weights constituting that network in the form of an image. * To reduce quantization error that occurs during the conversion process from the weights to the images, we propose a simplified quantization-aware training. 
We demonstrate the expandability of our method and potential for various applications through experiments on datasets of various modalities. § METHOD §.§ Overview Our goal is to solve the cross-modal steganography problem, specifically hiding secret data 𝐒 of various modalities (such as a video, audio, or 3D shape) into a single cover image 𝐂. The container image 𝐂' including the information about 𝐒 should be difficult to distinguish from the cover image 𝐂 by human observation. In our framework, the sender and recipient share the architecture of an INR, referred to as the base network. The base network is composed of variable weights (𝐖_v) and pre-defined weights (𝐖_p). The hiding stage is carried out by training the INR so that the container image 𝐂' converted from the 𝐖_v is close to the cover image 𝐂, while simultaneously making the revealed secret S' reconstructed by the INR to be close to the secret S. The decoding can be accomplished by reconstructing the entire INR using 𝐖_v which is converted from 𝐂' and 𝐖_p which is pre-defined. This is achieved by treating each channel of the container image as a weight matrix and inserting the weight matrices with converted image channels in the base network. §.§ Deep Cross-Modal Hiding Framework In this paper, we consider, but not limited to, a multi-layer perceptron (MLP) with L layers as an example of INR f_θ where the parameter θ can be defined as a set of matrices such that θ = {𝐖_l|𝐖_l ∈ℝ^C_in× C_out}_l=0^L-1 and C_in and C_out represent the number of units in each layer before and after 𝐖_l. For the proposed method, the sender and recipient share the pre-defined weights 𝐖_p and positions of variable weights. Formally, the variable weights 𝐖_v refer to a subset of the weights constituting the base network that can be trained. Additionally, they share the hyperparameters w_min and w_max, which are required for scaling and quantization. We create a container image 𝐂' by stacking the 𝐖_v and scaling from the range of [w_min, w_max] to [0, 255] with the idea that weight matrix has the same matrix form as an image channel. Therefore, the 𝐖_v are trained for two objects during the hiding phase: 1) the INR including the 𝐖_v should represent secret data S well, and 2) the container image 𝐂' converted from the stack of the 𝐖_v should be close to the cover image 𝐂. During the hiding phase, all other parts of the base network except for the 𝐖_v are fixed. For the first object, the INR f(x_i)_θ: ℝ^k →ℝ^m are trained to approximate the data point d_i ∈ℐ of secret data 𝐒 according to the input coordinate vectors x_i ∈ℐ, where ℐ is a pre-defined set of input index. During the training, the 𝐖_v are trained with a secret data reconstruction loss ℒ_s, where ℒ_s(f_θ) = ∑_i ∈ℐ‖ d_i - f_θ(x_i) ‖^2_2. For the second object, we constrain a stack of the variable weight matrices 𝐖_s ∈ℝ^N × N × 3 to resemble the cover image 𝐂∈ℝ^N × N × 3. That is, we aim to directly utilize scaled 𝐖_s as a container image 𝐂' by minimizing a loss ℒ_c(𝐖_s) = ‖𝐖_s - (𝐂/255 · (w_max - w_min) + w_min) ‖ ^2_2. However, we did not consider the quantization error caused by converting the weights to images in this section. Hence, it is imperative to address the quantization error during the hiding process. §.§ Quantization-Aware Training for Weight-to-Image Conversion It should be noted that converting weights stored in FP32 to an image stored in UINT8 can cause a significant perturbation, leading it to deviate from the converged state. 
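To make this conversion concrete, the following minimal sketch stacks three variable weight matrices into a container image, scales them from [w_min, w_max] to [0, 255], and measures the perturbation introduced by the UINT8 round trip. The weight range, matrix sizes, and the NumPy-only implementation are illustrative assumptions; in the paper the variable weights belong to a SIREN/IGR base network:

```python
import numpy as np

w_min, w_max = -0.1, 0.1             # shared scaling range (hyperparameters shared by sender/recipient)
N = 512                              # container image side length

# Three variable weight matrices W_v, one per color channel of the container image.
rng = np.random.default_rng(0)
W_s = rng.uniform(w_min, w_max, size=(N, N, 3)).astype(np.float32)

# Hiding side: scale weights from [w_min, w_max] to [0, 255] and quantize to UINT8.
container = np.clip(np.round((W_s - w_min) / (w_max - w_min) * 255), 0, 255).astype(np.uint8)

# Revealing side: map the image channels back to weight matrices.
W_rec = container.astype(np.float32) / 255.0 * (w_max - w_min) + w_min

# The quantization round trip perturbs the converged weights, which motivates the QAT step.
print("max per-weight perturbation:", np.abs(W_rec - W_s).max())   # about (w_max - w_min)/255/2
```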
§.§ Quantization-Aware Training for Weight-to-Image Conversion It should be noted that converting weights stored in FP32 to an image stored in UINT8 can cause a significant perturbation, causing the weights to deviate from the converged state. This poses a major challenge and needs to be handled carefully during the process. To address this, we introduce simplified quantization-aware training (QAT) for weight-to-image conversion. QAT is a method widely used in network compression that takes the quantization process into account during training. We use QAT with some modifications to solve the problem that the weights are shifted from the converged state during conversion. We use a quantization module f_θ_Q with quantized variable weights 𝐐_v and the same pre-defined weights, which simulates the quantization process during the training. For the simulation, the variable weights 𝐖_v are quantized to 𝐐_v at every update of 𝐖_v as follows: 𝐐_v = ROUND(255·(𝐖_v - w_min) / (w_max - w_min)) 𝐐_v ←𝐐_v/255 · (w_max - w_min) + w_min. The quantization module f_θ_Q is used to calculate the gradient dℒ/dQ_v. Combining the two objectives introduced above, the total loss for the quantized network ℒ is: ℒ = ℒ_s(f_θ_Q) + β·ℒ_c(𝐐_s) where 𝐐_s is a stack of 𝐐_v. To back-propagate the gradient, we convert dℒ/dQ_v into dℒ/dW_v with the straight-through estimator (STE) <cit.>. Therefore, W_v is finally updated as follows: dℒ/dW_v ≈ STE(dℒ/dQ_v) W_v ←W_v - α· dℒ/dW_v. Fig. <ref> illustrates the proposed steganography framework as a whole.
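A rough sketch of this simplified QAT step is given below; it reuses hiding_loss from the previous snippet. The detach-based straight-through estimator and the qat_step helper are common implementation choices assumed here for illustration, not the authors' exact implementation.

```python
import torch

def quantize_ste(w_v, w_min, w_max):
    """Simulate the UINT8 weight-to-image round trip with a straight-through estimator."""
    # Forward pass: round to the 256 levels an image channel can represent, then map back.
    q = torch.round(255.0 * (w_v - w_min) / (w_max - w_min))
    q = q / 255.0 * (w_max - w_min) + w_min
    # Backward pass: gradients flow through as if no rounding had happened (STE).
    return w_v + (q - w_v).detach()

def qat_step(forward_fn, variable_weights, coords, secret, cover, w_min, w_max,
             optimizer, beta=1.0):
    """One QAT update: the loss is evaluated on the quantized weights Q_v."""
    # variable_weights are assumed to be leaf tensors with requires_grad=True,
    # e.g. optimized with optimizer = torch.optim.Adam(variable_weights, lr=1e-3).
    q_v = [quantize_ste(w, w_min, w_max) for w in variable_weights]
    loss = hiding_loss(forward_fn, q_v, coords, secret, cover, w_min, w_max, beta)
    optimizer.zero_grad()
    loss.backward()      # dL/dW_v is approximated by STE(dL/dQ_v)
    optimizer.step()     # W_v <- W_v - alpha * dL/dW_v
    return loss.item()
```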
§ EXPERIMENTS We experiment with the proposed method on three tasks: video-into-image steganography, audio-into-image steganography, and shape-into-image steganography. These tasks have not been explored well because the secret data has higher dimensions than the cover data or contains temporal information. We demonstrate the performance and flexibility of our method by solving these challenging tasks for the first time. §.§ Experimental Setting Datasets. For all three tasks, cover images are randomly sampled from ImageNet <cit.>, one for each item in the corresponding secret dataset, and all sampled cover images are resized to 512 × 512. For video-into-image steganography, we use the Densely Annotated VIdeo Segmentation dataset (DAVIS) <cit.> as a secret dataset. We resize the secret videos to 128 × 128 and extract 16 frames from the videos. For audio-into-image steganography, we hide audio data from the GTZAN music genre dataset <cit.>. Each hidden audio clip is cropped to a length of 100,000 samples with a 22,050 sample rate (4.54 seconds). For shape-into-image steganography, we selected two samples, Stanford Bunny and Dragon, as secret three-dimensional (3D) shapes from The Stanford 3D Scanning Repository <cit.>. Details. We use SIREN <cit.> with four hidden layers (six layers in total) as the architecture of the INRs to hide video and audio, and IGR <cit.> with six hidden layers (eight layers in total) as the architecture of the INRs to hide shapes. For SIREN and IGR, {𝐖_1,𝐖_2,𝐖_3} and {𝐖_1,𝐖_2,𝐖_4} are the variable weights and represent the color channels of the container image, respectively. The remaining pre-defined weights 𝐖_p are fixed in an initialized state. The INRs are trained for 5,000 steps for video and shapes, and 20,000 steps for audio, using the Adam optimizer. The learning rate of the optimizer is set to 1 × 10^-3 for video and audio, and to 1 × 10^-4 for shapes. §.§ Experimental Results Video-into-image steganography. The Average Pixel Discrepancy (APD) of the cover and of the secret is calculated as the L_1 distance between the cover and the container, and between the frames of the secret video and the revealed secret video, respectively. In addition to APD, we report Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Perceptual Similarity (LPIPS) in the same way in Table <ref>. The qualitative results for video-into-image steganography can be seen in Fig. <ref>. Audio-into-image steganography. For evaluation of the reconstructed audio, we report Absolute Error (AE) and Signal-to-Noise Ratio (SNR) between the secret and the revealed secret instead of APD and PSNR in Table <ref>. In addition, the audio of the secret and of the revealed secret is visualized as mel-spectrograms in Fig. <ref>. Shape-into-image steganography. We experiment with the proposed method on shape-into-image steganography to demonstrate that the proposed method has a significantly broader scope of application than conventional deep steganography methods, as 3D shapes are fundamentally different from the data that previous deep steganography methods attempted to conceal. In Fig. <ref>, we present the reconstructed Stanford Bunny and Dragon from various viewpoints. Ablation study. We measure the performance difference of video-into-image steganography depending on whether QAT is applied or not. The experimental results in Table <ref> show that when QAT is not applied, the revealed secret is severely degraded relative to the original due to the weight shifts caused by quantization. We also present the visual effect of QAT through the sample included in Fig. <ref>. §.§ Discussion and Future Work Through our experiments, we show that it is possible to hide secret data of various modalities with only subtle changes to the cover image. However, there is a limitation that the format of the cover data is fixed as an image, and there is some loss of high-frequency information in the revealed secret. Since this paper introduces a steganography solution using INRs for the first time, we believe that there is still room for improvement and various applications. We also look forward to future work improving the robustness of the proposed approach against container distortions such as blur or JPEG compression. § CONCLUSION We proposed a novel approach to deep cross-modal steganography utilizing INRs, which enables the hiding of data of diverse modalities within images. We believe that our proposed method will stimulate further investigation, as it introduces a fresh research direction for deep steganography. § ACKNOWLEDGEMENT This work was supported by the Artificial intelligence industrial convergence cluster development project funded by the Ministry of Science and ICT (MSIT, Korea) & Gwangju Metropolitan City.
http://arxiv.org/abs/2307.03145v1
20230706171621
Chaos in a tunneling universe
[ "Martin Bojowald", "Ari Gluckman" ]
gr-qc
[ "gr-qc", "hep-th" ]
§ INTRODUCTION Quantum effects such as tunneling may be relevant in the early universe at high temperature and curvature. Early models <cit.> have recently been extended to oscillating versions <cit.> in which the evolving scale factor, classically trapped in a finite region, may escape by quantum tunneling and approach a singularity <cit.>. An application <cit.> of quasiclassical methods revealed non-trivial features of the tunneling dynamics that are not captured by the traditional derivation of tunneling coefficients from stationary states. In particular, some of the dynamics was found to depend sensitively on the choice of initial values in the trapped region, suggesting chaotic behavior. The purpose of the present paper is to confirm this suspicion by a dedicated analysis. The physical relevance of chaos in universe models can be seen by going beyond the first approximation of an exactly homogeneous universe. At low curvature, observations of large-scale structure indicate that approximate spatial homogeneity is a good late-time assumption, but it is unlikely to hold in the early universe. At large density and curvature, the gravitational dynamics is rather dominated by attraction to denser regions and their subsequent collapse, suggesting a very inhomogeneous distribution out of which our universe may have arisen by cosmic inflation. The corresponding rapid expansion would then have magnified and diluted a small region that eventually formed our visible universe. In the initial distribution, however, this small region would have been only one tiny patch. Thanks to its smallness, it may be assumed to be nearly homogeneous and approximately described by simple (classical or quantized) Friedmann dynamics. But its properties were determined by high-density features that are more involved than those tested at late times. It has been known for some time that the classical dynamics of such a patch is chaotic <cit.>, provided it includes effects of anisotropy (while still being spatially homogeneous). Such a dynamics is expected asymptotically close to a spacelike singularity according to the Belinskii–Khalatnikov–Lifshitz (BKL) scenario <cit.>.
Given the asymptotic nature of this model, the effects of this kind of chaos are most pronounced in backward evolution closer and closer to the big-bang singularity. They are therefore relevant for a conceptual analysis of possible initial conditions at the very beginning of the universe, but their implications for potential observations, seen for instance through the magnifying glass of inflation, would be rather indirect. Once certain matter effects start being relevant, the anisotropic asymptotic geometry may isotropize <cit.>, a property that is also desirable for models of inflation. In an intermediate phase between an asymptotically early BKL regime and the (still early) beginning of inflation, anisotropy may be ignored while quantum effects are strong. The results presented here show that even the isotropic dynamics is chaotic if it is described quasiclassically by including quantum fluctuation terms, applied in the specific analysis to tunneling-type potentials as in oscillating models. We will use the same model and quasiclassical extensions as derived in <cit.>, reviewed in the next section, and show proofs of chaos based on a numerical analysis of the fractal dimension in a space of initial values. In our conclusions we will demonstrate which features of the specific potential are likely to be responsible for chaos. Compared with BKL-type chaos, the new chaotic features identified here, closer to the onset of inflation, may have phenomenological implications which we leave for future analysis. § QUASICLASSICAL MODEL The classical model and its potential, introduced in <cit.>, follow from the Friedmann equation ȧ^2/a^2 + k/a^2= 8π G/3(Λ+σ/a+p_ϕ^2/2a^6) with positive spatial curvature, k>0, a negative cosmological constant Λ<0, and two matter contributions, one with energy density σ/a where σ>0, and one from a free, massless scalar field ϕ with momentum p_ϕ. Our results do not depend much on the specific features of the contributions from σ and p_ϕ, other than the trapped potential region they form together with the curvature term. For the latter, we choose k=1, but smaller values are also possible; see <cit.> for more details. The quasiclassical methods we use are canonical. We therefore replace the time derivative ȧ of the scale factor with the standard momentum p_a=-3/4π G a ȧ in Friedmann cosmology; see for instance <cit.>. The Friedmann equation (<ref>) can then be written as 0= 16/9π^2G^2 p_a^2+ a^2 U_ harmonic(a)- p̃^2/a^2 with a harmonic potential U_ harmonic(a)= ω^2(a-γ/ω)^2+k-γ^2 expressed in terms of the parameters ω=√(-8π GΛ/3) γ=√(-2π Gσ^2/3Λ) . The scalar contribution is not harmonic, and only slightly rewritten by introducing p̃=√(4π G/3) p_ϕ . Finally, we perform a canonical transformation from (a,p_a) to (α,p_α) where α=ln(ωγ a) and p_α=ap_a= -3/4π G a^2ȧ . In these variables, the Friedmann equation is equivalent to 0=p_α^2+U_p(α) with the potential U_p(α) = e^4α/β^2(k-2e^α+e^2α/γ^2) -p^2 and β=4π G/3ω^2γ^2=(4π G/3)^3 σ^2 , p=3/4π Gp̃=√(3/4π G) p_ϕ . As shown in Figure <ref>, the contribution from the scalar field can now be seen as opening up a classically allowed region around α→-∞, making it possible for the universe to tunnel from the trapped region, formed by the other contributions, to a big-bang singularity. On the other side of the α-axis, expansion to infinite size is prevented by the steep positive potential contribution proportional to e^6α, which is implied by the negative cosmological constant.
The density contribution σ/a provides a negative contribution to the potential U_p(α) that forms a trapped region at intermediate values of α, separated from the asymptotic free region around α→-∞ by a positive barrier implied by the curvature term. For the intermediate negative contribution to form a trapped region, it must be dominant between the two regions implied by the contributions from spatial curvature and the cosmological constant, respectively. In the original Friedmann equation, the corresponding energy density must therefore follow a behavior between the power laws a^-2 of the curvature term and a^0 of the cosmological constant. This requirement explains the non-standard a-dependence of the energy density σ/a. The quasiclassical dynamics of a given classical system in canonical form is obtained by viewing variables such as α and p_α as expectation values of the corresponding operators, taken in an evolving quantum state. Any non-harmonic potential then implies that these variables couple to fluctuations, correlations, and higher moments, implying dynamics in a higher-dimensional configuration space. Coupling terms can be derived by equipping moments with a Poisson bracket and inserting them in the expectation value of the Hamilton operator of the system, taken in the same state in which the moments are computed <cit.>. In general, the usual central moments do not immediately appear in canonically conjugate form, but suitable canonical pairs exist locally thanks to the Darboux theorem. Such canonical variables have been derived for moments up to fourth order <cit.>. For second-order moments, as a first approximation, canonical moment variables for a single pair of degrees of freedom, such as (α,p_α) here, have been known for some time, discovered independently in a variety of fields <cit.>: There is an independent canonical pair (s,p_s) that describes second-order moments according to Δ(α^2) = s^2 Δ(α p_α) = sp_s Δ(p_α^2) = p_s^2+U/s^2 , where U is a constant bounded from below by U≥ħ^2/4 by the uncertainty relation. We are using a general notation for moments Δ(A^aB^b)=⟨(Â-⟨Â⟩)^a(B̂-⟨B̂⟩)^b⟩_ symm in completely symmetric (or Weyl) ordering. According to this notation, the two variances for a single canonical pair are (Δα)^2=Δ(α^2) and (Δ p_α)^2=Δ(p_α^2) and the covariance is Δ(α p_α). If the classical Hamiltonian is H(α,p_α), the new canonical variables for second-order moments can be introduced in an effective Hamiltonian by performing a Taylor expansion of H(α+δα, p_α+δ p_α) around a generic pair (α,p_α) and replacing terms quadratic in δα and δ p_α with the moments (<ref>)–(<ref>). The effective energy expression, derived from (<ref>) in the cosmological model, then reads 0=p_α^2+p_s^2+ U/s^2+ U_p(α)+ 1/2 U_p”(α) s^2 where U_p(α) is given in (<ref>). (The classical equation (<ref>) is a Hamiltonian constraint, which in a quantization is turned into a constraint operator that annihilates physical states. This condition restricts not only expectation values of basic operators by an equation approximated by (<ref>) semiclassically, but also fluctuations and higher moments of the state <cit.>. Since the constraint does not depend on ϕ but only on its momentum, we can assume that moment constraints are solved by using restricted values for ϕ-moments. The latter do not appear in the effective constraint and we do not need specific solutions for them.) 
While the momentum dependence of (<ref>) is quadratic and does not imply higher-order terms in the Taylor expansion, the potential is not harmonic. We therefore ignore certain quantum corrections in a truncation to second order in s, given by (<ref>). Tunneling, a process during which a wave packet splits up into at least two smaller packets, is likely to depend on moments of order higher than two. We should therefore amend (<ref>) by suitable higher-order terms, while keeping the system sufficiently simple for an initial analysis. In particular, higher-order moments, in a canonical formulation, describe degrees of freedom independent of both (α,p_α) and (s,p_s), and therefore lead to configuration spaces of large dimensions if they are included in complete form. As an approximation, it is possible to include some higher-moment effects without higher dimensions by making an ansatz for the possible behavior of moments on the quantum degree of freedom (s,p_s) already introduced for second order. Dimensional arguments suggest the power-law form Δ(α^n)∝ s^n for an α-moment of order n. If this form were realized exactly with coefficient one, the Taylor expansion of the effective potential could be summed up analytically: 0=p_α^2+ p_s^2+ U/s^2+ U_p(α)+ ∑_n=2^∞U_p^(n)/n!Δ(α^n) =p_α^2+p_s^2+U/s^2 +1/2(U_p(α+s)+U_p(α-s)) . As in <cit.>, following <cit.>, we bring the fourth-order term closer to Gaussian form, where Δ(α^4)=3s^4 rather than s^4, by adding the final term in 0=p_α^2+p_s^2+U/s^2 +1/2(U_p(α+s)+U_p(α-s))+ 1/12 U_p^””(α)s^4 . Our analysis of chaos will use mainly the small-s dynamics in the trapped region before much tunneling happens, in which case the quasiclassical approximation is expected to be reliable. § TUNNELING AND CHAOS Tunneling is quasiclassically described by motion in the (α,s)-plane, relying on several characteristic features of the effective potential in (<ref>); see Figure <ref>. By the addition and subtraction of s in the terms resulting from (<ref>), the classically trapped region is extended to a diagonal channel in the s-direction. The channel is bounded by a finite wall to the left and a steep increasing wall to the right. Since the wall on the left has a height lower than the classical barrier for sufficiently large s, tunneling is possible when a quasiclassical trajectory crosses over the wall into the unbounded region to the left, where it will then continue almost freely except for one possible reflection at the U/s^2 potential; see Fig. <ref>. However, since the quasiclassical model does not capture all features of tunneling, there are also trajectories that never cross over the wall even if their energy would be sufficient for full quantum mechanical tunneling; see Fig. <ref>. Such trajectories are stuck in the channel and keep bouncing between the two walls, moving to ever larger s. Here, the quasiclassical approximation will eventually break down. Trajectories stuck in the channel therefore do not describe correct features of tunneling, but their presence allows us to draw an important distinction between two types of trajectories: Those that tunnel correctly by moving over the wall on the left, and those that get stuck in the channel. The latter can be split into subcases of trajectories getting stuck only to one side in time (future or past; see Fig. <ref>) or to both sides. 
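To make the quasiclassical dynamics tangible, the following is a rough numerical sketch of trajectories generated by the constraint with the Gaussian-type fourth-order correction. It is an illustration only, not the code behind the paper's figures: the parameter values, the initial point, and the finite-difference gradients of the effective potential are assumptions chosen so that the starting point lies in the classically allowed part of the channel.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (assumptions, not the values used in the paper).
# U is the fluctuation constant (U >= hbar^2/4), taken small here for illustration.
k, beta, gamma, p, U = 1.0, 1.0, 1.2, 0.1, 0.0025

def U_p(alpha):
    """Classical potential U_p(alpha) of the oscillating model."""
    return np.exp(4*alpha)/beta**2 * (k - 2*np.exp(alpha) + np.exp(2*alpha)/gamma**2) - p**2

def d4U_p(alpha):
    """Analytic fourth alpha-derivative of U_p (the constant -p^2 drops out)."""
    return (256*k*np.exp(4*alpha) - 1250*np.exp(5*alpha)
            + 1296*np.exp(6*alpha)/gamma**2) / beta**2

def V(alpha, s):
    """Quasiclassical potential including the fourth-order, Gaussian-type correction."""
    return U/s**2 + 0.5*(U_p(alpha + s) + U_p(alpha - s)) + d4U_p(alpha)*s**4/12.0

def rhs(tau, y, h=1e-6):
    alpha, s, p_alpha, p_s = y
    dV_da = (V(alpha + h, s) - V(alpha - h, s)) / (2*h)
    dV_ds = (V(alpha, s + h) - V(alpha, s - h)) / (2*h)
    # Hamilton's equations for the constraint function H = p_alpha^2 + p_s^2 + V(alpha, s).
    return [2*p_alpha, 2*p_s, -dV_da, -dV_ds]

# Initial point in the channel with p_s = 0; p_alpha is fixed by the constraint H = 0.
alpha0, s0 = 0.36, 0.1
p_alpha0 = np.sqrt(max(-V(alpha0, s0), 0.0))
sol = solve_ivp(rhs, (0.0, 40.0), [alpha0, s0, p_alpha0, 0.0], max_step=0.01, rtol=1e-8)
# Inspecting sol.y[0] (alpha) and sol.y[1] (s) shows whether the trajectory crosses
# the wall on the left or keeps bouncing up the channel to ever larger s.
```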
Physically, any trajectory that gets stuck corresponds to a wave function for which higher moments are relevant, while trajectories that do not get stuck have a tunneling process well described by lower moments. A detailed analysis of trajectories, given in <cit.>, showed that the potential (<ref>) does not reliably describe tunneling because the uniform nature of the channel, seen in Figure <ref>, makes it much more likely for trajectories to follow the channel, rather than crossing the wall to the left. The fourth-order modification in (<ref>), motivated by a more Gaussian behavior of states, was found to improve the tunneling description by quasiclassical trajectories. With this modification, the channel acquires new features at small s, shown in Figure <ref>, that can help to turn trajectories toward the channel wall. There were two indications for chaos in this dynamics found in <cit.>: A sensitive dependence of long-term outcomes of trajectories on their initial values in the classically trapped region; and the shape of the confining walls around the classically trapped region, extended to the (α,s)-plane. Since the latter have convex or defocussing contributions, especially with the fourth-order modification as seen in Figure <ref>, mathematical arguments from dynamical billiard systems may be used to infer the possibility of chaotic features <cit.>, as done also in other cosmological models <cit.>. However, the walls are not completely convex. A dedicated analysis is therefore required to determine properties of chaos <cit.>. We do so now by numerical computations of the fractal dimension of sets of initial values in the bottom part of the channel that give rise to the same long-term outcome of tunneling trajectories. The sensitivity to the choice of initial values is illustrated by Figure <ref>. The procedure for calculating the fractal dimension of the model and demonstrating chaos involved generating a 50× 50 lattice of distinct tunneling results, illustrated in Figure <ref>. The points in the sample region were classified as fully trapped, partially trapped, or untrapped based on the final state of the model. The lattice was analyzed using the uncertainty exponent analysis for sensitive fractal boundaries demonstrated in <cit.>, a function which scales as a power of the radius (or the size of the basin boundary), f(δ)∼δ^ϵ where ϵ is the uncertainty coefficient. Points are taken within a radius δ, centered sequentially on each lattice point. The fraction of points within δ demonstrating different final states from the initial point was calculated for each lattice element, as in Figure <ref>. Following <cit.>, systems where ϵ=1 are not chaotic — no uncertainty appears when varying initial conditions. The range of ϵ for chaotic systems is given by 0≤ϵ<1. Lower values of ϵ indicate that the system is more chaotic. As emphasized in <cit.>, measuring chaos using this basin method <cit.> is more suitable for relativistic or time reparameterization invariant systems because it depends only on the final outcomes of trajectories and not on their parameterization by time, unlike for instance the computation of Lyapunov exponents. Reparameterization of the time variable will not alter the final states indicated on the lattice. As also used recently in <cit.>, the method can easily be generalized to quasiclassical descriptions of quantum systems. 
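A schematic NumPy version of this uncertainty-exponent estimate might look as follows. The outcome labels, the square (rather than circular) neighbourhood of radius δ, and the hypothetical outcome_grid are simplifications introduced for illustration; this is not the code used to produce the figures.

```python
import numpy as np

def uncertainty_exponent(outcomes, deltas):
    """Estimate the uncertainty exponent from a lattice of final-state labels.

    outcomes -- 2D integer array: the long-time outcome of each initial condition
                (e.g. 0 = tunnels over the wall, 1 = stuck to one side, 2 = stuck to both)
    deltas   -- radii (in lattice units) at which the uncertain fraction is measured;
                every resulting fraction is assumed to be non-zero
    """
    n_rows, n_cols = outcomes.shape
    fractions = []
    for delta in deltas:
        uncertain, total = 0, 0
        for i in range(n_rows):
            for j in range(n_cols):
                i0, i1 = max(0, i - delta), min(n_rows, i + delta + 1)
                j0, j1 = max(0, j - delta), min(n_cols, j + delta + 1)
                window = outcomes[i0:i1, j0:j1]
                # A point is "uncertain" if some neighbour within delta ends up differently.
                uncertain += int(np.any(window != outcomes[i, j]))
                total += 1
        fractions.append(uncertain / total)
    # f(delta) ~ delta^epsilon, so epsilon is the slope of a log-log fit.
    eps, _ = np.polyfit(np.log(np.asarray(deltas, dtype=float)), np.log(fractions), 1)
    return eps, np.array(fractions)

# Example usage on a hypothetical 50 x 50 grid of classified initial data:
# eps, f = uncertainty_exponent(outcome_grid, deltas=range(1, 11))
```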
Furthermore, ϵ satisfies <cit.> ϵ=D-D_0 where D is the dimension of the phase space and D_0 indicates the dimension of the boundary which divides the regions with different outcomes. In chaotic systems, D_0 assumes a non-integer value which means that the boundaries are fractal. Computing the value of ϵ for the model produced Figure <ref>, using values 1≤δ≤ 10. A linear fit of the data in a double logarithmic plot revealed the slope ϵ=0.129, see Figure <ref>, and is thus a reliable indicator of chaos. § CONCLUSIONS Our results demonstrate that the quasiclassical dynamics of the oscillating universe model studied here is chaotic. The classical system has a 1-dimensional configuration space and therefore cannot have chaos, but quantization implies additional independent degrees of freedom such as the fluctuation parameter s in our analysis. The appearance of chaos here is therefore distinct from the traditional notion of quantum chaos, which is usually analyzed in situations in which the classical system is already chaotic. Our model is closer to discussions of chaos in Bohmian quantum mechanics, such as <cit.>, in which the quantum potential plays the role of our quasiclassical potential. Compared with Bohmian quantum mechanics, our analysis is held completely at a phase-space level as in classical mechanics. Quantum effects are described by moments in a canonical parameterization, implying new configuration variables and momenta. The full wave function is approximated and ultimately replaced by the moments and does not play an intermediary role, for instance as the wave function of Bohmian quantum mechanics used to compute the quantum potential. Standard quantitative methods to analyze chaos can therefore be applied directly. A qualitative argument, using the convex nature of some portions of the potential walls bounding the quasiclassical trapped region, demonstrate a relationship between chaos and detailed properties of the quantum state. In particular, our quantitative results about chaos refer to a quasiclassical potential with fourth-order moments of Gaussian form, which compared with other choices of moments leads to more convex walls as seen in Figure <ref>. Our results therefore suggest that details of quantum states and properties of their quantum information may have direct implications on important features of the early-universe dynamics. We are grateful to Sara Fernández Uria for bringing our attention to the basin method and for discussions about it. This work was supported in part by NSF grant PHY-2206591. 10 tunneling A. Vilenkin, Quantum creation of universes, Phys. Rev. D 30 (1984) 509–511. OscillatingFriedmann M. P. Da̧browski, Oscillating friedman cosmology, Ann. Phys. 248 (1996) 199–219, [http://xxx.lanl.gov/abs/gr-qc/9503017gr-qc/9503017]. OscillatingSimple P. W. Graham, B. Horn, S. Kachru, S. Rajendran, and G. Torroba, A simple harmonic universe, JHEP 02 (2014) 029, [http://xxx.lanl.gov/abs/1109.02821109.0282]. OscillatingTunnel M. P. Da̧browski and A. L. Larsen, Quantum tunneling effect in oscillating friedmann cosmology, Phys. Rev. D 52 (1995) 3424–3431, [http://xxx.lanl.gov/abs/gr-qc/9504gr-qc/9504]. OscillatingCollapse A. T. Mithani and A. Vilenkin, Collapse of simple harmonic universe, JCAP 01 (2012) 028, [http://xxx.lanl.gov/abs/1110.40961110.4096]. OscillatingRate A. T. Mithani and A. Vilenkin, Tunneling decay rate in quantum cosmology, Phys. Rev. D 91 (2015) 23511, [http://xxx.lanl.gov/abs/1503.004001503.00400]. TunnelingUniverse M. Bojowald and P. 
Petersen, Tunneling dynamics of an oscillating universe model, JCAP 05 (2022) 007, [http://xxx.lanl.gov/abs/2110.094912110.09491]. ChaosGR J. D. Barrow, Chaotic behaviour in general relativity, Phys. Rep. 85 (1982) 1–49. Farey N. J. Cornish and J. J. Levin, The mixmaster universe: A chaotic farey tale, Phys. Rev. D 55 (1997) 7489. ChaosRel A. E. Motter, Relativistic chaos is coordinate invariant, Phys. Rev. Lett. 91 (2003) 231101. Billiards T. Damour, M. Henneaux, and H. Nicolai, Cosmological billiards, Class. Quantum Grav. 20 (2003) R145–R200, [http://xxx.lanl.gov/abs/hep-th/0212256hep-th/0212256]. BKL V. A. Belinskii, I. M. Khalatnikov, and E. M. Lifschitz, A general solution of the einstein equations with a time singularity, Adv. Phys. 31 (1982) 639–667. Isotropize C. W. Misner, The isotropy of the universe, Astrophys. J. 151 (1968) 431–457. Infrared M. Bojowald, The bkl scenario, infrared renormalization, and quantum cosmology, JCAP 01 (2019) 026, [http://xxx.lanl.gov/abs/1810.002381810.00238]. Foundations M. Bojowald, Foundations of Quantum Cosmology. IOP Publishing, London, UK, 2020. EffAc M. Bojowald and A. Skirzewski, Effective equations of motion for quantum systems, Rev. Math. Phys. 18 (2006) 713–745, [http://xxx.lanl.gov/abs/math-ph/0511043math-ph/0511043]. Karpacz M. Bojowald and A. Skirzewski, Quantum gravity and higher curvature actions, Int. J. Geom. Meth. Mod. Phys. 4 (2007) 25–52, [http://xxx.lanl.gov/abs/hep-th/0606232hep-th/0606232]. Proceedings of “Current Mathematical Topics in Gravitation and Cosmology” (42nd Karpacz Winter School of Theoretical Physics), Ed. Borowiec, A. and Francaviglia, M. Bosonize B. Baytaş, M. Bojowald, and S. Crowe, Faithful realizations of semiclassical truncations, Ann. Phys. 420 (2020) 168247, [http://xxx.lanl.gov/abs/1810.121271810.12127]. EffPotRealize B. Baytaş, M. Bojowald, and S. Crowe, Effective potentials from canonical realizations of semiclassical truncations, Phys. Rev. A 99 (2019) 042114, [http://xxx.lanl.gov/abs/1811.005051811.00505]. VariationalEffAc R. Jackiw and A. Kerman, Time dependent variational principle and the effective action, Phys. Lett. A 71 (1979) 158–162. GaussianDyn F. Arickx, J. Broeckhove, W. Coene, and P. van Leuven, Gaussian wave-packet dynamics, Int. J. Quant. Chem.: Quant. Chem. Symp. 20 (1986) 471–481. EnvQuantumChaos R. A. Jalabert and H. M. Pastawski, Environment-independent decoherence rate in classically chaotic systems, Phys. Rev. Lett. 86 (2001) 2490–2493. QHDTunneling O. Prezhdo, Quantized hamiltonian dynamics, Theor. Chem. Acc. 116 (2006) 206. CQC T. Vachaspati and G. Zahariade, A classical-quantum correspondence and backreaction, Phys. Rev. D 98 (2018) 065002, [http://xxx.lanl.gov/abs/1806.051961806.05196]. CQCFieldsHom M. Mukhopadhyay and T. Vachaspati, Rolling with quantum fields, http://xxx.lanl.gov/abs/1907.037621907.03762. EffCons M. Bojowald, B. Sandhöfer, A. Skirzewski, and A. Tsobanjan, Effective constraints for quantum systems, Rev. Math. Phys. 21 (2009) 111–154, [http://xxx.lanl.gov/abs/0804.33650804.3365]. EffConsRel M. Bojowald and A. Tsobanjan, Effective constraints for relativistic quantum systems, Phys. Rev. D 80 (2009) 125008, [http://xxx.lanl.gov/abs/0906.17720906.1772]. EffConsComp M. Bojowald and A. Tsobanjan, Effective constraints and physical coherent states in quantum cosmology: A numerical comparison, Class. Quantum Grav. 27 (2010) 145004, [http://xxx.lanl.gov/abs/0911.49500911.4950]. QuantumHiggsInflation M. Bojowald, S. Brahma, S. Crowe, D. Ding, and J. 
McCracken, Quantum higgs inflation, Phys. Lett. B 816 (2021) 136193, [http://xxx.lanl.gov/abs/2011.023552011.02355]. EffPotInflation M. Bojowald, S. Brahma, S. Crowe, D. Ding, and J. McCracken, Multi-field inflation from single-field models, JCAP 08 (2021) 047, [http://xxx.lanl.gov/abs/2011.028432011.02843]. Sinai Y. G. Sinai, Dynamical systems with elastic reflections. ergodic properties of dispersing billiards., Russian Mathematical Surveys 25 (1970) 137–189. FocusingChaos L. A. Bunimovich, On billiards close to dispersing, Mathematical USSR Sbornik 95 (1974) 49–73. BasinChaos S. W. McDonald, C. Grebogi, E. Ott, and J. A. Yorke, Fractal basin boundaries, Physica D: Nonlinear Phenomena 17 (1985) 125. QuasiClassChaos M. Bojowald, D. Brizuela, P. Calizaya Cabrera, and S. Uria, The chaotic behavior of the bianchi ix model under the influence of quantum effects, http://xxx.lanl.gov/abs/2307.000632307.00063. BohmianChaos G. Contopoulos and A. C. Tzemos, Chaos in bohmian quantum mechanics: A short review, Regul. Chaot. Dyn. 25 (2020) 476–495, [http://xxx.lanl.gov/abs/2009.058672009.05867].
http://arxiv.org/abs/2307.02061v1
20230705065841
Randomness Certification from Multipartite Quantum Steering for Arbitrary Dimensional Systems
[ "Yi Li", "Yu Xiang", "Xiao-Dong Yu", "H. Chau Nguyen", "Otfried Gühne", "Qiongyi He" ]
quant-ph
[ "quant-ph" ]
http://arxiv.org/abs/2307.00318v1
20230701120722
The calculation of the asymptotic charges at the critical sets of null infinity
[ "Mariem Magdy Ali Mohamed" ]
gr-qc
[ "gr-qc", "math-ph", "math.MP" ]
Mariem Magdy Ali Mohamed (E-mail address: [email protected]), School of Mathematical Sciences, Queen Mary, University of London, Mile End Road, London E1 4NS, United Kingdom. The calculation of the asymptotic charges at the critical sets of null infinity. The studies of the asymptotic structure at null or spatial infinity in General Relativity play different roles in the discussion of gravitational radiation, the gravitational memory effect, and conserved quantities. Bondi, Metzner, and Sachs established that the asymptotic symmetry group for asymptotically simple spacetimes is the infinite-dimensional BMS group. Given that null infinity is divided into two sets: past null infinity ℐ^- and future null infinity ℐ^+, one can identify two independent symmetry groups: BMS^- at ℐ^- and BMS^+ at ℐ^+. Associated with these symmetries are the so-called BMS charges. Recently, it was suggested that the generators of BMS^- and BMS^+ and their associated charges are related via an antipodal reflection map near spatial infinity. To verify this matching, an analysis of the gravitational field near spatial infinity is required. This task is complicated due to the singular nature of spatial infinity for spacetimes with non-vanishing ADM mass. Different frameworks are introduced to address this singularity. This paper reviews two formulations and their relationship: Friedrich's cylinder at spatial infinity and Ashtekar's definition of asymptotically Minkowskian spacetimes at spatial infinity. It also reviews the role of Friedrich's cylinder at spatial infinity in the investigation of the matching of the spin-2 charges on Minkowski spacetime and in the full GR setting. § INTRODUCTION In classical General Relativity (GR), isolated systems are generally described as asymptotically flat spacetimes. The BMS group <cit.>, named after Bondi, Metzner and Sachs, is the infinite-dimensional symmetry group for asymptotically flat spacetimes. A conjecture by Strominger <cit.> suggests that the asymptotic symmetries and charges for asymptotically flat spacetimes can be linked to soft theorems <cit.> and the gravitational memory effect <cit.>. This link is based on the idea that the BMS groups at past and future null infinities (BMS^- at ℐ^- and BMS^+ at ℐ^+) can be matched by an antipodal reflection map near spatial infinity. The matching of these symmetries leads to a global diagonal symmetry group in GR. Recently, Capone et al. <cit.> derived the map relating the asymptotic data and their charges in the limits of spatial infinity. This map was used in <cit.> to show that the different BMS charges defined in the literature match with the conserved charges at spatial infinity. Generically, the matching of BMS^+ and BMS^- and their associated charges requires an analysis of the gravitational field and the charges near spatial infinity. However, the standard conformal representation of asymptotically flat spacetimes is not well suited for this discussion, mainly due to the singular nature of the conformal structure near spatial infinity i^0. To this end, different formulations are used to overcome this singular behaviour <cit.>, and in recent years, numerous articles have discussed the asymptotic symmetry groups at spatial infinity <cit.> and their matching with the asymptotic charges at null infinities <cit.>.
The notion of asymptotic flatness mentioned earlier classifies spacetimes that resemble Minkowski at large null distances, and it is not concerned with the behaviour of the gravitational field at spatial infinity. In an attempt to rectify this, the notion of asymptotically Minkowskian spacetimes at spatial infinity was introduced in <cit.>. The central idea behind this definition is that the standard conformal representation of Minkowski forces all points at spatial infinity to be mapped to a single point i^0. To resolve the structure of spatial infinity, one is compelled to give up on the idea of a conformal rescaling of the spacetime metric in order to blow up i^0 to a 3-dimensional unit timelike hyperboloid. This hyperboloid is known as the hyperboloid at spatial infinity. This idea can be carried over to more general spacetimes by introducing the asymptote at spatial infinity ℋ which acts as a timelike boundary of a 4-dimensional manifold; this is similar to Penrose's definition of asymptotic simplicity in which ℐ acts as a null boundary for the conformal manifold. Then a spacetime is said to be asymptotically Minkowskian at spatial infinity (AMSI) if the asymptote ℋ satisfies the following: i) the boundary ℋ is at infinity with respect to the physical metric, ii) the boundary ℋ is timelike, the intrinsic metric q_ab on ℋ and the normal n^a admit smooth limits to ℋ, iii) the vacuum Einstein equations hold in the limits of ℋ, and finally iv) the boundary ℋ has the topology ℝ×𝕊^2 and is geodesically complete. Another formulation used in the literature <cit.> to resolve the structure of spatial infinity is Friedrich's formulation of spatial infinity, originally introduced in <cit.>. The motivation for Friedrich's formulation of spatial infinity is to obtain a regular initial value problem at spatial infinity for the conformal Einstein field equations. This representation of spatial infinity is linked to the conformal properties of spacetimes, and it introduces a blow-up of the spatial infinity point i^0 to a cylinder (-1,1) ×𝕊^2 commonly known as the cylinder at spatial infinity ℐ. The cylinder ℐ touches the endpoints of past and future null infinities at the critical sets ℐ^± = {± 1 }×𝕊^2. This representation of spatial infinity is useful for relating quantities at the critical sets ℐ^± to initial data on a Cauchy hypersurface — see <cit.>. In <cit.>, we show that Ashtekar's representation of spatial infinity can be related to Friedrich's formulation; a brief discussion of this relation will be provided in this work. The purpose of this paper is to provide a streamlined presentation of the calculation of the asymptotic charges at the critical sets using Friedrich's formulation of spatial infinity. The full analysis of the asymptotic charges in a full GR setting using Friedrich's formulation will be presented elsewhere. However, the main results can be summarised as follows: For the generic initial data set given in <cit.>, the asymptotic charges associated with supertranslation symmetries at ℐ^± are well-defined if and only if the initial data satisfy extra regularity conditions. The regularity conditions can be imposed on the free conformal initial data. Finally, given initial data that satisfy the regularity conditions, the asymptotic charges at ℐ^+ are equal to the charges at ℐ^-. The structure of this paper will be as follows: In Section <ref>, a brief discussion of the relation between Ashtekar's and Friedrich's formulations of spatial infinity is presented.
In Section <ref>, the calculation of the spin-2 asymptotic charges on Minkowski spacetime using Friedrich's formulation is reviewed. Finally, the tools and techniques used in the analysis of the asymptotic charges in full GR are presented in Section <ref>. §.§ Notations and conventions This article will use tensors and spinors separately in various calculations. The following indices will be used: * a, b, c,…: spacetime abstract tensorial indices. * i, j, k,…: spatial abstract indices * μ, ν,…: spacetime coordinate indices. * α, β,…: spatial coordinate indices. * 𝒜, ℬ, 𝒞, …: coordinate indices on a 2-sphere. * A, B, C,…: abstract spinorial indices. The components of a tensor T_ab with respect to a tensorial frame {_} are defined as T_ = T_ab_^a_^b. Similarly, if {, } is a spin basis defined by o^A≡_^A, ι^A≡_^A, then the components of a spinor ξ_A with respect to the spin frame {_} are given by ξ_ = ξ_A_^A. The spin basis {, } satisfies o, ι =1, where .,. is the antisymmetric product defined by ζ, λ = ζ_B λ^B = ϵ_ABζ^A λ^B. Here, ϵ_AB is the antisymmetric ϵ-spinor that can be regarded as a raising/lowering object for spinor indices. § ASHTEKAR'S AND FRIEDRICH'S FORMULATIONS OF SPATIAL INFINITY In this section, we will briefly explore the connection between Ashtekar's and Friedrich's approaches to resolving the singular behaviour of the conformal structure near spatial infinity, following <cit.>. First, the relationship between these two formulations on Minkowski spacetime will be examined. This relationship will demonstrate key differences between these formulations, and it establishes the significance of Friedrich's formulation in the calculation of the asymptotic charges near spatial infinity in the context of an initial value problem. §.§ Representations of spatial infinity in Minkowski spacetime To establish the relationship between Ashtekar's hyperboloid and Friedrich's cylinder on Minkowski spacetime, we explore different representations of spatial infinity in Minkowski spacetime, starting with the standard conformal representation of spatial infinity. §.§.§ Point compactification of Minkowski spacetime Assume (ℝ^4,) denote the Minkowski spacetime and let (x̃^μ) denotes the standard Cartesian coordinates. Then the metric can be written as = η̃_μνx̃^μ⊗x̃^ν, where η̃_μν = diag(1,-1,-1,-1). In standard spherical coordinates (t̃,ρ̃,x̃^𝒜), one has that = t̃⊗t̃ - ρ̃⊗ρ̃-ρ̃^2 , where is the standard round metric on 𝕊^2. Given that x̃^0≡t̃ and ρ̃^2 ≡ (x̃^1)^2 + (x̃^2)^2 + (x̃^3)^2, define X̃^2 ≡η̃_μνx̃^μx̃^ν = t̃^2 - ρ̃^2, then it is clear to see that spatial infinity is contained in the domain 𝒟̃ (See Figure <ref>) defined as 𝒟̃≡{ p ∈ℝ^4 | η̃_μνx̃^μ(p) x̃^ν(p) <0 }. The conformal metric = Ξ^2, with Ξ = X̃^-2 implies a point compactification of the physical spacetime (ℝ^4, ), and it can be written explicitly as = t⊗t - ρ⊗ρ - ρ^2 , where t = - t̃/t̃^2-ρ̃^2, ρ = - ρ̃/t̃^2-ρ̃^2, In this conformal representation, the conformal boundary defined by Ξ =0 can be decomposed into different sets given by ℐ̅^+≡{ p ∈ℝ^4 | t(p) >0, t(p)^2 - ρ(p)^2 =0 }, ℐ̅^-≡{ p ∈ℝ^4 | t(p) <0, t(p)^2 - ρ(p)^2 =0 }, i^0≡{ p ∈ℝ^4 | (t(p),x^1(p),x^2(p),x^3(p)) = (0,0,0,0) }, where the inverse Cartesian coordinates { x^μ} are related to {x̃^μ} by x^μ = - x̃^μ/X̃^2. We will refer to ℐ̅^+, ℐ̅^- and i^0 as future null infinity, past null infinity and spatial infinity, respectively. However, note that ℐ̅^± only denotes the parts of null infinity close to spatial infinity i^0. 
As mentioned earlier, this conformal representation maps all the points at infinite spatial distances in the physical spacetime (ℝ^4,) to the spatial infinity point i^0 in (ℝ^4, ). §.§.§ Ashtekar's hyperboloid at spatial infinity To obtain the hyperboloid at spatial infinity, start with the Minkowski metric in hyperbolic coordinates (ψ, χ, θ, ϕ) = - ψ⊗ψ + ψ^2. Here, is the 3-metric on a unit timelike hyperboloid defined as ≡χ⊗χ - cosh^2χ. Introduce the inverse radial coordinate ζ≡ 1/ψ so that = - 1/ζ^4ζ⊗ζ + 1/ζ^2. Then, consider the conformal factor H = ζ and define = H^2. Then, can be written as = - 1/ζ^2ζ⊗ζ + Observe that is singular at ζ=0 while the intrinsic 3-metric of the unit timelike hyperboloid at ζ=0 is well-defined and given by =. In the following, the timelike hyperboloid with ζ=0 will be denoted by ℋ_ℝ^4. Then (ℋ_ℝ^4, ) will be referred to as the hyperboloid at spatial infinity. §.§.§ Friedrich's cylinder at spatial infinity A different representation of spatial infinity can be obtained by defining a new time coordinate τ = t/ ρ and the rescaling = 1/ρ^2, so that = τ⊗τ + τ/ρ (τ⊗ρ + ρ⊗τ) - (1-τ^2)/ρ^2ρ⊗ρ - . From this, we can also write = Θ^2 , Θ = ρ (1-τ^2). Again, one sees that the spacetime metric is singular at ρ=0 while the intrinsic metric on the ρ=0 hypersurface is well-defined and given by = τ⊗τ - . Given the above, define the conformal extension (ℳ, ) with ℳ≡{ p ∈ℝ^4| -1 ≤τ(p) ≤ 1, ρ(p) ≥ 0 }, then introduce the following subsets of the conformal boundary (Θ =0) — see Figure <ref>. ℐ^±≡{ p∈ℳ|τ(p) =± 1 }, past and future null infinity ℐ≡{ p ∈ℳ| |τ(p)|<1, ρ(p)=0}, the cylinder at spatial infinity ℐ^±≡{ p∈ℳ|τ(p)= ± 1, ρ(p)=0 }, the critical sets at of null infinity and ℐ^0≡{ p ∈ℳ|τ(p)=0, ρ(p)=0}, where ℐ^0 is the intersection of ℐ with the initial hypersurface 𝒮_*≡{τ = 0 }. In subsequent discussions, we will refer to (ℐ,) as the cylinder at spatial infinity. §.§.§ The relation between Ashtekar's hyperboloid and Friedrich's cylinder This section aims to show that Ashtekar's and Friedrich's formulations are conformally related. To achieve this, we need to demonstrate that (ℋ_ℝ^4, ) and (ℐ, ) are conformally related, as well as the spacetime metrics and . Since the expressions for these metrics are explicit, obtaining a conformal factor relating these two constructions is straightforward. Nevertheless, dismissing this discourse as insignificant would be unwise, as it offers valuable insights into similar computations in more general spacetimes. Introducing the coordinate transformation χ = coshχτ, and substituting in , we get = cosh^2χ( τ⊗τ - ). This immediately indicates that ≡ and are conformally related, and the conformal factor is given by ω = coshχ. Thus, we have = ω^2 . In order to relate Ashtekar's and Friedrich's constructions in a neighbourhood of spatial infinity, introduce the coordinate transformation ρ = ζcoshχ, τ = tanhχ, Then, given that = H^2 and = Θ^2, the conformal relation between and is given by = ϖ^2 , where ϖ = H Θ^-1 = coshχ. Given that χ = arctanhτ, it can be shown that ϖ = 1/√(1-τ^2). From the above discussion, we conclude * The conformal factor ϖ gives rise to a conformal representation that does not reach null infinity. This is demonstrated by the fact that ϖ→∞ as τ→± 1. * The hyperboloids of constant ζ never reaches the conformal boundary. This can be seen by considering the hyperboloids of constant ζ on the (τ, ρ) plane. Given (<ref>), we have (τ, ρ) → (± 1 , ∞) as χ→±∞. 
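The closing identity above, ϖ = H Θ^{-1} = coshχ = 1/√(1-τ^2), can be checked symbolically. The following SymPy snippet is a consistency check of the algebra only, with variable names chosen for illustration.

```python
import sympy as sp

zeta, chi = sp.symbols('zeta chi', positive=True)

# Friedrich coordinates expressed through Ashtekar's hyperbolic coordinates.
rho = zeta * sp.cosh(chi)
tau = sp.tanh(chi)

# Conformal factors of the two representations: H = zeta, Theta = rho * (1 - tau^2).
H = zeta
Theta = rho * (1 - tau**2)

varpi = sp.simplify(H / Theta)
print(varpi)                                                    # should reduce to cosh(chi)
diff = (varpi - 1 / sp.sqrt(1 - tau**2)).rewrite(sp.exp)
print(sp.simplify(diff))                                        # should reduce to 0
```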
In <cit.>, the relation between Ashtekar's and Friedrich's formulations of spatial infinity is analysed for spacetimes that satisfy Ashtekar's AMSI definition <cit.>. As mentioned earlier, this definition introduces the concept of an asymptote as spatial infinity ℋ which is a generalisation of the hyperboloid at spatial infinity ℋ_ℝ^4. Specifically, we show that in order to obtain a conformal factor relating ℋ and ℐ, and the spacetime metrics in a neighbourhood of spatial infinity, one has to impose extra conditions to ensure the regularity of the spacetime metric in the sets where spatial infinity touches null infinity. Given these assumptions, the main steps are somewhat similar to the analysis on Minkowski spacetime. First, for the intrinsic metrics on ℋ and ℐ, we have the following result The metric of an asymptote ℋ satisfying Ashtekar's AMSI definition is conformally related to the standard metric of Friedrich's cylinder . For the spacetime metrics, one can construct a conformal Gaussian coordinate system in a small neighbourhood of ℋ. This conformal Gaussian gauge defines a conformal factor that gives the relation between Ashtekar's and Friedrich's representations of spatial infinity. § THE SPIN-2 ASYMPTOTIC CHARGES ON MINKOWSKI SPACETIME In this section, a brief review of the calculations of the asymptotic charges of the spin-2 field at the critical of null infinity on Minkowski spacetime is presented following <cit.>. §.§ The Minkowski spacetime in the F-gauge It will be convenient to introduce a frame basis {_} adapted to Friedrich's cylinder at spatial infinity on Minkowski spacetime, the so-called F-gauge frames. Start with the Minkowski metric given by (<ref>). It is straightforward to see that the metric on the hypersurfaces 𝒬_τ,ϱ of constant ρ and τ is the standard metric on 𝕊^2. Then, introduce the complex null frame {_+, _-} on 𝒬_τ,ρ and propagate {_+, _-} off 𝒬_τ,ρ by imposing [ _τ, _± ] =0, [ _ρ, _± ]=0. Now, the F-gauge frames {_'} and their dual {^'} can be defined as follows _' = √(2)/2( (1-τ) _τ + ρ_ρ), ^' = √(2)/2( τ - 1/ρ (1-τ) ρ), _' = √(2)/2( (1+τ) _τ - ρ_ρ), ^' = √(2)/2( τ + 1/ρ (1+τ) ρ), _' = √(2)/2_+, ^' = √(2)^+, _' = √(2)/2_-, ^' = √(2)^-, where _' is obtained from _ by contraction with the Infeld-van der Waerden symbols σ^_'. So, _' = σ^_'_. The dual frames ^± satisfy ⟨^+,_+⟩ =1, ⟨^-,_-⟩ =1. In terms of the above frame fields, the metric can be written as = ϵ_ϵ_' '^'⊗^'. §.§ The spin-2 charges in the F-gauge The goal of this section is to obtain an expression of the charges that can be evaluated at the critical sets ℐ^±. Generally, the expressions for the asymptotic charges are written in terms of frames adapted to null infinity and that satisfy the Newman Penrose gauge conditions. In subsequent discussions, these frames will be referred to as the NP-gauge frames and will be denoted by {^∙_}. Now, introduce the NP null tetrad (l^a,n^a,m^a, m̅^a) satisfying the NP gauge conditions and define ^∙_' = σ^_'^∙_ such that l^a≡^∙_', n^a≡^∙_', m^a≡^∙_', m̅^a≡^∙_'. Let W^∙_abcd denotes a Weyl-like tensor and define 𝒲^∙_abcd as 𝒲^∙_abcd≡ W^∙_abcd + i (^*W)^∙_abcd, where (^*W)^∙_abcd is the left Hodge dual of W^∙_abcd. Then, the spinorial counterpart of W^∙_abcd can be decomposed in terms of the symmetric spin-2 spinor ψ^∙_ABCD as W^∙_AA'BB'CC'DD' = -ψ^∙_ABCDϵ^∙_A'B'ϵ^∙_C'D' - ψ^∙_A'B'C'D'ϵ^∙_ABϵ^∙_CD. Now consider the asymptotic charges associated with smooth functions λ on 𝕊^2. 
For each λ, the asymptotic charges on some cross-section 𝒞 of ℐ^± are given by 𝒬 = ∫_𝒞λ𝒲^∙_abcd l^a n^b m^c m̅^d dS. From (<ref>) and (<ref>), it can be shown that the charges 𝒬 can be written as 𝒬 = - 2 ∫_𝒞λψ̅^∙_2 dS, where ψ̅^∙_2≡ψ̅^∙_0'0'1'1'. To evaluate the charges at the critical sets, one must obtain an expression for 𝒬 in terms of the F-gauge frames. The transformation from the NP-gauge frames to the F-gauge frames in Minkowski spacetime <cit.> implies that ψ^∙_2 = ψ_2. Thus, the final expression of the charges in the F-gauge is given by 𝒬 = - 2 ∫_𝒞λψ̅_2 dS. To evaluate this expression of the charges at ℐ^±, the next step is to obtain a solution for ψ̅_2 using the field equations. §.§ The spin-2 field equations The spin-2 field equation can be written as a wave equation □ψ_ABCD =0, where □≡∇_AA'∇^AA' is the D'Alembertian operator. To analyse the solutions of this equation in a neighbourhood of spatial infinity, assume that the components ψ_n of the spin-2 spinor can be expanded near ρ=0 in terms of spin-weighted spherical harmonics _nY_l,m as ψ_n = ∑_l=|2-n|^∞∑_m=-l^l a_n;l,m(τ) _2-nY_l,m + o_1(ρ), for n =0, …,4, where a_n;l,m: ℝ→ℂ. Using (<ref>) and substituting in (<ref>), one obtains second-order ordinary differential equations for the coefficients a_n;l,m(τ): (1-τ^2) ä_0;l,m + 2 (2-τ) ȧ_0;l,m + l (l+1) a_0;l,m =0, (1-τ^2) ä_1;l,m + 2(1-τ) ȧ_1;l,m +l (l+1) a_1;l,m =0, (1-τ^2) ä_2;l,m - 2τȧ_2;l,m + l (l+1) a_2;l,m =0, (1-τ^2) ä_3;l,m -2(1+τ) ȧ_3;l,m + l (l+1) a_3;l,m=0, (1-τ^2) ä_4;l,m - 2(2+τ) ȧ_4;l,m + l (l+1) a_4;l,m=0. Now, assume that the initial data (ψ_n)|_𝒮_* on 𝒮_*≡{τ=0 } can be expanded near ρ=0 as ψ_n|_𝒮_* = ∑_l=|2-n|^∞∑_m=-l^l a_n;l,m(0) _2-n Y_l,m + o(ρ). Given that the expression of the charges (<ref>) is written in terms of ψ_2, one only requires the solution for (<ref>) in order to evaluate 𝒬 at τ = ± 1. It can be shown that for l ≥ 0 and -l ≤ m ≤ l, the solution a_2;l,m is given by a_2;l,m = A_l,m P_l(τ) + B_l,m Q_l(τ), where P_l(τ) is the Legendre polynomial of order l and Q_l(τ) is the Legendre function of the second kind of order l. The constants A_l,m and B_l,m can be expressed in terms of the initial data for a_2;l,m. It is clear that the solution (<ref>) diverges logarithmically near τ = ± 1 unless B_l,m=0; a numerical illustration of the regular and divergent branches is sketched at the end of this section. To obtain a regular solution for a_2;l,m at the critical sets, the constant B_l,m is required to vanish. In fact, the following regularity conditions ensure that the charges 𝒬 are well-defined at the critical sets: the solution (<ref>) is regular at ℐ^± if and only if * ȧ_2;l,m(0)=0 for even l, and * a_2;l,m(0) =0 for odd l. These regularity conditions can be expressed in terms of freely specifiable data, as shown in <cit.>. Making use of (<ref>), (<ref>) and (<ref>) and by choosing initial data satisfying Lemma <ref> and λ = Y_l,m, the charges at ℐ^± can be written as 𝒬|_ℐ^± = 2 (l+1) Q_l+1(0) (a_2)_* for even l ≥ 0, and 𝒬|_ℐ^± = ±√(l(l+1)) Q_l(0) ( (a_1)_* - (a_3)_* ) for odd l. The main conclusions from the above discussion are: * For generic boosted initial data, the charges 𝒬 are not well-defined in the limits of spatial infinity, i.e., at the critical sets ℐ^±. * Generic boosted initial data satisfying Lemma <ref> allow us to obtain regular expressions for 𝒬 at the critical sets. * The matching of the charges is obtained naturally in this formalism. In particular, we have 𝒬|_ℐ^+ = (-1)^l𝒬|_ℐ^-.
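The distinction between the regular and the logarithmically divergent branch of a_2;l,m can be made concrete with a short numerical experiment. The following SciPy sketch is an illustration only, not the computation carried out in the cited work; it integrates the a_2;l,m equation from τ=0 towards the critical set for an even multipole, once with data on the regular branch and once with a pure Q_l admixture.

```python
import numpy as np
from scipy.integrate import solve_ivp

def legendre_rhs(tau, y, l):
    a, da = y
    # (1 - tau^2) a'' - 2 tau a' + l(l+1) a = 0, rewritten as a first-order system.
    return [da, (2*tau*da - l*(l + 1)*a) / (1 - tau**2)]

l = 2                                      # an even multipole, chosen for illustration
t_eval = np.linspace(0.0, 0.9999, 400)

# Data with a'(0) = 0 (regular branch for even l): the solution stays proportional to P_l.
regular = solve_ivp(legendre_rhs, (0.0, 0.9999), [1.0, 0.0], args=(l,),
                    t_eval=t_eval, rtol=1e-10, atol=1e-12)
# Data with a(0) = 0 but a'(0) != 0: a pure Q_l solution, diverging logarithmically.
divergent = solve_ivp(legendre_rhs, (0.0, 0.9999), [0.0, 1.0], args=(l,),
                      t_eval=t_eval, rtol=1e-10, atol=1e-12)

print(regular.y[0][-1])    # finite; close to P_l(1)/P_l(0) = -2 for l = 2
print(divergent.y[0][-1])  # grows in magnitude like log(1 - tau) as tau -> 1
```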
§ THE ASYMPTOTIC CHARGES IN FULL GR The process of the calculation of the asymptotic charges for the spin-2 field at the critical sets presented in the previous section can be extended to the full GR setting. For this, assume that (ℳ̃,) is a spacetime satisfying the vacuum Einstein field equations i.e. R̃_ab =0, where R̃_ab is the Ricci tensor associated with the Levi-Civita connection of . The conformal rescaling = Ξ^2, implies transformation laws for the physical fields e.g. the curvature tensor R̃^a_bcd, the Schouten tensor L̃_ab etc. It follows that the vacuum Einstein field equations are not conformally invariant and that the field equations implied by (<ref>) can not be analysed at the conformal boundary Ξ=0 since the conformal Ricci tensor R_ab is singular at the points where Ξ=0. If C̃^a_bcd denotes the Weyl tensor, then the Bianchi identity can be written in terms of the Levi-Civita connection associated with as ∇_a ( Ξ^-1C̃^a_bcd) =0. If one defines the rescaled Weyl tensor d^a_bcd≡Ξ^-1C̃^a_bcd, then equation (<ref>) can be written as ∇_a d^a_bcd =0. Exploiting the symmetries of the rescaled Weyl tensor implies ∇_[e d^a_|b|cd] =0. Our calculations of the asymptotic charges rely on the extended conformal field equations written in terms of a Weyl connection satisfying ∇̂_a g_bc = -2 f_a g_bc, where f_a is an arbitrary 1-form. The explicit form of these equations will not be necessary for this article, interested readers can refer to Chapter 8 in <cit.>. The extended conformal field equations yield differential equations to be solved for the -orthonormal frame fields {_}, the components of the Weyl connection coefficients Γ̂_^_, the Schouten tensor L̂_ and the rescaled Weyl tensor d^_. One significant feature of the extended conformal field equations is that they exhibit gauge freedom indicated by the fact that there are no equations to fix the conformal factor Ξ and the Weyl connection . To fix this gauge freedom, one can make use of the so-called conformal Gaussian gauge, based on conformal geodesics, that allows us to write the field equations as a symmetric hyperbolic system in which the evolution equations reduce to a transport system along the conformal geodesics. Given the field equations in this gauge, it is possible to obtain a spinorial version of these equations to be analysed near spatial infinity. The above discussion highlights the essence of conformal methods in GR. The following section will introduce Friedrich's regular initial value problem for the conformal field equations. §.§ Friedrich's regular initial value problem The purpose of this section is to briefly introduce Friedrich's formulation in full GR. As mentioned in the introduction, the aim of Friedrich's formulation is to introduce a regular initial value problem for the conformal field equations near spatial infinity. An extensive discussion of this framework is provided in <cit.>. In this framework, the spacetime (ℳ̃,) is assumed to be the development of some asymptotically Euclidean and regular <cit.> initial data (𝒮̃,, ). In the following, we specify initial data satisfying the Hamiltonian and momentum constraints as introduced in <cit.>. 
In particular, we have For any α, β∈ C^2(𝕊^2), there exists a vacuum initial data set (, K̃) such that the components of and K̃ with respect to the standard Euclidean coordinate chart { x^α} have the following asymptotics: h̃_αβ = -δ_αβ - 1/r[ ( A - α/2) δ_αβ + αx_α x_β/r^2] + O_2 (r^-2), K̃_αβ = 1/r^2[ - 1/2βδ_αβ + 1/r( - B_α x_β - B_β x_α + (B^γ x_γ) δ_αβ) + βx_α x_β/r^2] + O_1 (r^-3) where A, { B_α}_α=1^3 are some constants and r= √((x^1)^2 + (x^2)^2 + (x^3)^2). Now, let (𝒮',') be a 3-dimensional smooth compact manifold with an asymptotic end i ∈𝒮'. Then (𝒮̃,) is an asymptotically Euclidean and regular manifold if there exists a diffeomorphic map Φ from 𝒮'∖{ i } onto 𝒮̃ and a conformal factor Ω' which is analytic on 𝒮' and satisfying i) Ω'=0, Ω'=0 and Hess(Ω')=-2 ' at i, ii) Ω'>0 on 𝒮' ∖{ i}, iii) ' = Ω'^2 Φ_* on 𝒮' ∖{ i}. To apply this, define the inverse coordinates { y^α} and the conformal factor Ω' as y^α = - x^α/r^2, Ω' = ϱ^2/√(1 + A ϱ), so that the components of the conformal initial data ' = Ω'^2 and ' = Ω' can be expanded around ϱ = √((y^1)^2 + (y^2)^2+(y^3)^2)= 0 as h'_αβ = -δ_αβ - αϱ( y_α y_β/ϱ^2 -1/2δ_αβ) + O_2 (ϱ^2), K'_αβ = - β/2δ_αβ - 1/ϱ( B_α y_β + B_β y_α+ 1/2 (B^γy_γ) δ_αβ) + ( β - 4 (B^γy_γ)/ϱ) y_α y_β/ϱ^2 + O_1(ϱ), The O(ϱ) term in (<ref>) can be made to vanish by performing a coordinate transformation from { y^α} to normal coordinates { z^α} <cit.>. Then, the term O(ϱ^2) can be removed by performing a further conformal transformation Ω' →Ω≡ϖΩ', where ϖ≡ e^f, with f= 1/2 l'_αβ(i) z^α z^β. Here, l'_αβ(i) denotes the components of the Schouten tensor associated with ' in normal coordinates { z^α} evaluated at i (ϱ =0). If h'^(0)_αβ is the metric at i and |z|^2≡ h'^(0)_αβ z^α z^β, then the components of conformal initial data = ϖ^2 ' and = ϖ' can be written as h̅_αβ = - δ_αβ + O(|z|^3). K̅_αβ = - β/2δ_αβ - 1/2( B_αϑ_β + B_βϑ_α + 1/2 (B^γϑ_γ) δ_αβ) + βϑ_αϑ_β - 4 (B^γϑ_γ) ϑ_αϑ_β + O(|z|). where ϑ^α = z^α/|z|. The initial data (, ) will be referred to as the conformal normal initial data. It can be shown, using the conformal constraint equations, that the initial data for the components of the conformal Schouten tensor L̅_αβ and the electric and magnetic parts of the Weyl tensor, d̅_αβ and d̅_αβγ, respectively, are singular at |z|=0. To introduce regular initial data, one must introduce a conformal rescaling as suggested in <cit.> Ω→κ^-1Ω, with κ = O(|z|). Let ρ = |z|, then the conformal factor can be expanded around ρ=0 as Ω = ρ^2 + 1/6Π_3[Ω] ρ^3 + O(ρ^4), where Π_3[Ω] is written in terms of the angular coordinates ϑ^α, the constant A, the function α and its angular derivatives. The conformal rescaling (<ref>) introduces the conformal metric = κ^-2. Then, if {_} is an -orthonormal frame, one can show h_ = - δ_ + O(|z|^3), and K_ = O(|z|), L_ = O(|z|), d_ = O(1), d_ = O(1). Hence, the conformal initial data for the conformal field equations are regular at |z|=0. One of the advantages of using the conformal Gaussian gauge mentioned in the last section is that it implies a conformal factor Θ that can be written in terms of initial data. Following <cit.>, we have Θ = κ^-1Ω( 1 - τ^2κ^2/ω), where τ refers to the parameter along the conformal geodesics used to construct the conformal Gaussian system and ω = 2 Ω/√(|(Ω,Ω)|). In Friedrich's formulation, the blow-up of the point i to a 2-dimensional sphere is achieved by considering the bundle of normalised spin frames SU(𝒮') with structure group SU(2,ℂ) at i. 
The spin frames _(), with ∈SU(2,ℂ), can be extended to an open ball B_a(i) in 𝒮' of radius a centered at i by parallel propagation along an h-geodesic starting at i. If ρ is the affine parameter along the geodesic, then for a fixed , the propagated spin frame can be written as _(ρ, ). Now, define ℳ_a,κ, a submanifold of ℝ×ℝ×SU(2,ℂ), as ℳ_a,κ = { (τ, ρ, ) ∈ℝ×ℝ× SU(2,ℂ)| 0 ≤ρ < a, - ω/κ≤τ≤ω/κ}, with the following subsets ℐ^±_a = { (τ, ρ, ) ∈ℳ_a,κ | 0 <ρ < a, τ = ±ω/κ}, past and future null infinity, ℐ = { (τ, ρ, ) ∈ℳ_a,κ | ρ =0, -1 <τ <1 }, the cylinder at spatial infinity, ℐ^± = { (τ, ρ, ) ∈ℳ_a,κ | ρ =0, τ = ± 1 }, the critical sets of null infinity, and ℐ_0 = { (τ, ρ, ) ∈ℳ_a,κ | ρ =0, τ = 0 }. To relate the structures on the fibre bundle to the spacetime manifold (ℳ̃, g̃) satisfying (<ref>), let (ℳ, g) denote a smooth conformal extension such that i) Θ > 0 and g = Θ^2 g̃ on ℳ̃, ii) Θ = 0 and d Θ≠ 0 on ℐ^±_a. Now let 𝒩⊂ℳ denote the domain of influence of B_a(i) ∖{ i }; then the projection map π̅' from ℳ_a,κ to 𝒩 can be factored as ℳ_a,κ → ℳ'_a,κ → 𝒩, where ℳ'_a,κ≡ℳ_a,κ / U(1) is implied by the action of U(1) on SU(2,ℂ). Finally, note that the spin frames _(ρ, ) can be extended to the spacetime ℳ_a,κ by a certain propagation law along the conformal geodesics orthogonal to 𝒮_a, where 𝒮_a can be thought of as the initial hypersurface on ℳ_a,κ, i.e. 𝒮_a = { (ρ, ) ∈ℝ× SU(2,ℂ)| 0 ≤ρ < a }. The propagated spin frames _(τ, ρ, ) are determined at a point p ∈ℳ_a,κ∖ (ℐ∪ℐ^+∪ℐ^-) up to a multiplication factor corresponding to the action of U(1) on SU(ℳ). §.§ The supertranslation asymptotic charges in full GR Let d^∙_abcd denote the rescaled Weyl tensor in the NP-gauge; the spinorial counterpart d^∙_AA'BB'CC'DD' can be decomposed as follows d^∙_AA'BB'CC'DD' = - ϕ^∙_ABCDϵ^∙_A'B'ϵ^∙_C'D' - ϕ̅^∙_A'B'C'D'ϵ^∙_ABϵ^∙_CD, where ϕ^∙_ABCD is a symmetric valence-4 spinor. Given the above, the asymptotic charges associated with smooth functions f on 𝕊^2 can be written as 𝒬(f;𝒞) ≡∮_𝒞ε_2 f (𝒫^∙ - i (*𝒫^∙) + 1/2 σ^∙ abN^∙_ab), where 𝒞 is some cross-section of ℐ^± and ε_2 is its area element, σ^∙ ab is the shear tensor, N^∙_ab is the news tensor and 𝒫^∙≡ d^∙_abcd l^a n^b l^c n^d, (*𝒫^∙) ≡ (*d^∙)_abcd l^a n^b l^c n^d. Using (<ref>) and (<ref>), one gets 𝒫^∙ - i (*𝒫^∙) = - 2 ϕ̅^∙_2. Moreover, the term involving σ^∙ abN^∙_ab can be written in terms of the NP-connection coefficients <cit.>. The explicit form depends on whether we are considering the asymptotic charges at ℐ^+ or ℐ^-. In particular, σ^∙ ab N^∙_ab = 2 Δ |σ^∙|^2 -|σ^∙|^2 (3μ^∙+3μ̅^∙+γ^∙+γ̅^∙), on ℐ^+, and σ^∙ abN^∙_ab = 2 Δ |λ^∙|^2 - |λ^∙|^2 (3ρ^∙+3ρ̅^∙+ϵ^∙+ϵ̅^∙), on ℐ^-. Here, Δ≡ n^a∇^∙_a and σ^∙, μ^∙, γ^∙, λ^∙,ρ^∙, ϵ^∙ are the NP-connection coefficients defined as σ^∙≡ -Γ^∙_'^_, μ^∙≡ -Γ^∙_'^_, γ^∙≡Γ^∙_'^_, λ^∙≡Γ^∙_'^_, ρ^∙≡ -Γ^∙_'^_, ϵ^∙≡Γ^∙_'^_. To evaluate the expression of the charges (<ref>) at the critical sets ℐ^±, one must find a transformation between the NP-gauge frames and the F-gauge frames in full GR. Following <cit.>, a general transformation between a NP-gauge spin frame {^∙_} and an F-gauge spin frame {_} is parameterised by a conformal factor θ and an SL(2,ℂ) transformation matrix Λ^_: ^∙_ = θ^-1/2Λ^__, implying transformations for ϕ̅^∙_2 and the NP-connection coefficients (<ref>). The expressions for these will not be presented here. As we are interested in evaluating the expressions of the charges at ℐ^±, an asymptotic solution for the conformal field equations is analysed, given the initial data prescribed in the previous section.
Given the zero-order solution, asymptotic expansions for the conformal factor θ and the transformation matrices Λ are obtained, following <cit.>. Let ϕ_0, ϕ_1, ϕ_2, ϕ_3, ϕ_4 denote the components of the rescaled Weyl tensor in the F-gauge. The explicit transformation from the NP-gauge to the F-gauge then implies: * Contributions to 𝒬|_ℐ^± from ϕ_0, ϕ_1, ϕ_3, ϕ_4 are at higher order. * The background term σ^∙ ab N^∙_ab does not contribute to 𝒬|_ℐ^± at zero order. Hence, the asymptotic charges at ℐ^± are determined by f and the zero-order solution of ϕ_2, i.e., 𝒬|_ℐ^± = 𝒬|_ℐ^± (f, ϕ_2^(0)). § CONCLUSIONS This article addresses the matching of the asymptotic charges associated with supertranslation symmetries in the context of an initial value problem using Friedrich's formulation of spatial infinity. The results in this paper demonstrate that the zero-order solution of ϕ_2 develops logarithmic singularities at ℐ^± given the initial data prescribed in Section <ref>. Therefore, 𝒬|_ℐ^± are only well-defined if extra regularity conditions are imposed on our initial data. An upcoming article will present the explicit form of these regularity conditions. A significant consequence of this result is that identifying a global symmetry group for generic asymptotically flat spacetimes is not feasible unless these spacetimes are the development of initial data satisfying certain regularity conditions.
http://arxiv.org/abs/2307.02921v1
20230706111859
Memories from the W Boson Discovery
[ "Claudia-Elisabeth Wulz" ]
hep-ex
[ "hep-ex", "physics.hist-ph" ]
Institute of High Energy Physics of the Austrian Academy of Sciences, Nikolsdorfergasse 18, 1050 Vienna, Austria Memories from the W Boson Discovery Claudia-Elisabeth Wulz August 1, 2023 =================================== The fascinating story of a major discovery at CERN is outlined. The bold decision to convert its most powerful, and only recently inaugurated, proton accelerator to a proton-antiproton collider led to the discovery of the W and Z bosons – mediators of the weak interaction – in a record time, at the experiments UA1 and UA2. The decisive roles of Carlo Rubbia and Simon van der Meer, who received the 1984 Nobel Prize for physics, are underlined. § INTRODUCTION About 40 years ago, the time was ripe for a crucial discovery to confirm a cornerstone of the standard model of particle physics. Direct evidence for neutral currents had already been found with neutrino interactions recorded at the Gargamelle bubble chamber at CERN in 1973 <cit.>. The measurement of cross-section ratios between neutral and charged current interactions enabled the setting of limits for the Weinberg weak mixing angle, θ_W, which is in turn related to the mass of the W boson, M_W, through the formula M_W = √(πα/(√2 G_F)) · 1/sinθ_W ≈ 37 GeV/sinθ_W. By the end of the 1970s M_W was thus predicted to be approximately in the range 60 to 80 GeV. CERN's Super Proton Synchrotron (SPS) had just begun its single-beam operation in 1976. In the same year, Rubbia, McIntyre, and Cline proposed to transform a conventional accelerator into a proton-antiproton collider <cit.>, in order to be able to reach the energy threshold and the necessary luminosity to produce W and Z events. The rest is history. The decision to convert the SPS into the Super Proton-Antiproton Synchrotron (SppS) was made in 1978. Within only three years, the first collisions were recorded at a centre-of-mass energy √(s) = 540 GeV. By the end of 1982, enough data had been accumulated to enable the discovery of the W, and its announcement came in 1983. § THE SUPER PROTON-ANTIPROTON SYNCHROTRON AND THE EXPERIMENTS The transformation of the SPS into the SppS was no small feat. It relied on a key technique to produce and store dense beams of protons and antiprotons – stochastic cooling, invented by Simon van der Meer <cit.>. It reduces the energy spread and angular dispersion of particle beams. The principle is illustrated in Fig. <ref> for horizontal oscillations around the nominal orbit. A pick-up electrode samples the distance of the centre-of-gravity of a group of particles from this orbit. The corresponding signal is amplified and sent through an approximately diagonal line to a kicker. The signal path is shorter than the particle path, and the kicker is thus able to apply an electric field when the particles pass in order to correct the deviation. The Initial Cooling Experiment (ICE) demonstrated that momentum cooling could indeed be achieved. Out of 240 antiprotons that were cooled, 80 still circulated four days later. This convinced CERN to go ahead with the SppS project, and the associated construction of the Antiproton Accumulator <cit.>. Two experiments were conceived and built within a very short time, in underground areas along the SppS tunnel. UA1 <cit.> and UA2 <cit.> were both approved in 1978, and started data taking in 1981.
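Returning for orientation to the mass formula quoted in the introduction: inserting present-day values of the constants, α ≈ 1/137 and G_F ≈ 1.17×10^-5 GeV^-2, gives √(πα/(√2 G_F)) ≈ 37.3 GeV, so that illustrative values of sin^2θ_W between roughly 0.2 and 0.4 translate into M_W between about 59 and 83 GeV, in line with the range quoted above.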
While UA1 was the first “hermetic" multi-purpose detector, UA2 was more limited in scope, particularly because it did not have a muon system. Both experiments were designed to detect electrons, photons, hadrons, and neutrinos, using the “missing energy" technique for their identification. UA1 also detected muons. Figure <ref> shows the setup of the two detectors. UA1's central detector was the largest drift chamber of its time. It was surrounded by a lead-scintillator electromagnetic calorimeter. A warm dipole magnet provided a flux density of 0.7 Tesla. The instrumented return yoke served as a hadron calorimeter, made of iron-scintillator plates. Large muon chambers and forward calorimetry up to 0.2^∘ from the beam line completed the experiment. UA2 was optimized for the detection of electrons from W and Z decays, but was also well suited to measure jets. Its vertex detector was made of cylindrical tracking chambers, followed by a preshower detector consisting of a tungsten converter and a multi-wire proportional chamber for electron identification, and a highly-granular calorimeter, made of a lead-scintillator electromagnetic and an iron-scintillator hadronic section. The central calorimetry had a spherically projective geometry, and was complemented by forward calorimetry, which initially covered a region up to 20^∘ from the beam axis. It was extended to 5^∘ by the end of 1985. Toroid coils provided a magnetic field in the forward regions, where the W decay asymmetry is maximal. § THE PATH TO THE DISCOVERY OF THE W BOSON The first physics data came in 1981, with UA1 recording collisions already in July, and UA2 in December of that year. The latter data taking period was referred to as the “jet run", and resulted in about 20 μb^-1 of integrated luminosity <cit.>. During this time UA1 focused more on the tracker, whereas UA2 concentrated on calorimetry. UA2 thus presented better results on back-to-back 2-jet events at the 21^st International High Energy Physics Conference in Paris in Aug. 1982 <cit.>. In early Nov. 1982, a W → e ν_e candidate with an isolated electron and missing energy was found in UA1, but there was some hadronic activity. A few days later, a “gold-plated" candidate was found. By the end of 1982, with 28 nb^-1 integrated luminosity achieved, 6 W events were found in UA1 <cit.>, and 4 in UA2 <cit.>. The characteristic properties per event were an isolated electron with high transverse momentum, a large amount of missing energy, and a Jacobian peak at M_W/2 in the distribution of the electron transverse momentum of all events. In UA1, two search methods were used, applying on the one hand a method with a stringent electron selection, and on the other hand one called the “Saclay missing energy method" <cit.>. Both methods led to the same 5 events, and eventually 6 events from the first method made it to the UA1 W discovery paper. It should be noted that only the W decay channel into electrons contributed to the discovery. The events leading to the announcement of the discovery were the following. On 12 Jan. 1983, Carlo Rubbia and Pierre Darriulat made presentations at the Third Topical Workshop on pp Collider Physics in Rome, entitled “Jets, large p_T, etc." and “Preliminary searches for hadron jets and for large transverse momentum electrons at the SPS pp collider", respectively. These titles were obviously intentionally cryptic, but the seminars given at CERN by Carlo Rubbia and Luigi Di Lella on 20 and 21 Jan.
1983, respectively, left no doubt that the discovery of the W boson was now history. The CERN press conference of 25 Jan. 1983 contained the statement “This is indeed a major step forward in contemporary physics". The mass of the W boson was measured as depicted in Fig. <ref>. UA1 quoted a value of M_W = 81 ± 5 GeV, whereas UA2 quoted M_W = 80^+10_-6 GeV, where the latter uncertainties refer to statistical and systematic uncertainties, respectively. A beautiful W event recorded by UA1 is shown in Fig. <ref>. The stiff electron track is indicated by the white trajectory and pink arrow. The missing transverse energy distribution for W signal (black boxes) and background events is also shown. § CONCLUSIONS, OUTLOOK, AND ACKNOWLEDGMENTS The 1984 Nobel Prize for Physics was awarded to Carlo Rubbia and Simon van der Meer “for their decisive contributions to the large project which led to the discovery of the field particles W and Z, communicators of weak interaction”. I had the privilege to be part of this adventure, and am most grateful to Carlo for welcoming me to UA1 in 1982 already as a summer student, and later as a fellow. His charisma and courage to strive for the impossible have been inspiring to me, and crucial to bring this endeavour to a more than successful conclusion. He anticipated it by saying "Se sono rose, fioriranno" (referring to the W events: “If they are roses, they will flower") at the Rome conference. Since then, precise measurements of the W mass have been underway. Recently the ATLAS Collaboration has published a measurement that is compatible with the standard model <cit.>, but there are also contradictory results by the CDF Collaboration <cit.>, which have sparked interest. The future will tell us if surprises are in store. I would like to thank the organizers for a wonderful conference, as always, and in particular Boaz Klima, for inviting me to give this presentation. § REFERENCES
[NeutralCurrents] F. J. Hasert et al., Phys. Lett. B 46, 121 (1973), https://www.sciencedirect.com/science/article/abs/pii/0370269373904942 and Phys. Lett. B 46, 138 (1973), https://www.sciencedirect.com/science/article/abs/pii/0370269373904991
[RubbiaMcIntyreCline] C. Rubbia, P. McIntyre, and D. Cline, in Proc. International Neutrino Conference, Aachen 1976, editors H. Faissner, H. Reithler, and P. Zerwas, Vieweg, Braunschweig (1977), 683, https://link.springer.com/chapter/10.1007/978-3-322-90614-4_67
[VdM] S. van der Meer, CERN-ISR-PO 72-31 (1972), https://cds.cern.ch/record/312939, and Rev. Mod. Phys. 57, 689 (1985), https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.57.689
[Moehl] D. Möhl, https://inis.iaea.org/collection/NCLCollectionStore/_Public/16/042/16042314.pdf
[Koziol] H. Koziol, D. Möhl, Phys. Rep. 403-404, 91 (2004), https://www.sciencedirect.com/science/article/abs/pii/S0370157304003503
[UA1proposal] A. Astbury et al., CERN/SPSC/78-6/SPSC P92 (1978), https://cds.cern.ch/record/319371
[UA2proposal] M. Banner et al., CERN/SPSC/78-8/SPSC P93 (1978), http://cds.cern.ch/record/596804
[DenegriCERNCourier] D. Denegri, CERN Courier, https://cerncourier.com/a/when-cern-saw-the-end-of-the-alphabet/
[Repellin] J. P. Repellin, on behalf of the UA2 Collaboration, http://cds.cern.ch/record/142160, https://doi.org/10.1051/jphyscol:1982376
[WdiscoveryUA1] UA1 Collaboration, G. Arnison et al., Phys. Lett. B 122, 103 (1983), https://www.sciencedirect.com/science/article/abs/pii/0370269383911772
[WdiscoveryUA2] UA2 Collaboration, M. Banner et al., Phys. Lett. B 122, 476 (1983), https://www.sciencedirect.com/science/article/abs/pii/0370269383916052
[WeventCDS] http://cds.cern.ch/record/39467
[ATLASWmass2023] ATLAS Collaboration, ATLAS-CONF-2023-004, http://cds.cern.ch/record/2853290/files/ATLAS-CONF-2023-004.pdf
[CDF] CDF Collaboration, T. Aaltonen et al., Science 376, 170 (2022), https://www.science.org/doi/10.1126/science.abk1781
http://arxiv.org/abs/2307.02511v1
20230705100750
Diffusion Models for Computational Design at the Example of Floor Plans
[ "Joern Ploennigs", "Markus Berger" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Joern Ploennigs, Markus Berger — University of Rostock, AI for Sustainable Construction, Justus-v.-Liebig-Weg 6, 18059 Rostock, Germany AI image generators based on diffusion models have recently been widely discussed for their capability to create images from simple text prompts. But, for practical use in civil engineering, they need to be able to create specific construction plans for given constraints. Within this paper we explore the capabilities of those diffusion-based AI generators for computational design at the example of floor plans and identify their current limitations. We explain how diffusion models work and propose new diffusion models with improved semantic encoding. In several experiments we show that we can improve validity of generated floor plans from 6 % to 90 % and query performance for different examples. We identify shortcomings and derive future research challenges of those models and discuss the need to combine diffusion models with building information modelling. With this we provide key insights into the current state and future directions for diffusion models in civil engineering. diffusion models, deep learning, computational design, building information modelling Diffusion Models for Computational Design at the Example of Floor Plans June 2023 ======================================================================= § INTRODUCTION §.§ Motivation AI models are currently gaining huge traction in the public and scientific community. The recent release of AI-based art generators like DALL·E 2, Midjourney and Stable Diffusion and text generators like ChatGPT and GPT-4 are opening up the discussion of how they can help architects and planners in their daily business. These tools are already finding widespread adoption, with for example 7 % of all Midjourney queries being related to architecture <cit.>. The new AI art generators show a remarkable capability in generating images of architectural design sketches and frontal or perspective views that display a higher degree of creativity, even if they are technically just recombining the training material. This opens the question if they can be used for more challenging tasks like generating floor plans or sectional drawings, which are nowadays often manually drawn on paper or in CAD (Computer Aided Design) tools. The process of planning constructions in civil engineering is still a very manual and time-consuming process. It possesses enormous potential for boosting productivity by 50 % to 60 %, which could result in an extra 1.6 trillion USD being added annually to the value of the construction industry <cit.>. AI approaches are considered a core enabler of this productivity boost <cit.>. The recent breakthroughs of the AI art generation tools are based on the introduction of diffusion models in deep learning. They were first introduced by Sohl-Dickstein et al. in <cit.> and successively destroy information in an image during the training phase to learn how to recreate it. We will explain this process in more detail in Section <ref>. The main topic of our paper is how these diffusion models can be used for computational design of architectural drawings like floor plans and how we can improve them to do this better.
Within this paper we want to: * Evaluate the capability of diffusion models in computational design * Discuss the requirements of diffusion models for architectural planning * Propose different semantic image-based diffusion model encodings * Compare the performance and workflows of those encodings * Discuss if and how we can directly use BIM data for diffusion models * Derive the requirements and architecture for a BIM-based diffusion model * Identify future directions for diffusion models in computational design §.§ State of the Art §.§.§ Parametric Design The idea of automating design steps with technology is an old one, and various approaches developed over time from parametric design to generative and algorithmic design. Caetano, Santos, and Leitão combine them in their review under the term Computational Design <cit.>. The idea of Parametric Design is that components are created and shaped according to parameterized functions in a computer, in contrast to being designed directly <cit.>. It is founded in modelling shape attributes through mathematical functions and was already used before computers were introduced, e. g. by Antoni Gaudí in designing elements of the Sagrada Família <cit.>. Nowadays, the shape attributes are altered by computers, which usually generates a large set of candidates that are then, for example, validated structurally <cit.> or in their energy performance <cit.> to identify good ones to be presented to the user. §.§.§ Generative Design Generative Design uses generative computer approaches like evolutionary or agent-based algorithms to generate and evaluate designs <cit.>. More recent approaches are starting to adopt graph representations <cit.> and Deep Neural Networks for generative tasks <cit.>. The challenge in those approaches is that they are bound in their generation capability by the underlying generation rules and representation (e. g. genome encoding) and cannot recombine them in a creative way. Further, they are prone to generating random results outside of the requested targets due to the probabilistic or non-deterministic nature of their search <cit.>. §.§.§ Algorithmic Design Algorithmic design approaches usually describe computer programs that are used to generate building plans based on rules and logic implemented in code or code grammars <cit.>. More recent approaches combine such a shape grammar with reinforcement learning to generate energy-efficient designs <cit.>. These approaches require understanding of the code to control the generation process, which makes them less applicable in practice. §.§.§ Diffusion models Diffusion models revolutionized the area of generating art with AI. They were first introduced in <cit.> and are inspired by non-equilibrium thermodynamics. Diffusion models are based on a Deep Neural Network (DNN) architecture that does two things: First, it successively destroys the information in the image it is trained on by adding white noise in a so-called forward diffusion step. In each step it trains a DNN to be able to recover the lost information in a reverse diffusion step. The resulting image now contains some of the original information like style and colour and some newly added information which is associated with tags describing the image. As the DNN is only trained to repair a small piece of lost information in the image, the generation approach becomes more stable and solves a main challenge of traditional approaches that try to generate the whole image from scratch and used to fail in the process.
This destroy-and-repair process is then repeated at different levels of resolution. This removes the limitation that the DNN can only repair lost information, as we can now create new images by first creating them at a very low resolution and then successively repairing them until we have a high-resolution image. It also allows the DNNs for the different resolutions to specialize on different visual aspects. For example, lower-resolution layers learn things like image composition, middle layers learn object shapes, and high-resolution layers learn object surface materials and colours. This basic idea of diffusion was then used to create neural network architectures that are capable of using reverse diffusion (denoising) to create high-quality image outputs from noisy inputs <cit.>. The concept is nowadays widely applied beyond image generation in natural language processing, video and 3D model generation, pose detection, time-series modelling, etc. <cit.>. An example of this step-by-step process is shown in Fig. <ref>. Configuring these models to create the desired outputs used to be a process that required expert knowledge. More recent architecture variants like OpenAI's GLIDE model <cit.> contain an encoder, which can take an arbitrary text prompt by a user and create a valid text encoding that can be fed into the connected diffusion model. This architecture also includes a second model, which upsamples the result of the diffusion model. Most current commercial models usually use the CLIP (Contrastive Language-Image Pre-training) architecture presented in <cit.> and <cit.>. CLIP is responsible for training the encoder and determining how the text encodings are linked to image parts in the diffusion model. From this point, every platform contains slight differences in model and encoder architecture. The specific encoder used there is called unCLIP, which includes an image encoder and encodes both text and image inputs into a joint representation space from which the diffusion model can create an image <cit.>. Regarding research on use cases within architecture, Seneviratne et al. <cit.> developed a systematic grammar to utilize DALL·E 1 for generating images in the context of urban planning. They generated a large set of images demonstrating the capability to generate many realistic images, but the model exhibited limitations in generating real-world scenes with intricate details. The study in <cit.> describes a laboratory study with 17 architecture students to understand how they adopted image generation in architectural ideation. Besides the fact that they could create good results, the authors also highlight the many challenges faced, from getting unexpected results to finding good prompts — a statement supported by the large study conducted on Midjourney <cit.> showing that users usually require multiple steps to develop good prompts.
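To make the forward-diffusion step described above concrete, the following minimal Python sketch (not the code of any of the cited tools; the schedule values are illustrative assumptions) adds noise to an image according to a linear variance schedule — the injected noise is exactly the quantity the denoising network is later trained to predict:

import numpy as np

def make_noise_schedule(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule beta_t and the cumulative products alpha_bar_t."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alpha_bars

def forward_diffuse(x0, t, alpha_bars, rng=np.random.default_rng()):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I).

    x0 is an image array scaled to [-1, 1]; t is the diffusion step index.
    """
    noise = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise
    return x_t, noise  # the denoising DNN learns to predict `noise` from (x_t, t)

# Example: noising a dummy 64x64 grayscale "floor plan" at increasing steps
_, alpha_bars = make_noise_schedule()
x0 = -np.ones((64, 64))      # empty image
x0[8:56, 8:56] = 1.0         # a white rectangle standing in for walls
for t in (0, 250, 500, 999):
    x_t, _ = forward_diffuse(x0, t, alpha_bars)
    print(t, float(x_t.std()))   # the structure is progressively drowned in noise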
For example, Stable Diffusion is trained on 160 million images <cit.>, requiring 150,000 hours of training time, equivalent to an estimated 11,250 kg CO_2 <cit.>. Fortunately, the new generation of diffusion models allows refining these federated models to a new domain with a few examples, which is called fine-tuning or few-shot learning. There are different approaches for doing that. One approach is to retrain the whole diffusion model on the new dataset and refine the weights in the deep neural network (DNN) <cit.>. This creates a variant of the initial federated diffusion model that is of the same size as the original network, and it is quite computationally expensive to refine all the weights of the DNN. To avoid this, other approaches introduce new layers into the federated diffusion model and modify only their weights. This results in smaller and faster-to-train models, as the original federated model is not modified and fewer weights need to be fine-tuned <cit.>. The additional layers also enable the models to learn new capabilities, like transforming outline sketches into full images <cit.>. The final approach to fine-tuning is not to modify the DNN, but the input vector derived from the prompt. The text prompt sent to the diffusion model needs to be encoded as a number vector to be used as the input layer for the DNN. The fine-tuning changes the weights of this word embedding vector, a process called textual inversion <cit.>. The benefit of this approach is that it is very fast to train and results in a very small fine-tuned model that only contains the modified word embedding vector. §.§.§ 3D Diffusion models Generating 3D models from text prompts is currently being researched, using either a voxel representation <cit.>, a point cloud representation <cit.> or combinations <cit.>. However, due to the added complexity of the third dimension, they currently perform significantly worse than the 2D approaches. Therefore, we focus our work on 2D diffusion models and will address 3D generation in future work. § DIFFUSION MODELS FOR FLOOR PLANS §.§ Integrated workflow One goal of the paper is to understand how generative AI tools can support practitioners in their typical design workflows. For this we want to compare the workflows for the different diffusion models. For this comparison, we first want to discuss an idealized workflow that, in our opinion, fully integrates and unlocks the potential of generative AI tools in the design process. This workflow is illustrated in Fig. <ref>. In this workflow, a planner uses a text interface or speech-to-text interface to specify the keywords describing his targeted design. As alternative input the user may be able to sketch out a rough plan to provide e. g. the general shape of a more complex planning task. The AI system would refine these initial inputs by generating questions to collect additional requirements and clarify uncertainties in a dialog with the user. These requirements may encompass functional (e. g. room types, size, etc.), non-functional (e. g. materials), stylistic (e. g. shape and colours), or computational constraints (e. g. structural dynamics, energy-efficiency). It would then generate multiple design options using a federated diffusion model, validate and rank them and allow the user to select the best ones and edit areas that require more work. The final design should be exported to a BIM model like IFC that the user can easily import into his tools for further use. This workflow is based on the common workflows for diffusion models <cit.>.
We consider it also good for civil engineering, because the user has multiple ways to control the design process, from specifying and refining requirements to editing model parts. Within the next sections we will first discuss qualitatively what elements of this workflow are already feasible with the existing AI tools and then evaluate their performance in different experiments in section <ref>. From this we derive further research topics needed to realize this idealized workflow in section <ref>. §.§ Out-of-the-box Diffusion Model Many users are adopting the new AI art tools to generate architectural drawings. The study in <cit.> has shown that already 7 % of all prompts on the Midjourney platform are related to architecture, with 14,741 such prompts per day on average. Of those prompts 43 % are related to floor plans. Midjourney can be used with the Discord messaging app as it runs on the servers of the provider. This on the one hand makes it easy to use, as no local model needs to be installed and configured. On the other hand, all queries are run on the company's servers and by default are visible to other users. This poses a privacy and data security risk. Stable Diffusion is an alternative generative AI solution that can be installed and operated locally. This overhead brings users the benefit of executing their prompts in private and of retraining the model for their specific applications. The out-of-the-box approach for a user testing the technology is to use the out-of-the-box model to generate his floor plans. We call this approach our baseline approach. Fig. <ref> shows the workflow for this. The user writes a text query describing the design they would like to have and sends it to the generative AI. Out-of-the-box, the generative AI uses a federated diffusion model and generates one or multiple binary images as a response to the query. The workflow of this out-of-the-box approach has multiple differences in comparison to Fig. <ref>. We do not have the refinement loops, in which the AI model generates refining questions, or the section-wise editing of parts of the generated design, as these are not fully supported by the current technologies <cit.>. Further, the out-of-the-box approach creates only a binary image of a floor plan instead of directly generating a BIM model. In theory, this binary image could be used to automatically reconstruct a BIM model. However, this is a non-trivial problem still under research <cit.>. In practice, the user needs to manually create the BIM model from the generated image. §.§ Fine-tuned Diffusion Model The out-of-the-box approach is prone to creating faulty results, as we will confirm in the experimental section <ref>. Fig. <ref> shows an example floor plan generated by Stable Diffusion for the query "building floor plan of a one family house with a garden". The image shows some resemblance to a floor plan, but also many incorrect elements. The reason for the low-quality result is that the out-of-the-box model is trained on a wide variety of images and will respond with images with a large variance and not the expected narrow focus on floor plans. Therefore, it is necessary to fine-tune the diffusion model to correctly understand what kind of floor plans we expect. This can be done by retraining the model with new sample images and queries that resemble the expected results, as discussed in section <ref>. This modifies our workflow as illustrated in Fig. <ref>. We now have a training and a generation phase.
In the training phase we fine-tune an existing federated diffusion model with a small set of sample images and queries. We use the tuned diffusion model in the generation phase to then generate designs that are closer to the training data. §.§ Fine-tuned Diffusion Models with Semantically Encoded Rooms and Elements In previous work <cit.> it was shown that one of the main reasons why diffusion models struggle with floor plans is the missing semantic understanding of the elements in the image. While a diffusion model may learn that the term "floor plan" is correlated to rectangular lines, it does not really understand that these lines are walls enclosing rooms. The moment we add more symbols to the image like doors, windows, furniture or assets, the model will struggle understanding the meaning of these symbols and mix them up, as visible in Fig. <ref>. The question is: Can we make it easier for the diffusion model to learn these semantics? We propose a novel approach to encode these semantics in floor plans to improve the fine-tuning of diffusion models, utilizing the following three strategies: Simplify: The more different semantics a floor plan contains, the harder it is for the diffusion model to differentiate. Thus, it is important to remove unnecessary elements (furniture, assets, etc.) and focus on the relevant semantics for floor planning (walls, doors, and windows). The removed unnecessary elements may be added back in a later in-painting step in the section-editing step from Fig. <ref>. This will be our baseline for the retraining and will be called the reduced model. Encode: When we look at a floor plan, we know that thick lines are walls and the enclosed white space is a room. We also understand its semantic function (kitchen, bedroom, living room) from the symbols of appliances that it contains (oven/fridge, bed, couch). When we remove those symbols—to simplify the model—the diffusion model (and we) cannot recover these semantics. So, we need a way to re-encode this information, for example by filling the rooms with different colours like blue for a kitchen, orange for a bedroom, or green for a living room. We call this the semantic colouring of rooms. Contrast: Floor plans are often black and white (BW) and diffusion models are designed to encode colour images. Thus, we lose lots of information capacity by just training them on black and white images. We already used colours to encode room semantics in the last step. The question is if we can further simplify the floor plans by replacing the symbols for doors and windows with colours instead. We call this the semantic colouring of elements. The combined semantic encoding of rooms and elements forms the fourth model. §.§ Fine-tuning Methodology In order to fine-tune Stable Diffusion to the different semantic encodings, we implemented an algorithmic design approach in Python that generates a desired number of floor plans and then encodes each of them in the four proposed styles, together with a description prompt for each image. This ensures that all four models are on even ground during the training phase. Fig. <ref> shows example images for each of the proposed encodings. The generated prompts describe the generated floor plan and follow the common guidelines for Stable Diffusion prompts, specifically for automatic1111: Individual concepts are separated by commas and round brackets. Commas act as conceptual separators and brackets assign prompt weights to each of the concepts (always kept at 1 here).
There is also a one-token-long style descriptor that needs to be included at the beginning to invoke the trained style. Each concept descriptor contains a number, a quantity descriptor (few or many), the word associated with the element itself, and a colour. The training data was created to cover various possible floor plan designs, from different building shapes up to buildings that have special arrangements like having no windows or a specific number of doors. The individual symbologies (like doors or living rooms) are also trained through images that only feature these symbols in several orientations. The following training images were generated for each style: * Two images each of individual doors and windows, and three images each of living rooms, bathrooms and kitchens * Ten floor plans each in which an element has an altered colour specification (green) * Ten floor plans each for different building shapes: L-shaped, O-shaped, C-shaped, square, rectangle, multiple buildings * Ten floor plans each for negations, i. e. elements whose count is 0 and whose quantity descriptor is no * Ten floor plans each for the few and many cases of each kind of element * Ten floor plans each in which the count of an element is set to 2, 4 or 6 We fine-tuned a Stable Diffusion model v2.1 for each of the four encodings using textual inversion <cit.>, as preliminary trials showed that it offers the fastest training time at equal result quality for our use case in comparison to <cit.> and <cit.>. § EXPERIMENTS §.§ Validation Methodology To compare the performance of the different un-tuned and fine-tuned diffusion models, we defined a large set of 43 test prompts that evaluate different features representing practical requests as well as challenging situations for the diffusion models. The prompts fall into the following categories: Valid Plan: First, we evaluate how many of the generated images show a valid floor plan that is not cut off and contains at least some recognizable symbology. Overfit Check: Second, we check that the fine-tuned models do not overfit on the training samples by prompting for human faces, expecting that we do not get floor plans. Quantify Objects/Rooms: To see if the model can understand quantities, we query for few (less than 7) and many (more than 6) windows/doors and rooms/kitchens/bathrooms. Count of Objects/Rooms: Instead of asking for fuzzy quantities, we ask for specific counts (2, 4, 6) of windows/doors and rooms/kitchens/bathrooms. Remove Objects/Rooms: To check if the model has semantic understanding of core terms, we ask it to remove them and e. g. create a building with no windows/doors or no kitchens, etc. (Note: this is not done through negative prompts, as we want to test the ability of the models to turn structured queries into semantically sound plans.) Recolour Objects/Rooms: To see if the model has semantic understanding of the colour and term encoding that we use, we ask special queries to change the colour of doors/windows and rooms like kitchen, etc. Arrange Rooms: To see if the model has spatial understanding, we request specific shapes of the room arrangement like L-, C-, or O-shapes, rectangular shapes, and multiple separated buildings. For each test prompt and each of the five diffusion models (the un-tuned baseline and the four fine-tuned encodings), we generate 10 sample images. In order to further improve results of all generations, we also added the following negative prompt to each iteration: "gradients, blurry, fuzzy borders, reflections, lighting".
For each sample image we evaluated manually whether or not (1 or 0) the feature requested in the prompt was addressed in the generated image. This process resulted in 2,150 sample images that we evaluated. §.§ Discussion Table <ref> shows the results of our evaluation. The baseline out-of-the-box diffusion model without fine-tuning performs worst. Only 6 % of all generated floor plans were fully valid, while most floor plans were faulty in some aspect, e.g. rooms missing walls, partially unrecognizable symbols, or unclear room types (see Fig. <ref>). Ignoring these flaws, the model generates out-of-the-box something of a floor plan, with some symbols for windows and doors and some rooms recognizable. This indicates that Stable Diffusion knows the style of floor plans, but is not able to recreate it fully. Based on this out-of-the-box experience, one may get the impression that Stable Diffusion is unable to generate useful floor plans. Using fine-tuning changes that drastically, as visible in the valid plan category. The tuned models mostly manage to display a full building instead of partial cut-outs. There is also recurring symbology that avoids visual artifacts and errors that render a plan difficult to interpret or even entirely unreadable. The best-performing model here is the most visually simple one, without room colouring or complex door and window symbols, demonstrating that the Simplify strategy helps generating valid models. The overfitting test results show that the fine-tuned models are not overfitted. Even the baseline model does not always process the overfitting query correctly, and all tuned models are within the same margin of error. The model with semantic element encoding was also the one performing best on the request to remove specific elements. Most other models perform badly in this category. This shows that here our semantic Encode strategy helps the diffusion model to learn what windows and doors are. That this is not strictly associated with the colour is evidenced by the results for the recolouring prompts, which usually failed. Similarly, the results of the count prompts show that the models simply are not yet capable of counting, even after fine-tuning with many examples with correct counts. The results for quantitative prompts are significantly better. This may partially be because we only have two categories (few/many), resulting in a 50 % chance that the models get it right. This happens for example for the baseline model, which usually creates incomplete cut-off plans and, as a consequential error, gets all few-windows/doors/rooms queries correct. Yet, again the semantic encoding model performs above the random-draw chance. The prompts testing room arrangements like L-, C- or O-shapes work well enough across all models, at least in comparison to many other queries. This shows that even the baseline model has some basic understanding of arrangements in the higher diffusion model layers. This still is improved significantly with fine-tuning. Here the models without room colouring show the highest performance increase. This is likely because the walls and rooms are very uniform in colour, increasing the contrast between inside and outside, which is harder to differentiate for the multicoloured rooms and which supports our Contrast strategy.
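For illustration, the prompt construction and evaluation protocol described above can be scripted along the following lines. This is a sketch only: the style token, embedding path, checkpoint name and the exact concept-string format are placeholder assumptions of ours, and the Hugging Face diffusers library is assumed here rather than the automatic1111 interface mentioned above; the manual 0/1 judgements still have to be filled in by the evaluator.

import torch
from diffusers import StableDiffusionPipeline

# Assumed checkpoint and fine-tuned textual-inversion embedding (placeholders).
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("./embeddings/floorplan-style.bin", token="<floorplan-style>")

NEGATIVE = "gradients, blurry, fuzzy borders, reflections, lighting"

def quantity_word(count):
    # quantity descriptor: "no" for negations, "few" below 7, "many" otherwise
    return "no" if count == 0 else ("few" if count < 7 else "many")

def concept(count, word, colour):
    # one concept descriptor: number, quantity word, element word, colour (weight kept at 1)
    return f"({count} {quantity_word(count)} {word}, {colour}:1)"

def build_prompt(style_token, concepts):
    # style token first, then comma-separated weighted concepts
    return ", ".join([style_token] + [concept(*c) for c in concepts])

test_prompts = {
    "quantify_objects": build_prompt("<floorplan-style>", [(4, "windows", "blue")]),
    "remove_objects":   build_prompt("<floorplan-style>", [(0, "doors", "red")]),
    # ... one entry per test prompt (43 in total in our evaluation)
}

images, scores = {}, {}
for category, prompt in test_prompts.items():
    out = pipe(prompt, negative_prompt=NEGATIVE, num_images_per_prompt=10)
    images[category] = out.images          # 10 samples per prompt, as in the protocol
    scores[category] = [0] * len(out.images)  # filled with manual 0/1 judgements

success_rate = {c: sum(s) / len(s) for c, s in scores.items() if s}
print(success_rate)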
§ RESULTS AND FUTURE DIRECTIONS §.§ Results The results from the experiments show that fine-tuning diffusion models significantly improves validity of the generated floor plans, from 6 % for the un-tuned model to 57 %–90 % for the fine-tuned models. Tuning the models directly with semantic element encoding results in the best performance across all test prompts. This supports our three strategies Simplify, Encode and Contrast, as this model has the best trade-off between them. It is simple in containing only lines. By semantically encoding doors and windows with colours we keep them in the model without reducing simplicity, and we create images with high contrast using only four colours. Adding room colours reduces this contrast, which is why the models with room colouring perform worse. Our results also show that diffusion models still have major limitations in generating floor plans that satisfy queried requirements. The best-performing queries for quantities do not exceed a 68 % success rate. Queries around counts and recolouring usually fail completely. Examples of common errors are shown in Fig. <ref>. They can be summarized as: * Results show only partial floor plans * Requested elements are not contained in the generated result * Symbols and colour palettes are generated in incorrect context * Contextual layout instructions are ignored (e. g. kitchen left of living room) * Requested counts of elements and rooms are only met by chance These issues lead us to some future research directions that we identified. §.§ Future Directions §.§.§ Future Directions for image-based Diffusion Models Our experiments revealed several limitations of diffusion models in generating floor plans, leading to several future research directions. The success rates of prompts in our experiments are far from optimal. This is one of the reasons why diffusion models usually generate multiple samples at once and let the user choose. They do this within seconds, significantly outperforming evolutionary generative design approaches. The analysis of Midjourney queries in <cit.> showed that only 13.1 % of all prompts are upscaled and then refined over 6 iterations—which does not diminish the popularity of the platform. Here, good workflows that allow users to quickly select and refine queries for computational design are important to enable users to quickly filter out bad samples, as shown in our idealized workflow in Fig. <ref>. Furthermore, the huge body of ongoing research in diffusion models will lead to improved models soon. An important topic in this fundamental area is their capability to correctly count elements in the generated images <cit.>. But, this still requires that the diffusion model has a good understanding of the semantics of the elements in a floor plan. If it does not know what symbol represents a door, it will not be able to count it correctly. This lack of explicit semantic understanding is a main limitation of diffusion models. Our experiments showed that it is possible to tune diffusion models to improve the results, but it is not currently feasible to remove all limitations, and further research in semantic diffusion-model encodings is required. Related to that is the question of how we can decode the semantic encodings of the generated images. Approaches within the area of diffusion models are restoration models that, for example, allow repairing faces in generated images <cit.>.
In relation to that, we have the open research question of transforming generated binary images to Building Information Models, which we initially wanted as the result of the workflow in Fig. <ref>. The benefit of our semantic encoding is that our strategies of Simplify, Encode, and Contrast also create cleaner images for BIM restoration approaches like <cit.>. §.§.§ Future Directions for BIM-based Diffusion Models All these future research topics above address the limitations of image-based diffusion models in computational design. What we actually want are BIM models. This leaves us with the final essential question: Why not develop a diffusion model for BIM models and simplify the whole process? From the introduction section <ref> we can derive some requirements for such a BIM-based diffusion model: * The model needs to be multi-scale. Traditionally diffusion models work on different resolutions to repair an image successively. These layers then store different information of the original image. * The model needs to be diffusable. We need to be able to ingest noise into the model on each level to be able to train a repair network. * The model needs to be semantically labelled. We need to be able to label objects and properties as additional input to the diffusion model. BIM models easily fulfil the last requirement. Given their nature, they are well-structured and semantically rich models. Unfortunately, the first two requirements are not easily applicable to BIM models. We can of course rasterize a BIM model at different resolutions to generate binary images. But, our results show that we lose the semantic information and the well-structured nature in the process and that it is hard to recreate that. Our proposal is to research BIM-based diffusion models that keep the semantic information and utilize the hierarchical nature of BIM models. In a BIM model we have floors, zones, and objects that form a hierarchy. Now the only requirement we have to solve is the diffusability. For that we need a neural network representation of the BIM model that we can easily alter. Here we can utilize the fact that at the core of BIM models like IFC lies a graph representation like STEP. Representing BIM models in graphs was proposed before for generative approaches like <cit.>. But, we propose to research diffusion models instead of generative models for generation, as they can be trained with additional tags to generate specific designs. The closest current neural network representation for this are Graph Neural Networks (GNN), recently developed for e. g. time series forecasting <cit.>. We propose research into a multi-layer GNN model that represents on each layer the respective information of a BIM model, like which floors exist and which are close to each other, which zones exist within these floors and how they connect via walls. Diffusion approaches for graphs capable of weakening or removing those connections in the graph were recently developed <cit.>. The benefit of the approach is that we can directly generate 3D models utilizing the BIM structure. A challenge here are the data requirements for training. As discussed in the beginning, training a diffusion model usually requires huge amounts of data. Here, it may be one option to use algorithmic design approaches to generate training data like we did for the fine-tuning.
Alternatively, we as a research community may need to start an open-source initiative to collect openBIM datasets at large scale to enable this next level of BIM-based diffusion models. § CONCLUSION In this paper we evaluate the use of diffusion models for computational design at the example of floor plans. We analyse the current state of the art in the field and explain diffusion model approaches. We identify the main limitations and develop a novel approach to improve fine-tuning of diffusion models through semantic encoding. We analyse the workflows for using the diffusion models qualitatively and evaluate the performance of different diffusion-model encodings quantitatively in a large set of experiments. Our experiments show that current diffusion models already create floor plans and that results can be improved significantly with fine-tuning. Particularly, our proposed semantic encoding improved validity of the generated floor plans to 90 %. However, our experiments also unveiled several shortcomings in the current diffusion models, of which many trace back to the lack of semantic understanding. Therefore, we identified several future directions for research in this new field of computational design that may in the future automate many detailed planning steps. It is yet unclear if we can get there with image-based diffusion models or if we need to start working on BIM-based diffusion models to unlock their semantic potential. § DECLARATIONS Data Availability Statement The code for generating the training samples as well as all validation queries will be open sourced on acceptance of the paper. Funding No funding was received for conducting this study. Competing interests The authors have no financial or proprietary interests in any material discussed in this article. Author contributions JP: Ideation and manuscript, Writing Sec. 1, 2.1–2.4, 4, 5. MB: Use Case Experiments, Writing Sec. 2.5, 3. Both authors reviewed and edited all sections as well as read and approved the final manuscript.
http://arxiv.org/abs/2307.02341v1
20230705145535
Necessary and sufficient symmetries in Event-Chain Monte Carlo with generalized flows and Application to hard dimers
[ "Tristan Guyon", "Arnaud Guillin", "Manon Michel" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
Laboratoire de Mathématiques Blaise Pascal UMR 6620, CNRS, Université Clermont-Auvergne, Aubière, France. Event-Chain Monte Carlo methods generate continuous-time and non-reversible Markov processes which often display important accelerations compared to their reversible counterparts. However, their generalization to any system may appear less straightforward. In this work, we build on the recent analytical characterization of such methods as generating Piecewise Deterministic Markov Processes (PDMP) to clearly decipher the necessary symmetries the PDMP must obey from the sufficient ones, which may prove to be too restrictive in a general setting. Thus, we derive a necessary rotational invariance of the probability flows and the minimum event rate, which identifies with the corresponding infinitesimal rejection rate. Such conditions always yield a correct ECMC scheme. We then generalize such results to the case of more general deterministic flows than the translational ones. In particular, we define two classes of interest of general flows, the ideal and uniform-ideal ones, which respectively suppress or reduce the event rates. From there, we implement a complete non-reversible sampling of a system of hard dimers, thanks to the introduction of rotational flows, which are uniform-ideal and show a speed-up of up to ∼ 3 compared to the state-of-the-art ECMC/Metropolis hybrid scheme. Necessary and sufficient symmetries in Event-Chain Monte Carlo with generalized flows and Application to hard dimers Manon Michel ======================================================================================================================== § INTRODUCTION Since the introduction of the seminal Metropolis algorithm <cit.> first applied to hard-sphere systems, Markov-chain Monte Carlo methods have become a ubiquitous tool in computational physics <cit.>. They produce numerical evaluation of high-dimensional integrals, often arising from a stochastic description linked to a Boltzmann probability distribution, by a discrete sum over random configuration samples. Such samples are outputted along a Markov process which explores the system configuration space while leaving the target Boltzmann distribution invariant. The quality of such numerical approximation is directly linked to the capacity of the Markov process to produce a sequence of samples as uncorrelated as possible <cit.>. The Metropolis algorithm generates reversible Markov processes, which rely on rejections to target the correct invariant distribution. It then typically displays a diffusive dynamics, most often leading to expensive correlation times <cit.>. This situation can be further worsened in the presence of critical slowing-down phenomena at phase transitions <cit.>, where the correlation length diverges and spans the full system. Therefore, in order to alleviate finite-size effects, a large community effort is devoted to the development of MCMC methods which exhibit better performance and scalability with system sizes. While the efficient cluster methods <cit.> have been derived for lattice spin systems thanks to the spin-flip symmetry they present, the lack of a natural involution in continuous particle systems has motivated the development of non-reversible MCMC methods, known as Event-Chain Monte Carlo (ECMC) <cit.>.
ECMC generates persistent moves, where single particles are sequentially updated along ballistic trajectories forming a chain, whose sequence is controlled by some jump process, the events, which reacts to the energy changes along the deterministic flow by resampling its parameters through some Markov kernel. The scheme is then rejection-free and relies on a control by the events of the ballistic exploration to ensure the correct invariant distribution. The observed accelerations have motivated the applications of such methods to a large variety of systems, such as polymers <cit.> or continuous spins <cit.>. It led to further generalization, mainly revolving around the exploitation of more global symmetries at the events, in order to maintain the necessary control on the ballistic exploration, while avoiding backtracking as much as possible. Thus, starting with the initial pairwise symmetry <cit.>, events can now exploit translational <cit.> and rotational <cit.> symmetry or even the addition of some artificial kinetic energy <cit.>. While improving the control of the ballistic flow at the events has been under much focus, the development of different ballistic flows other than the standard translational updates has received less attention, see for example <cit.>. Indeed, more involved flow schemes may lead to an increased difficulty in computing the event time in complex systems. However, the potential accelerations could counterbalance this effect. More importantly, any ECMC application to anisotropic particles, such as dimers or, more generally, molecules, requires the introduction of sequences of rotations, in order to thermalize all degrees of freedom. Because of the lack of rotational-flow schemes, ECMC has thus up to now only been applied to tethered-type <cit.> or elastic <cit.> interactions in anisotropic particles, so that translations can still thermalize the systems, or has been coupled to some Metropolis scheme to generate rotational moves <cit.>. In addition to simulating all anisotropic models, devising non-reversible general flows may further lead to accelerations, as <cit.> observes that the more anisotropic the particles in a system are, the more important the observed acceleration is when introducing non-reversibility. Finally, the design of generalized flows more generally raises the question of the necessary fundamental conditions in ECMC versus the convenient sufficient choices, which is reminiscent of the global balance vs detailed balance question. In this work, we directly address this question, which allows us to explicitly generalize the ECMC methods to more general ballistic flows. To do so, we build on the characterization of ECMC as Piecewise deterministic Markov processes (PDMP) <cit.> to decipher precisely the necessary conditions from the sufficient choices. We do so first for the standard case of translational flows and we establish the necessary rotational invariance of the probability flows in the general case of local events and the fundamental minimum event rate, which comes down to the corresponding infinitesimal Metropolis rejection rate. This reveals that one of the state-of-the-art ECMC schemes, known as the Forward variant or more precisely the direct event kernel <cit.>, actually does not require any further restrictive conditions to be implemented, as was previously assumed. We then consider the case of general flows, where we generalize the necessary conditions of rotational invariance and minimum event rate.
It then leads us to define ideal and uniform-ideal flows, which respectively suppress the minimum event rate in general or only for uniform stationary distributions. We then find that the Markov kernels used in the translational case can be directly adapted to any generalized flow. While ideal flows may be out of reach in terms of algorithmic implementation, uniform-ideal flows can be implemented more easily, as they include for instance the translational and rotational flows. We then design and numerically benchmark such rotational flows in both isotropic and anisotropic systems, i.e. bidimensional systems of hard spheres, fundamental MCMC testbeds, and of hard dimers. In hard-disk systems, it appears that rotational flows retain good irreducibility properties independently of the refreshment rate. In hard-dimer systems, the symmetry of the dimers itself interestingly imposes strong conditions on the possible deterministic flows. Introducing completely non-reversible rotations leads, at the considered system size of 32 dimers, to a speed-up factor of up to 3 at the highest considered density (ρ=0.7) compared to the state-of-the-art ECMC scheme <cit.>. We start by introducing in Section <ref> the sampling of particle systems by a standard Metropolis algorithm. We then study in depth in Section <ref> the ECMC method with standard translational flow, directly in a PDMP framework, which allows us to derive the necessary requirements, namely the rotational invariance of the probability flows and the minimum event rate. We also illustrate the standard translational ECMC on isotropic particle systems. We then present in Section <ref> how to generalize this study to more general deterministic flows and discuss the design of rotational flows for hard disks and for anisotropic bidimensional dimers. Finally, we study the impact of the nature of the flow on the numerical performance in a system of bidimensional hard spheres and dimers in Section <ref>. § METROPOLIS SAMPLING FOR PARTICLE SYSTEMS We consider a system composed of N isotropic particles in a d-dimensional periodic box of length L. A system configuration x∈𝒮=(ℝ/(Lℤ))^dN is entirely described by all the particle positions x_i, i.e. x=(x_i)_1≤ i ≤ N. We introduce the notation x_∖ i=((x_j)_j<i,(x_j)_j>i) for the configuration of all particle positions but the i-th one. The particles all interact in a repulsive pairwise manner following the potential, U(x) = ∑_i<j u(r(x_i,x_j)), with r the periodic distance on the torus. Furthermore, the system may only admit a subset Ω⊂𝒮 of valid configurations, as is the case when dealing with particles with hard cores of diameter σ that cannot overlap. Noting β the inverse temperature, the system steady state then follows the Boltzmann measure, π(x) ∝𝟙_Ω(x)exp(-β U(x)), which we extend by continuity on ∂Ω. In the following, we will set β=1 without loss of generality. A typical Metropolis algorithm sampling from π consists of a proposal distribution q, verifying ∫_𝒮q(x'|x) dx' = 1 and commonly chosen to be a uniform increment of the position x_i of a single random sphere i in x, i.e. q(x'|x) = 1/(N h^d)∑_i=1^N 𝟙_{x_∖ i}(x'_∖ i) 𝟙_{C_1}((x_i-x_i')/h) with h∈]0,1] some step amplitude and C_1 the centered unit hypercube of ℝ^d, and of an acceptance rate a(x'|x)=min(1,π(x')/π(x)). The Markov chain generated by this Metropolis algorithm then follows the kernel, K(x,x')= q(x'|x)a(x'|x) +(1-∫_y ∈𝒮q(y|x)a(y|x) dy) 𝟙_{x}(x'), which leaves π invariant, as it satisfies the detailed balance, π(x')K(x',x)=π(x)K(x,x'). 
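As a concrete illustration of this baseline, the following Python sketch (purely illustrative, not the implementation used for the results reported here; all function names and numerical values are chosen only for the example) performs one sweep of such single-sphere Metropolis moves in the pure hard-core case, where the acceptance rate a(x'|x) reduces to accepting exactly those proposals that create no overlap.

```python
import numpy as np

def periodic_dist(a, b, L):
    """Minimum-image distance between positions a and b on the torus of size L."""
    d = np.abs(a - b)
    d = np.minimum(d, L - d)
    return np.sqrt((d ** 2).sum(axis=-1))

def creates_overlap(x, i, xi_new, sigma, L):
    """True if moving disk i to xi_new would overlap another hard core of diameter sigma."""
    others = np.delete(np.arange(len(x)), i)
    return np.any(periodic_dist(x[others], xi_new, L) < sigma)

def metropolis_sweep(x, sigma, L, h, rng):
    """N single-disk Metropolis moves: uniform displacement in a square of side h;
    for pure hard cores, a(x'|x) = min(1, pi(x')/pi(x)) accepts iff no overlap is created."""
    N = len(x)
    accepted = 0
    for _ in range(N):
        i = rng.integers(N)
        xi_new = (x[i] + h * (rng.random(2) - 0.5)) % L
        if not creates_overlap(x, i, xi_new, sigma, L):
            x[i] = xi_new
            accepted += 1
    return accepted / N

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L, sigma, h = 10.0, 1.0, 0.3
    # dilute square-lattice start, clearly free of overlaps
    x = np.array([(i + 0.5, j + 0.5) for i in range(5) for j in range(5)], dtype=float) * (L / 5)
    for _ in range(100):
        rate = metropolis_sweep(x, sigma, L, h, rng)
    print("acceptance rate of the last sweep:", rate)
```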
It is however the global balance, ∫_x' ∈𝒮π(x')K(x',x)=∫_x' ∈𝒮π(x)K(x,x'), which is the necessary condition for the invariance of π. Different methods have then been developed in order to gain efficiency by generating non-reversible processes which only obeys (<ref>), in particular the Event-Chain Monte Carlo (ECMC) method <cit.>. However, the generalization of such methods to any systems, for instance to the sampling of anisotropic particles requiring rotations, is not straightforward, whereas the Metropolis algorithm can be adapted quite easily by considering the correct potential U and by proposing steps that ensure irreducibility, as adding rotations for anisotropic particles. Aiming at devising such flow generalization for ECMC, we first present in the next section the standard ECMC method based on translations through their analytical characterization <cit.>, which makes possible to precisely separate the necessary symmetries from the imposed sufficient ones, similarly to the difference between global and detailed balances. From there, we discuss how to introduce generalized flows and especially explain how to generate rotations. § EVENT-CHAIN MONTE CARLO WITH TRANSLATIONAL FLOW Contrary to the reversible and discrete-time Metropolis scheme, Event-Chain Monte Carlo (ECMC) <cit.> generates a continuous-time and non-reversible Markov process. Considered as the infinitesimal limit of a discrete-time scheme, it then breaks detailed balance while still satisfying the global one. To do so, a lifting variable v∈𝒱 following some chosen distribution μ is introduced so that the state space 𝒮 is extended to 𝒮×𝒱 and the extended state (x,v) now follows the product measure π⊗μ. The generated non-reversible process is then composed of ballistic updates of x set by v, which is updated by a Markov kernel at domain boundaries (e.g. when two hard cores collides in a particle system) or at the events of a Poisson process whose rate depends on the infinitesimal potential U increment. Thus, in particle systems, the generated process commonly comes down to updating a sphere position along some direction, until an event stemming from a pairwise interaction or hardcore collision with another sphere arises and this latter sphere is then the one being updated, in a billiard-like fashion as can be seen on Fig. <ref>. Then, on the extended state space 𝒮×𝒱, the former rejections in a Metropolis-scheme setting are now transformed into an update of v, making the scheme rejection-free. ECMC was initially built and justified through the infinitesimal limit of a discrete-time lifted Markov chain <cit.>. The process (x_t,v_t) however identifies with a Piecewise deterministic Markov process <cit.>, which allows for a direct and efficient formalization directly at the continuous-space and -time level through its infinitesimal generator and boundary conditions. In particular, the invariance condition is straightforward. Following the PDMP characterization of ECMC detailed in <cit.>, we first explicit in terms of PDMP the ECMC process in a general setting. We then precisely characterize the necessary symmetries stemming from the invariance and the more restrictive but explicit possible sufficient symmetries. We finally illustrate it on the example of particle systems. §.§ PDMP characterization Generator and boundary kernel. First, the generated process is characterized in the bulk, i.e. for (x,v)∈Ω×𝒱, by its generator 𝒜, coding for the infinitesimal changes through the process on any observable f (i.e. 
𝒜f=lim_t→ 01tE_x,v[f(x(t),v(t))-f(x,v)]). In a general setting, the generator of the ECMC process, writes itself, 𝒜= ⟨ϕ((x,v)) , ∇⟩_Transport + λ(x,v)_rate(Q((x,v), ·)_jump-Id)_Events, where the deterministic flow ϕ is such that (ẋ(t),v̇(t)) = ϕ(x(t),v(t))=(ϕ_x(x(t),v(t)),ϕ_v(x(t),v(t))) during the ballistic phase. The rate λ(x,v) rules the event times, at which the flow is controlled through the update of lifting variable v by the Markov kernel Q to another state v'∈𝒱. First, we consider the common case where the flow ϕ(x,v)=ϕ^T(x,v)=(ϕ_x(v),0) identifies with a homogeneous translation of x uniquely set by a fixed v. We discuss the impact of more general transports on the invariance conditions in the next section. In presence of hard validity constraints on Ω, as hardcore interactions, the ECMC characterization is complete once added a boundary Markov kernel Q_b, which updates a state (x,v)∈∂Ω×𝒱 that is exiting the bulk (⟨ n(x),ϕ_x(v)⟩ > 0, n(x) being the local normal on the boundary, such states form up the set Γ^+) to an entering one (⟨ n(x),ϕ_x(v')⟩ < 0, forming up the set Γ^-). Refreshment and irreducibility. Irreducibility may require the introduction of a refreshment mechanism, which updates the lifting variable v according to its invariant distribution. This refreshment can be introduced at an exponential homogeneous rate, then appearing directly in the generator, or as a variety of different schemes, including the commonly used fixed-time refreshment, as allowed by a formalization of the refreshment as a boundary effect <cit.>. As the refreshment term self-cancels in the invariance computation, we omit it in the following. Outputting samples. As the generated process is continuous-time, observables of interest must be integrated along the full trajectory (x_t,v_t) or averaged over an unbiased collection of samples (x_τ_n,v_τ,n)_τ_n from the full trajectory. As the refreshment process, τ_n can follow an exponential law or identify with some fixed value. In particular, if the vectors ϕ_x(v) are of same norm, it is equivalent to output samples after a fixed length has been traveled over, which is the method used in most works. If ϕ_x(v) can vary in norm, one should be careful to output samples at a fixed time or at a fixed length but renormalized by the norm of ϕ_x(v). §.§ Necessary and sufficient symmetries for invariance Invariance. The PDMP framework allows for a direct derivation of the invariance of the target π distribution, ∫_Ω×𝒱𝒜f(x,v)π̣(x)μ̣(v)=0, for test functions f satisfying the boundary condition for (x,v')∈Γ^-, ∫_𝒱_Γ^+((x,v))Q_b((x,v), v')f(x,v)μ̣(v)=f(x,v'). An integration by parts on the transport term shows the necessary compensation between the transport term and the event one, giving in the fixed translation case, (⟨ϕ_x(v), ∇lnπ(x)⟩ + λ(x,v))μ(v) = ∫_𝒱λ(x,v')Q((x,v'), v)μ̣(v'), and the transport cancellation by the boundary kernel at the boundary, for (x,v)∈∂Ω×𝒱, ⟨ϕ_x(v), n(x)⟩_- μ(v) =∫_𝒱Q_b((x,v'), v)⟨ϕ_x(v'), n(x)⟩_+μ̣(v'). Similar to the global balance for discrete-time Markov chains, both conditions (<ref>) and (<ref>) are the only necessary and sufficient ones to ensure the correct invariance. As the firstly derived discrete-time MCMC schemes were detailed-balanced and obeyed stricter sufficient conditions, the invariance conditions in actual MCMC schemes based on PDMP actually satisfy more restrictive conditions. We now more precisely derive the necessary conditions stemming from (<ref>) and (<ref>). Necessary symmetries. 
First, a direct necessary condition on the choice of μ and ϕ is the one of the conservation of the probability flows, obtained by an integration over v in (<ref>) or (<ref>), 1.4{[ ∫_𝒱⟨ϕ_x(v), ∇lnπ(x)⟩μ̣(v) = 0; ∫_𝒱⟨ϕ_x(v), n(x)⟩μ̣(v) = 0 ]., i.e., 1.4{[ ∫_𝒱⟨ϕ_x(v), ∇lnπ(x)⟩_+ μ̣(v); =∫_𝒱⟨ϕ_x(v), ∇lnπ(x)⟩_- μ̣(v); ∫_𝒱⟨ϕ_x(v), n(x)⟩_+ μ̣(v); =∫_𝒱⟨ϕ_x(v), n(x)⟩_- μ̣(v) ]., Thus, the distribution and the flow set by the lifting variable v must ensure that there is a balance between probability flows increasing or decreasing along -∇lnπ, i.e. the gradient of potential U, or along n(x) on the boundary. This necessary flow symmetry (<ref>) underlines that, when the time reversibility is broken, it is replaced by another key symmetry ensured by ϕ and μ and whose origin is deeply rooted in the conservation of probability flow itself. Furthermore, such symmetry plays a central role in the transport-event compensation. First, as the right-hand term of (<ref>) is positive, the rate λ must obey the following condition for all (x,v)∈Ω×𝒱, λ(x,v) + ⟨ϕ_x(v), ∇lnπ(x)⟩≥ 0, so that the choice realizing the smallest event rate possible is, λ_M(x,v) = ⟨ϕ_x(v), -∇lnπ(x) ⟩_+. It corresponds to the rejection rate of the equivalent infinitesimal lifted Metropolis scheme. When constructing ECMC as such infinitesimal limit, this choice seems natural <cit.> but (<ref>) further shows that it is indeed the minimal possible rate. Considering now the conditions on the event Markov kernel Q, we obtain by setting λ to λ_M in (<ref>), ⟨ϕ_x(v), -∇lnπ(x)⟩_-μ(v) = ∫_𝒱v' μ(v')⟨ϕ_x(v'), -∇lnπ(x)⟩_+Q((x,v'), v). Thus, the kernel Q can be set to any kernel which leave invariant, up to a flip, the distribution ⟨ϕ_x(v), -∇lnπ(x)⟩_+μ(v). A valid explicit choice then is the direct sampling, Q_Dir((x,v'), v) = ⟨ϕ_x(v), -∇lnπ(x)⟩_-μ(v)/∫⟨ϕ_x(v), -∇lnπ(x)⟩_+μ(v), which correctly sums up to 1 thanks to the necessary flow symmetry (<ref>) and does not depend on the norm of the local gradient ∇lnπ(x). Such kernel can be understood as a direct pick from the event distribution (⟨ϕ_x(v), -∇lnπ(x)⟩_+μ(v)) combined with some flip. Such derivation is similar regarding the boundary condition (<ref>) and Q^b can be chosen as Q once updated the reference vector from -∇lnπ(x) to n(x,v). This type of kernel was already introduced in the Forward event-chain generalization <cit.> by explicitly assuming that ϕ_x(v)μ(v) is rotationally-invariant in order to derive (<ref>). As the symmetry of probability flows around the gradient (<ref>) actually is as necessary as the conservation condition itself, we show here that such direct sampling is always possible without further assumption. However, the advantage of imposing a rotationally-invariant property lies in an easier and general implementation of the kernel (<ref>), as the dependence on x only impacts the decomposition of v into a parallel component along ∇lnπ(x), and an orthogonal one, whose values can be resampled independently from x. More generally, restricting μ to some choice allowing to easily define the flip mentioned earlier allows some explicit construction we now address. Sufficient symmetries. The need to derive explicit and general choices for λ, Q and Q_b has indeed led to the development of schemes satisfying stricter sufficient conditions but allowing easier numerical implementation, as the choice of the detailed balance has historically allowed for discrete-time Markov chains. 
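Before turning to these sufficient constructions, a small numerical sketch may help fix ideas about the minimal rate λ_M. Assuming, purely for illustration, a single active particle translated at unit velocity through a smooth Gaussian pair repulsion (chosen so that λ_M admits an explicit constant upper bound), the first event time of the corresponding inhomogeneous Poisson process can be sampled by thinning; the names, the potential and the parameter values below are assumptions of the example and not part of the schemes discussed in this work.

```python
import numpy as np

# Illustrative smooth pair repulsion u(r) = EPS * exp(-r^2 / (2 W^2)); the magnitude of
# its derivative is bounded by EPS * exp(-1/2) / W, which provides the thinning bound.
EPS, W = 2.0, 1.0

def grad_pair(r_vec):
    """Gradient of u with respect to the active position, with r_vec = x_active - x_other."""
    r2 = (r_vec ** 2).sum()
    return -EPS / W ** 2 * np.exp(-r2 / (2 * W ** 2)) * r_vec

def minimal_rate(x_active, others, e):
    """lambda_M = <e, grad U(x)>_+ for unit velocity e and beta = 1."""
    g = np.zeros(2)
    for xo in others:
        g += grad_pair(x_active - xo)
    return max(0.0, float(np.dot(e, g)))

def next_event_time(x_active, others, e, rng, t_max=20.0):
    """First event of the inhomogeneous Poisson process of rate lambda_M(t) along the
    straight trajectory x_active + t e, sampled by thinning against a constant bound."""
    lam_bar = len(others) * EPS * np.exp(-0.5) / W
    t = 0.0
    while True:
        t += rng.exponential(1.0 / lam_bar)       # candidate from the bounding process
        if t > t_max:
            return np.inf                         # no event within the horizon
        lam = minimal_rate(x_active + t * e, others, e)
        if rng.random() < lam / lam_bar:          # accept with probability lambda_M / lam_bar
            return t

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    others = np.array([[3.0, 0.0], [4.0, 1.5]])
    x_active, e = np.array([0.0, 0.0]), np.array([1.0, 0.0])
    print("sampled event times:",
          np.round([next_event_time(x_active, others, e, rng) for _ in range(5)], 3))
```

The same thinning idea applies term by term once the rate is factorized along the interactions, as discussed below.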
Actually, the choice of imposing a product measure π⊗μ is already a stricter choice than the necessary condition that π is the marginal of the extended measure, but it definitely simplifies the task of designing a correct scheme. We now discuss the most commonly used schemes. First, we more generally decompose λ as, λ(x,v) = α(x,v) + ⟨ϕ_x(v), -∇lnπ(x) ⟩_+, with α(x,v) ≥ 0 some excess rate. The condition (<ref>) is now decomposed as, ⟨ϕ_x(v), -∇lnπ(x)⟩_-μ(v) = ∫_𝒱v' μ(v')⟨ϕ_x(v'), -∇lnπ(x)⟩_+Q_ϕ((x,v'), v), and, α(x,v)μ(v)=∫_𝒱v ' μ(v')α(x,v')Q_α((x,v'), v) with the Markov kernels Q_ϕ and Q_α so that, Q((x,v'), v) =⟨ϕ_x(v'), -∇lnπ(x)⟩_+/λ(x,v')Q_ϕ((x,v'), v) +α(x,v')/λ(x,v')Q_α((x,v'), v). The kernel Q_ϕ can be set to any kernel obeying (<ref>), as Q_Dir (<ref>). The Markov kernel Q_α, stemming from the unnecessary excess rate α, can be any kernel which leaves α(x,v)μ(v) invariant, as a direct or Metropolis-like sampling following α(x,v)μ(v). Common choices for α are motivated by an easier computation of the event times, e.g. by the thinning of the actual minimal rate, Q_α((x,v),v') then identifying with δ(v-v')v' (no real event is actually sampled), or by the factorization of the minimal rate along the components of the gradient -∇lnπ, e.g. along interaction terms, which may allow for some inversion sampling of the event times. Interestingly, in the case of factorized rates, writing, α(x,v) =∑_i⟨ f_i(x,v)⟩_+-⟨∑_i f_i(x,v)⟩_+ =∑_i⟨ f_i(x,v)⟩_--⟨∑_i f_i(x,v)⟩_-, with {f_i} the factors so that ∑_i f_i(x,v)=⟨ϕ_x(v'), -∇lnπ(x)⟩, we obtain by setting Q_ϕ and Q_α to the same kernel Q_Fact, as usually done, the condition, ∑_i⟨ f_i(x,v)⟩_-μ(v) = ∫_𝒱v' μ(v') ∑_i⟨ f_i(x,v')⟩_+ Q_Fact((x,v'), v), which necessarily requires the factorized symmetry for flow conservation, ∑_i ∫_𝒱μ(v)⟨ f_i(x,v)⟩_-=∑_i ∫_𝒱μ(v) ⟨ f_i(x,v)⟩_+, which comes down to (<ref>), so that it is always possible to set Q_Fact to the direct variant Q^Glob_Fact,Dir((x,v'), v) = ∑_i⟨ f_i(x,v)⟩_-μ(v)/∫∑_i⟨ f_i(x,v)⟩_+μ(v). However, it may prove interesting to consider ϕ and μ so that the condition (<ref>) is achieved in a more restrictive manner thanks to some detailed symmetry, as, ∀ i, ∫_𝒱μ(v)⟨ f_i(x,v)⟩_-=∫_𝒱μ(v) ⟨ f_i(x,v)⟩_+. This symmetry allows to treat every factor independently and to consider the following kernel, Q^Det_Fact,Dir((x,v'), v) =∑_i⟨ f_i(x,v)⟩_+μ(v)Q_i/∑_i∫⟨ f_i(x,v)⟩_+μ(v) with, Q_i((x,v'), v) = ⟨ f_i(x,v)⟩_-μ(v)/∫⟨ f_i(x,v)⟩_+μ(v). Drawing on the lines of more restrictive and detailed symmetry for the flow conservation as in (<ref>) and of a particular role played by some flip, most explicit schemes, apart the direct kernel, are building on some flip mapping F_x:𝒱→𝒱 present in flow ϕ and μ, as, 1.4{[ ⟨ϕ_x(v), -∇lnπ(x)⟩=-⟨ϕ_x(F_x(v)), -∇lnπ(x)⟩; μ(v)=μ(F^-1_x(v)) ].. Such symmetry is sufficient to meet the flow conservation condition (<ref>) in a detailed manner. To get such mapping, μ is typically chosen uniform or Gaussian so that ϕ_x(v) is rotationally invariant and F_x typically codes at the level of ϕ_x(v) for a full flip, a reflection across the potential gradient -∇lnπ or some other particular symmetry exploiting the ones of ∇lnπ as the pairwise mirror symmetry or translational invariance in particle systems. In particular, it is the existence and exploitation of such underlying symmetry which has allowed to design deterministic Markov kernels of the type δ(F_x(v)-v')v', in comparison to the direct kernels previously discussed. Further generalizations. 
The derived necessary flow symmetry (<ref>) can actually be alleviated when considering more general processes. From a general perspective, it is most of the times not possible to obtain without rejections direct samples from π, except up to some known symmetries of π as for instance done in overrelaxation moves in spin systems. Therefore, focus has been on developing Markov kernel Q only updating the lifting variable v. We here consider more general kernels potentially proposing also updates of the physical x variable. First, the invariance condition then yields, in the bulk, (⟨ϕ_x(v), ∇lnπ(x)⟩ + λ(x,v))μ(v) π(x) = ∫_Ω×𝒱λ(x',v')Q((x',v'), (x,v))π̣(x')μ̣(v'), and at the boundary, ⟨ϕ_x(v), n(x)⟩_- μ(v) π(x) =∫_∂Ω×𝒱Q_b((x',v'), (x,v))⟨ϕ_x(v'), n(x')⟩_+π̣(x')μ̣(v'). So that the conservation of the probability flows actually require, by integrating over (x,v) either in (<ref>) or (<ref>), ∫_∂Ω×𝒱⟨ϕ_x(v), n(x)⟩π̣(x)μ̣(v)=0. This condition is a global one, compared to the local (<ref>), and is non-restrictive in the case where ∂Ω=∅ or when π already admits some symmetry leading to, for any e∈𝒮, ∫_∂Ω⟨e, n(x)⟩π̣(x)=0, as is the case of the particle systems presented in the previous section, which presents the pairwise mirror symmetry (∇_x_i r(x_i,x_j)=-∇_x_jr(x_i,x_j)). Otherwise, the conservation condition imposes on the choice of the flow ϕ and distribution μ a global symmetry of the exit (⟨π(x)μ(v)ϕ_x(v),n(x)⟩_+) and entering probability flow (⟨π(x)μ(v)ϕ_x(v),n(x)⟩_-) along the boundary. Compared to reversible schemes, this condition also directly stems from the non-reversible continuous-time ballistic component of the process, which pushes it up to the boundary ∂Ω. Eventually, the conservation or equivalently symmetry condition (<ref>) could furthermore be suppressed if the Markov kernel Q (resp. Q_b) was even more general and for instance allowed to propose jumps from the bulk (resp. the boundary) to the boundary (resp. the bulk). §.§ Translational ECMC for isotropic particle systems We now present commonly-used ECMC schemes in particle systems, as illustrated in Fig. <ref>. They are characterized by: Transport. The lifting variable v identifies with a tuple (e,i) of a vector e in ℝ^d and the label i∈ of the updated sphere along said vector. Standard choices for μ is a product measure μ_e⊗μ_n, with μ_e the uniform distribution over [u_1,,u_d] (moves along the canonical basis of ℝ^d, as done in <cit.>) or over 𝕊^d-1 (e is some unit vector, as done in <cit.>) or a Normal distribution over ℝ^d (as used in <cit.>) and μ_n the uniform distribution over . The deterministic flow ϕ identifies with the translational flow ϕ^T which updates (x,v) by a translation of the i-th sphere along e, i.e. ϕ(x,(e,i)) = ((0,,0, e_i-th,0,,0), (0,0)). The conservation condition (<ref>) is met, in particular thanks to the pairwise symmetry in the case of μ_e the uniform distribution over the canonical basis. Event rate. The rates are factorized along the pairwise interactions, i.e., λ(x,v)Q((x,v), ·)=∑_j≠ iλ_ij(x,v)Q_j((x,v), ·), with, λ_ij(x,(e,i)) =β2⟨∇ u(r(x_i,x_j)), e⟩_+ =β2| u'(r(x_i,x_j))|⟨n_ji, e⟩_+, with the pairwise normalized gradient n_ij(x)=2∇_x_ir(x_i,x_j)=-n_ji(x). Such superposition allows to treat every interaction independently. It can also ease the sampling of the Poisson process, either by inversion sampling <cit.> or by thinning <cit.>, as mentioned previously. The factorized flow conservation condition (<ref>) is also met, making it possible to consider a kernel Q similar to (<ref>). Markov kernel. 
Thanks to the existence of a mapping F_x as in (<ref>) consisting in exchanging particle labels, it is possible to design deterministic kernels for (Q_j)_1≤ j≤ N, as the straight Q^S or reflection kernel Q^R <cit.>, 1.4{[ Q^S_j((x,(e,i)),(e',i'))=_{j}(i')_{e}(e'); Q^R_j((x,(e,i)),(e',i'))=_{j}(i')_{R_x(e)}(e'),; ]. with R_x(e) = -e+2⟨n_ij(x), e⟩n_ij(x). At an event involving the moving sphere i and fixed one j, such kernels updates the moving sphere from i to j, either by keeping the same direction e or reflecting it. Now, in the case of a rotationally-invariant μ_e(·), as the uniform distribution over 𝕊^d-1 or some Normal distribution over ℝ^d, the direct kernel variant (<ref>) is actually explicit <cit.>. By expressing e in spherical coordinates with n_ij(x) as a polar axis (⟨n_ij(x), e⟩ = rcosθ_1), μ_e(e) ∝ r^d-1μ_e,r(ṛ)sin^d-2(θ_1)sin(θ_d-2)θ̣_1θ̣_d-1, it yields the following event distribution, ∫_A(e)⟨e, n_ji⟩_+ μ_e(e) ∝∫_0^1 b^d-2ḅ∫_ℝ^+ r^d-1μ_e,r(ṛ) _A((r√(1-b^2)n_ij(x) + rbe-⟨n_ij(x),e⟩n_ji(x)||e-⟨n_ij(x),e⟩||,i)). Thus, in the case of the uniform distribution over the hypersphere, a pair (e,i) where e is uniformly oriented in the hyperplane Span{n_ij(x)}^⊥ and admits arcsin (b) as a polar angle to n_ji(x), follows the event distribution. In the case of a Gaussian distribution, ⟨e, n_ji(x) ⟩ should follow a χ-2 law and the other components of e the usual Gaussian one. In the following, we will consider the uniform distribution over 𝒮^d-1 and the corresponding following direct kernel, sampling from the event distribution combined with the label flip, Q_j^Dir((x,(e,i)),A) =∫_0^1 b^d-2ḅ ×_A((√(1-b^2)n_ji(x) + be-⟨n_ij(x),e⟩n_ij(x)||e-⟨n_ij(x),e⟩||,j)), which transfers the ballistic move from the sphere i to the j-th one, keeps the same direction in the hyperplane orthogonal to n_ij(x) but, compared to the deterministic kernels, update the polar angle directly from its steady-state distribution, via a sampling of b∈[0,1], typically by inversion sampling, as, b = ν^1/(d-1), with ν∼Unif(0,1). Finally, out of completeness, in the case of the uniform distribution over the canonical basis, a direct sampling is enforced by, Q_j^Can((x,(e,i)), e')=_{u_k}_k(e') ×⟨n_ji(x),e'⟩_+/∑_k=1^d|⟨n_ij(x),u_k⟩|e', which can be sampled by inversion sampling. Boundary kernel. A boundary kernel Q_b is necessary to take into account the hardcore interactions. As detailed in <cit.> for particle system and more generally in the previous subsection, the boundary kernel can be chosen to identify with Q. Simply put, hardcore interactions can be understood as soft ones but with diverging λ_ij→∞. § GENERALIZED FLOW IN ECMC §.§ Necessary and sufficient symmetries for invariance Since its first developments, the translational flow ϕ^T has always been the one implemented in ECMC sampling. However, the PDMP formalism allows for more general flow: if the ODE d(x_t,v_t)/dt=ϕ((x_t,v_t)) with differentiable drift ϕ defines a càdlàg (in time) (ϕ_t)_t≥0 deterministic flow satisfying the semigroup property (ϕ_t+s=ϕ_t∘ϕ_s), then the drift part of the generator (<ref>) is well defined. It may however impact the conservation condition and the balance required for invariance between the transport and event/boundary contributions. Invariance. 
Indeed, in the general case, the integration by part on the transport formally generates additional terms as the divergence term ∇·ϕ and derivative ⟨ϕ, ∇μ(v)⟩, see for example <cit.> and as we derive and extend the invariance condition to constrained domain as imposed by hardcore interactions below. In the following, with a slight abuse of notation when denoting ∇_v f(x,v), the gradient operator concerns only the continuous part of v, which is the only part of the lifted variable that could be subject to the flow ϕ(x,v), However the flow ϕ may depend both on the continuous and discrete parts of the lifting variable v, e.g. when v denotes the velocity and label of the active particle. By integration by parts, ∫_Ω×𝒱⟨ϕ(x,v), ∇ f(x,v)⟩π̣(x)μ̣(v)= ∫_∂(Ω×𝒱)π̣(x)μ̣(v)f(x,v) ⟨ n(x,v), ϕ(x,v) ⟩ -∫_Ω×𝒱π̣(x)μ̣(v) f(x,v) [ ⟨∇, ϕ(x,v)⟩ +⟨ϕ(x,v), ∇ln(π(x)μ(v)) ⟩]. As ϕ(x,v) = (ϕ_x(x,v), ϕ_v(x,v)) , it leads to the invariance conditions, for (x,v)∈Ω×𝒱, (⟨∇_x, ϕ_x(x,v) ⟩+⟨∇_v, ϕ_v(x,v) ⟩ +⟨ϕ_x(x,v),∇lnπ(x)⟩+⟨ϕ_v(x,v),∇lnμ(v)⟩ +λ̃(x,v))μ(v) = ∫_Ω×𝒱λ̃(x,v')Q̃((x,v'), v)μ̣(v') and, for (x,v)∈∂(Ω×𝒱), ⟨ n(x,v), ϕ(x,v)⟩_- μ(v)= ∫_∂(Ω×𝒱)Q̃_b((x,v'),v) ⟨ n(x,v'), ϕ(x,v')⟩_+μ̣(v'), Necessary symmetries. Thus, considering a more general flow ϕ than the translation ϕ^T a priori imposes different necessary conditions as the ones derived in Sec. <ref>, a priori leading to different rates λ̃ and event kernel Q̃ from the ones derived in Section <ref>, in order to maintain the correct target distribution π⊗μ invariant. The condition on the boundary (<ref>) is however not impacted, e.g. the boundary kernel Q̃_b obey the condition (<ref>) but for a generalized flow and can be set to a valid choice Q from the translational case. First, looking at the conservation condition as obtained in (<ref>), it is now updated to, 1.4{[ ∫_𝒱(⟨∇, ϕ(x,v) ⟩+⟨ϕ_x(x,v), ∇lnπ(x)⟩; + ⟨ϕ_v(x,v), ∇lnμ(v)⟩) μ̣(v) = 0; ∫_𝒱⟨ϕ(x,v), n(x,v)⟩μ̣(v) = 0 ]., and the positivity condition imposing the minimal value of the rates λ̃, as obtained in (<ref>), is now updated to, 1.4[ λ̃_M(x,v) =[⟨ϕ_x(v), -∇lnπ(x)⟩-⟨∇, ϕ(x,v) ⟩; + ⟨ϕ_v(x,v), -∇lnμ(v)⟩]_+. ] As for the translational case, the necessary conditions are imposing the same structure for the choice of rates λ̃ and kernel Q̃, as detailed in (<ref>), (<ref>) and (<ref>) but updated to the minimal-rate condition (<ref>). As done for the translational case, we now discuss more restrictive but explicit classes of valid flows, which obey some further symmetries than the necessary ones. Ideal flows. Interestingly, it is possible to decrease the event rate minimal value to 0, i.e. to design a flow such that no events are needed to target the correct invariance distribution. A null minimal rate is equivalent to, ⟨∇, ϕ(x,v) ⟩≥ ⟨ϕ_x(v), -∇lnπ(x)⟩ + ⟨ϕ_v(x,v), -∇lnμ(v)⟩. Once combined with the conservation condition (<ref>), the inequality is necessarily tight, so that, 1.4[ ⟨∇, ϕ(x,v) ⟩=; ⟨ϕ_x(v), -∇lnπ(x)⟩ + ⟨ϕ_v(x,v), -∇lnμ(v)⟩, ] which can be summarized as, ⟨∇, ϕ⟩=⟨∇ℋ , ϕ⟩. with ℋ(x,v) = -(lnπ(x) + lnμ(v)), the Hamiltonian of the extended system. It is analogous to a continuity equation and the incompressibility nature of the probability current e^-ℋϕ, leaving the Boltzmann distribution stationary. Any ideal flow then presents a compensation between an increase/decrease of the energy of the extended state along ∇ℋ and an expansion/compression of the infinitesimal volume element, in order to preserve the total probability to be in this volume element. 
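This criterion is easy to probe numerically. The following sketch (a toy illustration only, with one-dimensional Gaussian π and μ, finite differences and invented names) evaluates ⟨∇, ϕ⟩ - ⟨∇ℋ, ϕ⟩ pointwise and shows that the rotation in the (x,v) plane, i.e. the Hamiltonian flow of this toy ℋ, satisfies it, while a bare drift along v does not.

```python
import numpy as np

# Toy extended state (x, v) in one dimension with standard Gaussian pi and mu, so that
# H(x, v) = (x^2 + v^2)/2 up to an additive constant.
def H(z):
    return 0.5 * (z[0] ** 2 + z[1] ** 2)

def ideal_defect(phi, z, h=1e-5):
    """Finite-difference value of <div, phi> - <grad H, phi> at z; it vanishes for an ideal flow."""
    div = 0.0
    for k in range(2):
        zp, zm = z.copy(), z.copy()
        zp[k] += h
        zm[k] -= h
        div += (phi(zp)[k] - phi(zm)[k]) / (2 * h)
    gradH = np.array([(H(z + np.array([h, 0.0])) - H(z - np.array([h, 0.0]))) / (2 * h),
                      (H(z + np.array([0.0, h])) - H(z - np.array([0.0, h]))) / (2 * h)])
    return div - float(np.dot(gradH, phi(z)))

hamiltonian_flow = lambda z: np.array([z[1], -z[0]])   # divergence-free and orthogonal to grad H
bare_drift = lambda z: np.array([z[1], 0.0])           # transport with no compensation

rng = np.random.default_rng(2)
for z in rng.normal(size=(3, 2)):
    print("defect (Hamiltonian flow, bare drift):",
          round(ideal_defect(hamiltonian_flow, z), 8), round(ideal_defect(bare_drift, z), 8))
```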
A particular case then is the one of ideal solenoidal flows which are volume-preserving on top of being probability-preserving. They then obey the following condition, ⟨∇, ϕ⟩= 0 =⟨∇ℋ , ϕ⟩. Naturally, setting ϕ as the Hamiltonian flow ϕ^H = (∇_vℋ,-∇_xℋ) appears as a sufficient choice to ensure the correct Boltzmann invariant distribution with a null event rate, but its implementation is not tractable in almost all cases of interest. More generally, Hamiltonian flows belong to a larger class of solenoidal flows, which can be characterized as, ϕ(x,v) = A∇ℋ, with A some skew-symmetric matrix. Uniform-ideal flows. In the case of a non-ideal flow, sampling the Poisson process with minimal event rate (<ref>) may imply more involved computations than in the translational case (<ref>). Computations may be eased by considering different degree of superposition, e.g., 1.4[ (a) ⟨∇(ln(π(x)μ(v)) + ∇, -ϕ(x,v) ⟩_+; (b) ⟨∇(ln(π(x)μ(v)),-ϕ(x,v) ⟩_+ +⟨∇, -ϕ(x,v) ⟩_+; (c) ⟨∇lnπ(x),-ϕ(x,v) ⟩_+ +⟨∇lnμ(v),-ϕ(x,v) ⟩_+; +⟨∇, -ϕ(x,v) ⟩_+; (d) … , ] and implementing a thinning procedure on the different terms. While these different factorization schemes (<ref>) may indeed ease the numerical implementations, it may be more advantageous to introduce a generalized flow to decrease the averaged number of events, down to the ideal case. However, any flow choice involving a precise knowledge of ∇lnπ will likely be intractable, except in some particular simple cases. Then, a trade-off lies in designing flows which do not increase the event rate past its physical component (<ref>), which is null for instance in presence of only hardcore interactions. We then call such flows uniform-ideal and they must obey the following condition for any (x,v), ⟨∇, ϕ(x,v) ⟩ + ⟨∇_vlnμ(v), ϕ_v(x,v) ⟩ =0. It indeed identifies with the condition (<ref>) for uniform π as in hardcore models, making such flow ideal in this case. Any ECMC with such generalized flow then comes down to the well known case of translational flow, as the conditions of flow conservation (<ref>) and minimal event rate (<ref>) now identify with the translational ones, respectively (<ref>) and (<ref>), so that one can respectively set λ̃ and Q̃ to λ and Q as obtained in the translational case. Similarly to the ideal case, solenoidal flows as, ϕ(x,v) = A(∇_xf(x),∇_vlnμ(v)), with A some skew-symmetric matrix and f:𝒮→𝒮 some function obeys (<ref>). It identifies with a rotation in the case of μ Gaussian and f(x)=x^2. In the case of a uniform distribution μ, any solenoidal flow is valid, which includes rotational flows. Hybrid flows. The PDMP framework allows for a great flexibility. Thus, a direct generalization consists in implementing sequentially different flows, as a translational flow and a rotational one, at the price of an additional lifting variable indicating the considered flow (and corresponding event rate λ̃ and Markov kernel Q̃ if necessary). This lifting variable, and hence the implemented flow, can then be resampled at the events or refreshments. § ECMC WITH ROTATIONAL FLOW As rotational flows constitute a large class of valid uniform-ideal flows, we describe in the following their implementations in hard-sphere and hard-dimer systems. The generalization to soft interactions is straightforward, as rotational flows, being uniform-ideal, can directly be implemented in the presence of interactions or an external potential by replacing the null event rate λ to ⟨ϕ(x,v),-∇lnπ(x)⟩_+. 
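In the hard-core case, the practical difference between flows lies in the computation of the event (contact) times along the deterministic trajectory. As a rough illustration of what changes when the flow is rotational rather than translational, the following sketch locates the first contact time of a disk driven on a circular arc with a static disk; the absence of periodic boundaries, the brute-force scan refined by bisection and all names are simplifying assumptions of the example, and an actual implementation would rather solve the contact condition in closed form.

```python
import numpy as np

def rotate(p, angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([c * p[0] - s * p[1], s * p[0] + c * p[1]])

def arc_position(x0, center, t, speed=1.0):
    """Position at time t of a disk driven counter-clockwise at constant speed on the
    circle of radius |x0 - center| around `center`."""
    radius_vec = x0 - center
    omega = speed / np.linalg.norm(radius_vec)
    return center + rotate(radius_vec, omega * t)

def first_contact_time(x0, center, y, sigma, t_max=20.0, n_grid=4000):
    """First t in (0, t_max] with |x(t) - y| = sigma (hard-core contact, diameter sigma),
    located by a coarse scan and refined by bisection; None if no contact occurs."""
    gap = lambda t: np.linalg.norm(arc_position(x0, center, t) - y) - sigma
    ts = np.linspace(0.0, t_max, n_grid)
    vals = np.array([gap(t) for t in ts])
    idx = np.where((vals[:-1] > 0) & (vals[1:] <= 0))[0]
    if len(idx) == 0:
        return None
    lo, hi = ts[idx[0]], ts[idx[0] + 1]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gap(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    sigma = 1.0                                   # hard-core diameter
    x0 = np.array([0.0, 0.0])                     # active disk center
    center = np.array([0.0, -4.0])                # rotation center, radius l = 4 sigma
    y = np.array([-1.0, 0.5])                     # static disk center
    t_star = first_contact_time(x0, center, y, sigma)
    print("contact time:", t_star)
    print("gap at contact:", np.linalg.norm(arc_position(x0, center, t_star) - y) - sigma)
```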
§.§ Isotropic particle systems: Sphere systems Instead of the translational flow as described in Section <ref> parameterized by a lifting variable of the form (e,i)∈𝕊^1×{1,,N}, we now consider rotational ones, parameterized by the lifting variable v=(e, l,i)∈𝕊^1×ℝ×{1,,N}. As for translations, e corresponds to the infinitesimal update applied to x_i but which adds up to a rotation around a center located at a distance |l| from x_i. As rotational flows are uniform-ideal, we can use the choice for λ and Q_b described in Section <ref>. Therefore we set, 1.4{[ ϕ_x(v) = (0,…,e_i-th,…,0); ϕ_v(x,v)=(1lAe,0,0); λ(x,θ,v)=0; μ(e,l,i)= 12π_𝕊^1(e)ν(l)1/N_{1,…,N}(i); ]. , with the infinitesimal rotation matrix A=([ 0 -1; 1 0 ]). Such flow is indeed uniform-ideal, as ⟨∇, ϕ⟩=0. It is noteworthy that there is no constraint on the distribution of the auxiliary variable l, setting the distance between the rotation center and x_i but also the anticlockwise or clockwise nature of the rotation. Thus, one can set ν(l) so that the generated rotations are always clockwise and at a fixed distance |l| or so that ν(l) is some Gaussian distribution. Such flexibility offers then many possibilities to optimize the flow to a given problem. Regarding the kernel Q_b, as was previously mentioned, it can be set to a deterministic kernel (<ref>) or to the direct one (<ref>). As the flow ϕ_x is directly parameterized in terms of an infinitesimal vector e, the implementation is straightforward as it comes down to resampling e. Here again the choice of l does not have an impact. Finally, we study the numerical performances of implementing non-reversible rotations in sphere systems in Section <ref>. §.§ Anisotropic particle systems: Dimer systems The hard-dimer system, modeled closely after hard spheres, provides a simple setting to study the rich behavior and the phase transitions of anisotropic particles <cit.>. Such particles then require some degree of rotation and we now devise a ECMC schemes generating them in a completely non-reversible manner. Dimer configurations. We now consider a system of N hardcore dimers of radius σ in a periodic L-box. Their configurations are described by the following state variable, (𝐱,θ)∈ (ℝ/(Lℤ))^2N× (ℝ/(πℤ))^N, where x is the sequence of dimer centers and θ the one of angles. We furthermore introduce the following notations for the position of the two monomers constituting one dimer, x_i^+= x_i + σδ_∥(θ_i) , x_i^-= x_i - σδ_∥(θ_i). with δ_∥(θ) = (cos(θ),sin(θ))∈𝕊^1 for θ∈[0,π]. The valid configurations forming up the set Ω satisfy the following constraint for every pair of dimers, d(x^±_i , x_j^±) > d_pair=2σ, Now, similarly to the sphere case, in order to devise an ECMC sampling, we first extend the configurations to (x,θ,v) with v=(e_+,l_⊥,i)∈𝕊^1×ℝ×{1,…,N} where i is the label of the updated dimer, e_+ the update vector of the +-monomer and l_⊥ is the variable setting, respectively to e_+ and x^+_i,x^-_i, the update vector e_- of the --monomer, so that (<ref>) is obeyed, i.e., e^- = e^+ +l_⊥δ_⊥(θ_i), with δ_⊥(θ)=(-sinθ,cosθ). It is naturally possible to devise a scheme directly at the dimer level, updating x_i and θ_i. However, introducing the infinitesimal update of each monomer allows to address the dimer constraint more easily and to later directly use the kernels devised in the translational case in Section <ref>. Furthermore, contrary to sphere systems, the dimer constraint imposes a correlated update of each of its monomer and it leads to requirements on the choice of l_⊥. 
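A minimal sketch of this geometry, with invented helper names and arbitrary numerical values, is given below: it checks the hard-core constraint between two dimers on the periodic box and illustrates that unit monomer velocities making opposite angles ±α with the dimer axis indeed differ by a vector along δ_⊥(θ_i), as imposed by the rigidity of the dimer.

```python
import numpy as np

def delta_par(theta):
    return np.array([np.cos(theta), np.sin(theta)])

def delta_perp(theta):
    return np.array([-np.sin(theta), np.cos(theta)])

def monomers(x, theta, sigma):
    """Monomer positions x^+ and x^- of a dimer with center x, angle theta and radius sigma."""
    return x + sigma * delta_par(theta), x - sigma * delta_par(theta)

def periodic_dist(a, b, L):
    d = np.abs(a - b)
    d = np.minimum(d, L - d)
    return float(np.sqrt((d ** 2).sum()))

def dimer_pair_valid(xi, thi, xj, thj, sigma, L):
    """Hard-core constraint between two dimers: all four monomer-monomer distances
    must exceed d_pair = 2 sigma."""
    mi, mj = monomers(xi, thi, sigma), monomers(xj, thj, sigma)
    return all(periodic_dist(a, b, L) > 2 * sigma for a in mi for b in mj)

if __name__ == "__main__":
    L, sigma = 10.0, 0.5
    xi, thi = np.array([2.0, 2.0]), 0.3
    xj, thj = np.array([4.0, 2.5]), 1.2
    print("pair satisfies the hard-core constraint:", dimer_pair_valid(xi, thi, xj, thj, sigma, L))

    # Unit monomer velocities making opposite angles +alpha / -alpha with the dimer axis:
    # their difference is along delta_perp(theta), as required by the rigid-dimer constraint.
    alpha = 0.7
    e_plus, e_minus = delta_par(thi + alpha), delta_par(thi - alpha)
    diff = e_plus - e_minus
    cross = diff[0] * delta_perp(thi)[1] - diff[1] * delta_perp(thi)[0]
    print("|e_+|, |e_-|, (e_+ - e_-) x delta_perp:",
          np.linalg.norm(e_plus), np.linalg.norm(e_minus), round(cross, 12))
```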
For instance, setting l_⊥ to some non-zero fixed value cannot give a valid scheme, as each dimer can only rotate in a single manner, as enforced by (<ref>), and flows are not conserved and (<ref>) not satisfied, as there is no label flip symmetry. The scheme could be correct at the cost of introducing both clockwise and anticlockwise rotations and proposing, at collision, to backtrack the rotation of the updated dimer. Therefore we now derive the required distribution of l in order to ensure a completely non-reversible process. As it is necessary for ergodicity to rotate the dimers, the monomer update vectors e_+ and e_- should cover the whole hypersphere 𝕊^1 and therefore we choose for e_+, and by symmetry for e_-, the uniform distribution on 𝕊_1, another possible choice being a Gaussian distribution for instance. Doing so, it constrains the possible choice of l_⊥ to 0 (i.e. translation) or to a fixed value depending on θ_i, which we obtain, parameterizing for the i-th dimer, e_+ = (cos(θ_i+α),sin(θ_i+α)), α∈[0,2π], as, e_- = (cos(θ_i-α),sin(θ_i-α)), l_⊥ = 2sinα. Thus, any normed choice (e_+,e_-) corresponds to a rotation of the dimer around a center placed on the dimer bisector, hence the derivation of what we call bisector rotational flow, as now detailed and illustrated on Fig. <ref>. Bisector rotational flow. Out of simplicity, we now set v=(α,i)∈[0,2π]×{1,…,N}, so that the ECMC sampling by bisector rotational flow is completely characterized by, 1.4{[ ϕ_x(θ,v) = (0,…,cosαδ_∥(θ_i)_i-th,…,0); ϕ_θ(v) = (0,…,sinα/σ_i-th,…,0); ϕ_v(x,v)=0; λ(x,θ,v)=0; μ(α,i)= 1/2π_[0,2π](α)1/N_{1,…,N}(i); ]. , and any choice of Q_b corresponding to a valid one for sphere systems, up to the mapping of α to e_+ in (<ref>) or e_- in (<ref>), depending on which monomer is involved in a collision, and then mapping back from the resampled e'_± to α'. From an algorithmic point of view and collision computations, it may however be easier to work with a parametrization directly based on the rotation center. We then set v=(l, γ,i)∈ℝ×{-1,1}×{1,…,N} to fix the rotation center (x_i + lδ_⊥(θ_i)) and sign (γ=+1,-1 for respectively anticlockwise, clockwise), so that, 1.4{[ ϕ_x(θ,v) = (0,…,γ l√(l^2+σ^2)δ_∥(θ_i)_i-th,…,0); ϕ_θ(v) = (0,…,γ√(l^2+σ^2)_i-th,…,0); ϕ_v(x,v)=0; λ(x,θ,v)=0; μ(l,γ,i)= 1/πσ/l^2+σ^21/2_{-1,1}(γ)1/N_{1,…,N}(i); ]. , leading to the following mapping to e_± for the i-th dimer, {1.4[ e_+= γ√(l^2+σ^2)(lδ_∥(θ_i)+σδ_⊥(θ_i)); e_-=γ√(l^2+σ^2)(lδ_∥(θ_i)-σδ_⊥(θ_i)) ]., which can be used to implement Q_b, as previously mentioned. For both parametrization, checking that the flow is uniform-ideal (⟨∇,ϕ⟩=0) is straightforward. As can be seen from (<ref>), the correct distribution of rotation center is not trivial and especially not uniform in space. Indeed, the distance l between the dimer and rotation centers actually follows a Cauchy distribution, so that high values for l are not rare and should be treated numerically with care. In particular, the situation l=0 correspond to self-rotations, while the value l→±∞ codes for translations. Thus all possible single-dimer rotations with normed monomer velocities can indeed be generated. We study the numerical performances of implementing non-reversible rotations in dimer systems in Section <ref>. § NUMERICAL EXPERIMENTS §.§ Rotational flows in hard-sphere systems We first study rotational flows for isotropic hard spheres, as presented in Section <ref>. 
We consider a system of N spheres of radius σ in a periodic box of size L at the vicinity of the liquid-hexatic phase transition <cit.>, setting ρ = N πσ^2 / L^2= 0.708. In order to estimate the relaxation time, we consider the decorrelation of the a priori slowest observable, here being Ψ_6 the global orientational parameter <cit.>. It is defined as the average over each sphere j of the average of the angles of the bonds ϕ_jk with its n_j nearest neighboring particles, i.e., Ψ_6 = 1/N∑_j=1^N 1/n_j∑_k=1^n_jexp(6iϕ_jk) . We compare two translational schemes, respectively along +x,y with a straight kernel () and along all directions in 𝕊^1 with a direct kernel (), and two rotational schemes generating all clockwise rotations of radius ℓ=4σ, respectively with a straight kernel () and a direct one (). At fixed-time refreshments, all of the lifting variables are drawn again according to their invariant measure. As all schemes have constant velocity, it corresponds to a fixed refreshment distance d_ref. First, it clearly appears on Fig. <ref> that the scheme requires some fine-tuning in order to set d_ref to an optimal value, as already observed <cit.>. On the contrary, all other schemes (, , and ) show better decorrelation performance, as the refreshment time increases. This is a particularly interesting behavior for the scheme, as it becomes a purely deterministic process as d_ref→∞ but apparently still yields an ergodic and efficient scheme. Then, while the impact of the refreshment time greatly depends on the considered scheme, the scaling of the integrated autocorrelation time of Ψ_6 with the number of particles N appears similar for all four schemes, as shown in Fig. <ref>. Finally, regarding the schemes generating rotational flows, we found that no gain in performance was achieved by introducing switches at events between clockwise and anti-clockwise rotations. Also, the choice of the rotation radius ℓ appears not critical, as can be seen on Fig. <ref>, where only the scheme shows some ℓ-dependence and a slightly better performance than all the other fine-tuned schemes for ℓ∼ L. §.§ Bisector rotational flows in hard-dimer systems We now study the efficiency of rotational flows in the anisotropic case of the hard-dimer system, as described in section <ref>. The system consists in N=32 dimers of monomer radius σ in a periodic box of size L. Building on the Monte Carlo simulations of <cit.> and molecular dynamics simulations of <cit.>, simulations are performed at densities ρ= 2N πσ^2 / L^2= 0.5 and ρ = 0.7, the latter being slightly under the observed phase transition. As commonly studied in isotropic-nematic transition <cit.>, we now consider the relaxation of S_2, the scalar parameter for the two-dimensional nematic order, S_2 = 1/N∑_j=1^N ( 2 cos^2(θ_j - θ̅) -1 ) ∈ [0,1], where θ̅ is the angle of the director, defined modulo π. This observable has an intrinsic dimer-flip symmetry θ_j ↦θ_j+π. Other observables, with or without the dimer-flip symmetry, can be constructed with the θ_j's. They are presented in Appendix <ref> and show a consistent behavior. In <cit.>, the Ψ_6 of the equivalent sphere configuration is investigated, but we found that the global order of the θ_i's shows longer correlation times at the considered densities. We consider two different algorithms: The scheme with a bisector rotational flow, as introduced in Sec. <ref>, and the one with a flow switching at event between translational and bisector rotational flows. 
Both are studied combined either with the straight and direct kernels. We compare them to the algorithm introduced in <cit.> consisting in a ECMC scheme combined with Metropolis rotation proposals. In more details, the ECMC flow consists in translational moves, with all directions allowed. At each refreshment, a random number n of dimer rotations are also proposed. Their number n follows a geometric distribution of parameter p_chain and the proposed rotation angle increments are uniformly picked over [-Δ, Δ], Δ being tuned at Δ=π for ρ=0.5 and Δ=0.4 for ρ=0.7. Following <cit.>, p_chain is taken so that on average there is as many ECMC events as there are Metropolis proposals and both kind counts as one event computation. In addition to the straight kernel used in <cit.>, we also simulate the scheme with a direct kernel. At fixed-time refreshments, all lifting variables are fully drawn again according to their invariant measure. As all schemes have constant velocity, it corresponds to a fixed refreshment distance d_ref. As shown in Fig. <ref>, the autocorrelation function for S_2 appears close to an exponential decay, and in particular does not exhibit different time scales. Therefore, the integrated autocorrelation time is a well-suited measure of the decorrelation and is displayed in Fig. <ref>. For the dilute case ρ=0.5, the scheme shows an optimal value at finite d_ref whereas the and schemes reach an optimum for vanishing refreshment d_ref→∞. The direct schemes outperform their straight counterparts and the fastest decorrelation is achieved with the algorithm (speed-up of ∼ 3.0 compared to ECMC+Met, the achieves a speed-up of ∼ 2.1). Similarly to the rotations in sphere systems, the and schemes with straight kernel turn into purely deterministic processes as d_ref→∞. At the denser density of ρ = 0.7, similar observations can be drawn, regarding the direct versions of the algorithms better performing and the d_ref-tuning requirement only for the algorithms. However, the scheme now performs worse than the scheme (speed-up of ∼ 0.74)) and the algorithm shows the fastest decay (speed-up of ∼ 2.6). Such behavior could be explained from the observed collision loops in this denser regime, where a rotating dimer i will collide with another dimer j, which will collide in turn with i and so on. There, introducing a flow switch between rotations and translations seems to control the appearance of such loops. § CONCLUSION Since the introduction of the ECMC method, important algorithmic and analytical efforts have been made towards the generalization of such schemes to any systems. Drawing on that line, the PDMP characterization of ECMC schemes allows us to derive the necessary fundamental symmetries of the flow conservation and minimum event rate, as required for invariance of the correct distribution π. It then makes clearly appear the more restrictive but sufficient symmetries most commonly imposed in order to derive explicit schemes. From there, the introduction of generalized flow is carried on in the same manner, by a careful characterization of necessary and sufficient symmetries. In particular, we define two classes of flows of interest: the ideal one, where events are suppressed, and the uniform-ideal ones, for which one can use the valid choices of rates and kernels, as developed and known for the usual translational flow. 
In particular, the uniform-ideal class includes rotational flows and we devise ECMC schemes generating completely non-reversible rotations in bidimensional hard-sphere and hard-dimer systems. While the rotations can be simulated in many different ways in hard spheres, the contact constraint of the two monomers of each single dimer imposed strict requirements on the possible rotations, which reduce to the single bisector rotational flow in the case of normed monomer velocities. Importantly, the numerical simulations indicate, at least for meaningful observables and high densities, that for rotational flows the refreshment mechanism does not require some crucial fine tuning and may actually not be necessary for ergodicity, even for schemes then completely deterministic. This is reminiscent of numerical observations of similar behavior for translational ECMC with a reflection kernel in hard-sphere systems <cit.>. Proving such property constitutes a major mathematics challenge, as the question of proving ergodicity of ECMC schemes with refreshment for hard-particle systems is already not trivial <cit.> and an ergodicity proof with no refreshment, but with some conditions on the potential, is only known for the so-called Zig-Zag process <cit.>. In lack of such proof, a refreshment distance much larger than L could however be chosen. Such possibility of achieving ergodicity without any refreshment would indicate how the chaotic nature of the considered system itself can be harnessed into producing efficient and robust non-reversible schemes. Thus, comparing the performances observed for the hard-sphere system and hard-dimer one, the latter presenting an additional nematic order, we already can observe how the intrinsic stochastic nature of the system impacts the optimization of the algorithm scheme, as it appears that hard-sphere system, apart from the robustness to the refreshment fine tuning, does not particularly benefit from more involved schemes here close to the transition point. However, one can note that the scheme at ℓ≃ L is better performing, while actually generating some very gradual sequential update of the direction. This behavior is reminiscent of sequential translational straight schemes where the direction is incremented by a fixed value at refreshments, for example in tethered dimer systems <cit.>. On the other hand, in the hard-dimer case, speed up are observed, even for the small considered system size and at the dilute and denser densities. Interestingly, at the denser density, the scheme allows to control the collision loops, which arises from the strong orientational correlation and may slow down the process when considering only bisector rotations. Thus, it appears that developing efficient non-reversible algorithms require a fine understanding of the correlations present in the system. An interesting prospect is to consider the implementations of rotational flows to other anisotropic-particle systems and to study the constraint on such flows as imposed by the particle shape itself and the impact of such anisotropy and the consequential correlations on the algorithm efficiency. Finally, further work should also deal with the efficient numerical implementation of rotational flows, with in addition soft interactions. The challenge here lies in the simulations of the event times, which should computationally benefit from a thinning procedure. 
More generally, the question of the parallelization of such rotational flows in an efficient manner is an important one as to the applications of such methods to large-scale simulations of molecular systems, all the more as recent parallelizing efforts have been made concerning the standard translational ECMC <cit.>. The parallelization of rotational flows could also allow to study the scaling and evolution of the observed speed-up in hard-dimer for large system sizes, as well as characterize more precisely their critical behavior. § ACKNOWLEDGMENTS All the authors are grateful for the support of the French ANR under the grant ANR-20-CE46-0007 (SuSa project). A.G. is supported by the Institut Universtaire de France. Computations have been performed on the supercomputer facilities of the Mésocentre Clermont Auvergne. 31 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Metropolis et al.(1953)Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller]Metropolis_1953 author author N. Metropolis, author A. W. Rosenbluth, author M. N. Rosenbluth, author A. H. Teller, and author E. Teller, title title Equation of state calculations by fast computing machines, https://doi.org/10.1063/1.1699114 journal journal The journal of chemical physics volume 21, pages 1087 (year 1953)NoStop [Frenkel and Smit(2001)]Frenkel_2001 author author D. Frenkel and author B. Smit, https://doi.org/10.1016/B978-0-12-267351-1.X5000-7 title Understanding molecular simulation: from algorithms to applications, Vol. volume 1 (publisher Elsevier, year 2001)NoStop [Janke(2002)]Janke_2002 author author W. Janke, title title Statistical analysis of simulations: Data correlations and error estimation, https://www.physik.uni-leipzig.de/ janke/Paper/nic10_423_2002.pdf journal journal Quantum Simulations of Complex Many-Body Systems: From Theory to Algorithms volume 10, pages 423 (year 2002)NoStop [Levin and Peres(2017)]Levin_2017 author author D. A. Levin and author Y. Peres, https://pages.uoregon.edu/dlevin/MARKOV/ title Markov chains and mixing times, Vol. volume 107 (publisher American Mathematical Soc., year 2017)NoStop [Hohenberg and Halperin(1977)]Hohenberg_1977 author author P. C. Hohenberg and author B. I. Halperin, title title Theory of dynamic critical phenomena, https://doi.org/10.1103/RevModPhys.49.435 journal journal Rev. Mod. Phys. volume 49, pages 435 (year 1977)NoStop [Swendsen and Wang(1987)]Swendsen_1987 author author R. H. Swendsen and author J.-S. Wang, title title Nonuniversal critical dynamics in monte carlo simulations, https://doi.org/10.1103/PhysRevLett.58.86 journal journal Physical review letters volume 58, pages 86 (year 1987)NoStop [Wolff(1989)]Wolff_1989 author author U. Wolff, title title Collective monte carlo updating for spin systems, https://doi.org/10.1103/PhysRevLett.62.361 journal journal Physical Review Letters volume 62, pages 361 (year 1989)NoStop [Bernard et al.(2009)Bernard, Krauth, and Wilson]Bernard_2009 author author E. P. Bernard, author W. Krauth, and author D. B. Wilson, title title Event-chain monte carlo algorithms for hard-sphere systems, https://doi.org/10.1103/PhysRevE.80.056704 journal journal Phys. Rev. E volume 80, pages 056704 (year 2009)NoStop [Michel et al.(2014)Michel, Kapfer, and Krauth]Michel_2014 author author M. Michel, author S. C. Kapfer, and author W. 
Krauth, title title Generalized event-chain monte carlo: Constructing rejection-free global-balance algorithms from infinitesimal steps, https://doi.org/10.1063/1.4863991 journal journal The Journal of Chemical Physics volume 140, pages 054116 (year 2014), https://arxiv.org/abs/https://doi.org/10.1063/1.4863991 https://doi.org/10.1063/1.4863991 NoStop [Kampmann et al.(2015)Kampmann, Boltz, and Kierfeld]Kampmann_2015 author author T. A. Kampmann, author H.-H. Boltz, and author J. Kierfeld, title title Monte carlo simulation of dense polymer melts using event chain algorithms, https://doi.org/10.1063/1.4927084 journal journal The Journal of chemical physics volume 143, pages 044105 (year 2015)NoStop [Michel et al.(2015)Michel, Mayer, and Krauth]Michel_2015 author author M. Michel, author J. Mayer, and author W. Krauth, title title Event-chain monte carlo for classical continuous spin models, https://doi.org/10.1209/0295-5075/112/20003 journal journal EPL (Europhysics Letters) volume 112, pages 20003 (year 2015)NoStop [Nishikawa et al.(2015)Nishikawa, Michel, Krauth, and Hukushima]Nishikawa_2015 author author Y. Nishikawa, author M. Michel, author W. Krauth, and author K. Hukushima, title title Event-chain algorithm for the heisenberg model: Evidence for z≃1 dynamic scaling, https://doi.org/10.1103/PhysRevE.92.063306 journal journal Physical Review E volume 92, pages 063306 (year 2015)NoStop [Harland et al.(2017)Harland, Michel, Kampmann, and Kierfeld]Harland_2017 author author J. Harland, author M. Michel, author T. A. Kampmann, and author J. Kierfeld, title title Event-chain monte carlo algorithms for three-and many-particle interactions, https://doi.org/10.1209/0295-5075/117/30001 journal journal EPL (Europhysics Letters) volume 117, pages 30001 (year 2017)NoStop [Michel et al.(2020)Michel, Durmus, and Sénécal]Michel_2020 author author M. Michel, author A. Durmus, and author S. Sénécal, title title Forward event-chain monte carlo: Fast sampling by randomness control in irreversible markov chains, https://doi.org/10.1080/10618600.2020.1750417 journal journal Journal of Computational and Graphical Statistics volume 29, pages 689 (year 2020), https://arxiv.org/abs/https://doi.org/10.1080/10618600.2020.1750417 https://doi.org/10.1080/10618600.2020.1750417 NoStop [Klement and Engel(2019)]Klement_2019 author author M. Klement and author M. Engel, title title Efficient equilibration of hard spheres with newtonian event chains, https://doi.org/10.1063/1.5090882 journal journal The Journal of chemical physics volume 150, pages 174108 (year 2019)NoStop [Vanetti et al.(2017)Vanetti, Bouchard-Côté, Deligiannidis, and Doucet]Vanetti_2017 author author P. Vanetti, author A. Bouchard-Côté, author G. Deligiannidis, and author A. Doucet, title title Piecewise-deterministic markov chain monte carlo, https://arxiv.org/abs/1707.05296 journal journal arXiv preprint arXiv:1707.05296 (year 2017)NoStop [Bierkens et al.(2020)Bierkens, Grazzi, Kamatani, and Roberts]Bierkens_20 author author J. Bierkens, author S. Grazzi, author K. Kamatani, and author G. Roberts, title title The boomerang sampler, in https://proceedings.mlr.press/v119/bierkens20a.html booktitle Proceedings of the 37th International Conference on Machine Learning, series Proceedings of Machine Learning Research, Vol. volume 119, editor edited by editor H. D. III and editor A. Singh (publisher PMLR, year 2020) pp. pages 908–918NoStop [Höllmer et al.(2022)Höllmer, Maggs, and Krauth]Hollmer_2022 author author P. Höllmer, author A. Maggs, and author W. 
Krauth, title title Hard-disk dipoles and non-reversible markov chains, https://aip.scitation.org/doi/abs/10.1063/5.0080101 journal journal The Journal of Chemical Physics volume 156, pages 084108 (year 2022)NoStop [Klement et al.(2021)Klement, Lee, Anderson, and Engel]Klement_2021 author author M. Klement, author S. Lee, author J. A. Anderson, and author M. Engel, title title Newtonian event-chain monte carlo and collision prediction with polyhedral particles, https://doi.org/10.1021/acs.jctc.1c00311 journal journal Journal of Chemical Theory and Computation volume 17, pages 4686 (year 2021)NoStop [Monemvassitis et al.(2023)Monemvassitis, Guillin, and Michel]Monemvassitis_2023 author author A. Monemvassitis, author A. Guillin, and author M. Michel, title title PDMP characterisation of event-chain monte carlo algorithms for particle systems, https://doi.org/10.1007/s10955-023-03069-8 journal journal Journal of Statistical Physics volume 190, pages 66 (year 2023)NoStop [Davis(1984)]Davis_1984 author author M. H. Davis, title title Piecewise-deterministic markov processes: a general class of non-diffusion stochastic models, https://doi.org/10.1111/j.2517-6161.1984.tb01308.x journal journal Journal of the Royal Statistical Society: Series B (Methodological) volume 46, pages 353 (year 1984)NoStop [Davis(1993)]Davis_1993 author author M. H. A. Davis, https://doi.org/10.1201/9780203748039 title Markov models and optimization, series Monographs on Statistics and Applied Probability, Vol. volume 49 (publisher Chapman & Hall, London, year 1993) pp. pages xiv+295NoStop [Kapfer and Krauth(2015)]Kapfer_2015 author author S. C. Kapfer and author W. Krauth, title title Two-dimensional melting: From liquid-hexatic coexistence to continuous transitions, https://doi.org/10.1103/PhysRevLett.114.035702 journal journal Phys. Rev. Lett. volume 114, pages 035702 (year 2015)NoStop [Wojciechowski et al.(1993)Wojciechowski, Brańka, and Frenkel]Wojciechowski_1993 author author K. W. Wojciechowski, author A. C. Brańka, and author D. Frenkel, title title Monte Carlo simulations of a two-dimensional hard dimer system, https://doi.org/10.1016/0378-4371(93)90033-Z journal journal Physica A: Statistical Mechanics and its Applications volume 196, pages 519 (year 1993)NoStop [Cugliandolo et al.(2017)Cugliandolo, Digregorio, Gonnella, and Suma]Cugliandolo_2017 author author L. F. Cugliandolo, author P. Digregorio, author G. Gonnella, and author A. Suma, title title Phase coexistence in two-dimensional passive and active dumbbell systems, https://doi.org/10.1103/PhysRevLett.119.268002 journal journal Phys. Rev. Lett. volume 119, pages 268002 (year 2017)NoStop [Bernard and Krauth(2011)]Bernard_2011 author author E. P. Bernard and author W. Krauth, title title Two-step melting in two dimensions: first-order liquid-hexatic transition, https://doi.org/10.1103/PhysRevLett.107.155704 journal journal Physical review letters volume 107, pages 155704 (year 2011)NoStop [Strandburg(1988)]Strandburg_1988 author author K. J. Strandburg, title title Two-dimensional melting, https://doi.org/10.1103/RevModPhys.60.161 journal journal Rev. Mod. Phys. volume 60, pages 161 (year 1988)NoStop [Eppenga and Frenkel(1984)]Eppenga_1984 author author R. Eppenga and author D. 
[Bierkens et al. (2019)] J. Bierkens, G. O. Roberts, and P.-A. Zitt, "Ergodicity of the zigzag process," Ann. Appl. Probab. 29, 2266 (2019). [Qin et al. (2022)] L. Qin, P. Höllmer, and W. Krauth, "Direction-sweep Markov chains," J. Phys. A: Math. Theor. 55, 105003 (2022). [Li et al. (2021)] B. Li, S. Todo, A. C. Maggs, and W. Krauth, "Multithreaded event-chain Monte Carlo with local times," Comput. Phys. Commun. 261, 107702 (2021).
Supplementary Materials: Necessary and sufficient symmetries in Event-Chain Monte Carlo with generalized flows and Application to hard dimers
§ ADDITIONAL DIMER OBSERVABLES
The dimer angles θ_j can be viewed in two ways, with either π- or 2π-periodicity. If opposite directions can be discriminated, the global orientational order is encoded in the average of the directions δ_∥(θ_j) = (cosθ_j, sinθ_j) of all dimers, namely the polarization, p = 1/N ∑_j=1^N δ_∥(θ_j). If the system has an intrinsic dimer-flip symmetry θ_j ↦ θ_j+π, the relevant observable stems from the study of the isotropic-nematic phase transition <cit.> in three dimensions. The general matrix order parameter of such a transition reduces in dimension 2 to a complex number, z_2 = 1/N ∑_j=1^N exp(2iθ_j) = S_2 exp(2i θ̅), where θ̅ is the angle of the director, defined modulo π, and S_2 is the scalar parameter for the two-dimensional nematic order, S_2 = 1/N ∑_j=1^N ( 2 cos^2(θ_j - θ̅) - 1 ) ∈ [0,1]. In the following, z_2 is called the nematic vector. Its expression is reminiscent of the Ψ_6 order parameter for hard spheres and plays the role of an average in a π-periodic setting. The square box imposes the ensemble averages ⟨ z_2 ⟩ = 0 and ⟨𝐩⟩ = 0, which give a rudimentary check for ergodicity, with and without dimer-flip symmetry, respectively. Experiments (Fig. <ref>, <ref>) show that the polarization decorrelates more slowly than the nematic vector, but the need to flip all dimers is an artificial constraint in the case where opposite directions are equivalent. The scalar S_2 decorrelates faster than z_2, but the behavior of the algorithms is the same for all observables. At low density, the scheme has an artificial speed-up on the decorrelation of 𝐩 because the proposals are uniform on all angles and the θ_j → θ_j+π transform is always accepted.
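The observables above are straightforward to evaluate from a configuration of dimer angles. The following NumPy sketch (hypothetical function and array names; angles assumed in radians) illustrates the computation of the polarization 𝐩, the nematic vector z_2, the scalar order S_2 and the director angle θ̅; it is an illustration only, not code from the paper.

import numpy as np

def dimer_order_parameters(theta):
    """theta: array of N dimer angles (radians)."""
    # Polarization: average of unit direction vectors (2*pi-periodic view).
    p = np.array([np.cos(theta).mean(), np.sin(theta).mean()])
    # Nematic vector: average of exp(2 i theta) (pi-periodic view).
    z2 = np.exp(2j * theta).mean()
    S2 = np.abs(z2)                  # scalar nematic order in [0, 1]
    director = 0.5 * np.angle(z2)    # director angle, defined modulo pi
    return p, z2, S2, director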
http://arxiv.org/abs/2307.03119v1
20230706164540
Learning Multi-Agent Intention-Aware Communication for Optimal Multi-Order Execution in Finance
[ "Yuchen Fang", "Zhenggang Tang", "Kan Ren", "Weiqing Liu", "Li Zhao", "Jiang Bian", "Dongsheng Li", "Weinan Zhang", "Yong Yu", "Tie-Yan Liu" ]
cs.AI
[ "cs.AI", "cs.LG", "cs.MA" ]
These authors contributed equally to this research. This work was conducted during the internship of Yuchen Fang and Zhenggang Tang at Microsoft Research Asia. Affiliations: Shanghai Jiao Tong University; University of Illinois Urbana-Champaign; Microsoft Research Asia (corresponding author). Order execution is a fundamental task in quantitative finance, aiming to finish acquisition or liquidation of a number of trading orders for specific assets. Recent advances in model-free reinforcement learning (RL) provide a data-driven solution to the order execution problem. However, the existing works always optimize execution for an individual order, overlooking the practice in which multiple orders are specified to be executed simultaneously, resulting in suboptimality and bias. In this paper, we first present a multi-agent RL (MARL) method for multi-order execution considering practical constraints. Specifically, we treat every agent as an individual operator trading one specific order, while keeping the agents communicating with each other and collaborating to maximize the overall profits. Nevertheless, the existing MARL algorithms often incorporate communication among agents by exchanging only the information of their partial observations, which is inefficient in the complicated financial market. To improve collaboration, we then propose a learnable multi-round communication protocol in which the agents communicate their intended actions with each other and refine them accordingly. It is optimized through a novel action value attribution method which is provably consistent with the original learning objective yet more efficient. The experiments on the data from two real-world markets have illustrated superior performance with significantly better collaboration effectiveness achieved by our method. Learning Multi-Agent Intention-Aware Communication for Optimal Multi-Order Execution in Finance Tie-Yan Liu August 1, 2023 =============================================================================================== § INTRODUCTION In quantitative finance, the primary goal of the investor is to maximize the long-term value through continuous trading of multiple assets in the market <cit.>. The process consists of two parts: portfolio management, which dynamically allocates the portfolio across the assets, and order execution, whose goal is to fulfill a number of acquisition or liquidation orders specified by the portfolio management strategy within a time horizon and close the loop of investment <cit.>. Figure (<ref>) presents the trading process within one trading day. The trader first updates the target portfolio allocation following some portfolio management strategy. Then, the orders shown in the red dotted zone need to be executed to accomplish the actual portfolio adjustment.
We focus on the order execution task in this paper, which aims to finish executing multiple orders simultaneously during the time horizon while maximizing the overall execution profit. The challenge of order execution lies in two aspects. First, the number of orders changes with the portfolio allocation from day to day, which requires the order execution strategy to be scalable and flexible enough to support a large and varying number of orders. Second, the cash balance is limited, and all acquiring operations consume the limited cash supply of the trader, which can only be replenished by liquidating operations. The lack of cash supply may lead to missing good trading opportunities, which urges one to balance acquisition and liquidation and to avoid conflicted trading decisions that would cause cash shortage and poor trading performance. Figure (<ref>) illustrates a typical example of a conflicted trading decision that results in cash imbalance and low execution utility. The execution of acquisition orders is forced to be postponed due to cash shortage until the liquidation orders supplement the cash, leading to missing the best acquisition opportunity. We have observed similar evidence in real-world transactions and provide more analysis in the experiments. Although there exist many works on order execution, few of them manage to address the above challenges. Traditional financial model based methods <cit.> and some recently developed model-free reinforcement learning (RL) methods <cit.> only optimize the strategy for single-order execution without considering the practice of multi-order execution, which results in low trading efficacy. Moreover, it is not applicable to directly transfer the existing methods to multi-order execution, since utilizing only one agent to conduct the execution of multiple orders leads to a scalability issue, as the action space of one individual agent grows exponentially with the number of orders. It is also not flexible enough for the execution of a varying number of orders <cit.>. To resolve the above challenges, we treat multi-order execution as a multi-agent collaboration problem and utilize a multi-agent reinforcement learning (MARL) method where each agent acts to execute one individual order, which factorizes the joint action space for scalability to varying order numbers <cit.>, and all agents collaborate to achieve higher overall profits with fewer decision conflicts. However, the existing MARL solutions for general multi-agent collaboration are not suitable for the multi-order execution environment, where the actions of one agent can significantly influence the others through the shared cash balance, further affecting the final performance as the financial market changes drastically. The mainstream methods, which build a communication channel among agents to promote collaboration <cit.>, only allow agents to share information about their partial observations, which cannot directly reflect the intentions of the agents, thus harming the collaboration performance. <cit.> models the intentions of agents as imagined future trajectories and shares the intentions through communication, in order to achieve better collaboration performance. However, it requires predicting environment dynamics and future actions of others to generate the imagined trajectory, which is intractable, especially in the noisy and complicated financial market.
Also, the agents therein cannot respond to the intentions of others, i.e., change the actions they intended to take after receiving the messages, until the next timestep, which makes the intention message less helpful for achieving good coordination at the current timestep. In this paper, we propose a novel multi-round intention-aware communication protocol for communicating the intended actions among agents at each timestep, which is optimized through an action value attribution method. Specifically, we first model the intention of agents as the actions they intend to take at the current timestep <cit.> and share the intentions between agents through communication. Then, during multiple rounds of communication, the agents are allowed to coordinate with each other and achieve a better balance between acquisition and liquidation, whereafter the last intended actions are taken as the final decisions of the current timestep. Note that the intended actions of agents should be gradually refined for better collaboration during multi-round communication. To ensure this, we propose a novel action value attribution method to directly optimize and refine the intended actions at each round, which is proved to be unbiased with respect to the original decision-making objective yet more sample efficient. Our contributions are three-fold as discussed below. * We illustrate the necessity of simultaneous optimization of all orders in the multi-order execution task. To the best of our knowledge, this is the first work formulating this problem as a multi-agent collaboration task and utilizing a MARL method to solve it. * We formulate the intention of agents as the actions they intend to take and propose a novel action value attribution method to optimize the intended actions directly. We are the first to explicitly refine the intended actions of agents in a cooperative scenario, which may shed some light on research on general multi-agent reinforcement learning. * Our proposed intention refinement mechanism allows agents to share and modify their intended actions before the final decisions are made. The experiment results on two real-world stock markets have demonstrated the superiority of our approach in both trading performance and collaboration effectiveness. § RELATED WORK §.§ RL for Order Execution Reinforcement learning (RL) based solutions have been proposed for order execution due to its nature as a sequential decision-making task. Early works <cit.> extend traditional control theory based methods <cit.> and rely on unrealistic assumptions about the market, thus not performing well in real-world situations. Several following works <cit.> adopt a data-driven mindset and utilize model-free RL methods to learn optimal trading strategies. However, all these methods target individual order execution and do not consider the practical constraints of multi-order execution shown in Figure (<ref>), leading to sub-optimal or impractical trading behaviors. Although MARL has been widely adopted in the financial area for market simulation <cit.> and portfolio management <cit.>, there is no existing method utilizing MARL directly for order execution. To our best knowledge, this is the first work using MARL for the multi-order execution task with practical constraints. §.§ Communication in General MARL Communication is an essential approach to encourage collaboration between agents in multi-agent collaboration problems. Early works <cit.> design pre-defined communication protocols between agents.
DIAL <cit.> and CommNet <cit.> first proposed differentiable communication mechanism with deep neural networks. The following works can be divided into two sub-groups. The first group focuses mainly on “who to communicate”, i.e., throttling the communication channel rather than a fully connected global one <cit.>. While the second concentrates on “how to deliver messages”, i.e., designing different network structures for better information passing and aggregation in communication channels <cit.>. However, the messages shared in all these communication methods contain only the information from the observations of agents and do not explicitly reflect their intentions, leading to catastrophic discordance. Our work is orthogonal to these methods since we focus on “what to communicate” and the corresponding optimization algorithm, which can be easily adapted to the communication structure in the works mentioned above to make up for their shortcomings. §.§ Intention Modeling Explicit modeling of the intentions of agents has been used in Theory of Mind (ToM) and Opponent Modeling (OM) methods. ToM-net <cit.> captures mental states of other agents and predicts their future action. OM <cit.> uses agent policies to predict the intended actions of opponents. But all these works are conducted under a competitive setting and require agents to infer each other's intention, which could be inaccurate considering the instability nature of MARL <cit.>. <cit.> first conducts intention sharing between agents through communication mechanisms under cooperative settings. However, the agents are not allowed to modify their action after receiving others' intention at the current step, thus still suffering from discordance. Also, this method requires forecasting the state transitions of the environment of the next few steps, which may suffer from compounding error problems, especially in finance area where the environment is extremely noisy. While our method can improve the joint action of agents gradually through the intention sharing and refinement process, and does not use predicted state transitions or any additional environment information. § PRELIMINARY In this section, we first present the task definition of multi-order execution problem, including notations for variables and optimization goals. Then, we formulate the task as a Markov decision process of the trader interacting with the market environment. §.§ Multi-Order Execution We generalize the typical settings of order execution in previous works <cit.> to multi-order execution, where a set of orders need to be fulfilled within a predefined time horizon. As shown in Figure (<ref>), we take intraday order execution as a running example while the other time horizons follow the same schema. Thus, for each trading day, there are a set of n orders to be traded for n assets, respectively, which are denoted as a tuple ( n, c_0, 𝐝, 𝐌 ). It includes the trading directions of the orders 𝐝=(d^1, …, d^n) where d^i ∈{1, -1} stands for liquidation and acquisition, respectively and order index i ∈ [1,n]; the amount of shares to trade for each asset 𝐌=(M^1, …, M^n); and the initial cash balance of the trading account c_0 which is shared by all the trading operations during the day as explained below. For simplicity, we assume that there are T timesteps in the time horizon, i.e., one trading day. At each timestep t ∈ [1, T], the trader should propose the volumes to trade, denoted as 𝐪_t = (q_t^1, …, q_t^n). 
The market prices of the corresponding assets are 𝐩_t = (p_t^1, …, p_t^n), which are not revealed to the trader before she proposes the trading decision at t. During the trading process, for the trader, liquidating operations replenish the cash balance while acquiring operations consume it. As a result, the balance of the trading account varies at every timestep as c_t = c_t-1 + ∑_i=1^n d^i·(p^i_t· q^i_t). Different from the previous works for order execution that only optimize for a single order, the objective of multi-order execution is to maximize the overall profits while fulfilling all the orders without running out of cash, which can be formulated as follows: max_𝐪_1, …, 𝐪_T ∑_i=1^n d^i [ ∑_t=1^T (p^i_t· q^i_t) ] / ( ∑_t=1^T q^i_t), s.t. ∑_t=1^T 𝐪_t = 𝐌,   𝐪_t ≥ 0, c_t ≥ 0, ∀ t ∈{1,…,T}. The average execution price (AEP) of order i is calculated as p̅^i = [ ∑_t=1^T (p^i_t· q^i_t) ] / ( ∑_t=1^T q^i_t). The trader needs to maximize the AEP of all liquidation orders (d^i=1) while minimizing that of acquisition orders (d^i=-1). §.§ Multi-Order Execution as a Markov Decision Process The order execution problem can be formulated as a Markov decision process (MDP) (n, 𝒮, 𝒜, R, I, γ), where each agent executes one order and all the agents share a collective goal. Here 𝒮 is the state space and 𝒜 is the action space of each agent. n is the number of agents, corresponding to the order number. Note that, for different trading days, the order number n varies dynamically and can be large, which makes the joint action space extremely huge for one single-agent RL policy <cit.> to learn to execute multiple orders simultaneously. Thus, in the multi-agent RL scheme for multi-order execution, we treat each agent as the individual operator for executing one order. Each agent has a policy π^i(s^i; θ^i) which produces the distribution of actions and is parameterized by θ^i. The actions of all agents a = (a^1, ..., a^n), sampled from the corresponding policies, are used to interact with the environment, R(𝐬, a) is the reward function, and I(𝐬, a) is the transition function of the environment, which gives the next state after receiving the action. Our goal is to optimize a unified policy π = {π^1,...,π^n} with parameters θ = {θ^1,...,θ^n} to maximize the expected accumulative reward J(θ) = E_τ∼ p_π(τ)[∑_t γ^t R(𝐬_t, a_t)], where p_π(τ) is the probability distribution of the trajectory τ = {(𝐬_1, a_1), …, (𝐬_T, a_T)} sampled by π from the environment and γ is the discount factor. More implementation details of the MDP definitions are presented below. Number of agents n We define the agent to be the operator for a single asset, and each agent is responsible for the execution of the corresponding order. State space 𝒮 The state 𝐬_t ∈𝒮 describes the overall information of the system. However, for each agent executing a specific order i, the state s_t^i observed at timestep t contains only the historical market information of the corresponding asset collected just before timestep t and some shared trading status. The detailed description of the observed state information is given in Appendix <ref>. Action space 𝒜 Agent i proposes the action a^i_t ∈𝒜 after observing s_t^i at t ∈{1,…,T}. Following <cit.>, we define the discrete action space as a^i_t ∈{0, 0.25, 0.5, 0.75, 1}, which corresponds to the proportion of the target order M^i. The trading volume q^i_t = a^i_t M^i will be executed at timestep t. Moreover, ∑_t=1^T a^i_t = 1 is satisfied by fixing a^i_T = (1-∑_t=1^T-1 a^i_t) to ensure all orders are fully fulfilled. A similar setting has been widely adopted in the related literature <cit.>. We also conduct experiments on different action spaces and present the results in Appendix <ref>, which show that this action setting performs sufficiently well. Cash limitation should also be noted here. If the remaining cash balance is not adequate for all acquisition agents, the environment cuts their intended trading volumes evenly, down to the scale at which the remaining cash balance will just be used up after this timestep. For instance, if the actions of all the acquisition agents require twice as much as the remaining cash, their executed volumes will be normalized to half, to ensure the cash balance c_t ≥ 0 for 1 ≤ t ≤ T.
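The mapping from actions to executed volumes, together with the cash-limitation rule just described, can be sketched as follows. This is a simplified single-timestep illustration under our own naming assumptions (the helper name and array layout are hypothetical), not the authors' environment code.

import numpy as np

def execute_step(a, M, d, p_t, cash):
    """a: action ratios; M: order sizes; d: +1 liquidation / -1 acquisition;
    p_t: current prices; cash: current balance. Returns executed volumes, new balance."""
    q = a * M                                         # intended volumes q_t^i = a_t^i * M^i
    buy = d < 0
    need = np.sum(p_t[buy] * q[buy])                  # cash needed by acquisition agents
    supply = cash + np.sum(p_t[~buy] * q[~buy])       # balance plus cash from liquidation
    if need > supply:                                 # scale down acquisitions evenly
        q[buy] *= supply / need
    new_cash = cash + np.sum(d * p_t * q)             # d = +1 adds cash, d = -1 consumes it
    return q, new_cash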
Reward function Different from the previous works for order execution that optimize the performance of each order individually, we formulate the reward of all agents in multi-order execution as the summation of the rewards for the execution of each individual order, which includes three parts: the profitability of trading, the penalty of market impact, and the penalty of cash shortage where the cash balance has been used up and the acquisition actions are limited. First, to account for the profitability during order execution caused by the actions, following <cit.>, we formulate this term of the reward as the volume-weighted execution gain over the average price, R_e^+(𝐬_t, a^i_t; i) = d^i · (q^i_t/M^i) · (p_t^i - p̃^i)/p̃^i = d^i a^i_t ( p_t^i/p̃^i - 1 ), where p̃^i = 1/T ∑_t=1^T p^i_t is the average market price of asset i over the whole time horizon. Note that, as discussed in <cit.>, incorporating p̃^i in the reward function at timestep t does not cause information leakage, since the reward is not included in the state s_t and thus does not influence the actions of our agent. It only takes effect in back-propagation during training. Second, to avoid potential market impacts, i.e., trading volumes so large that they may harmfully influence the market, we follow the recent works <cit.> and propose a quadratic penalty for trading too much of asset i within a short time period, R_a^-(𝐬_t, a^i_t; i) = -α (q^i_t/M^i)^2 = -α (a^i_t)^2, where α is a hyper-parameter controlling the penalty degree. Third, we also penalize all the agents whenever the remaining cash balance is used up right at timestep t, as R_c^-(𝐬_t, a^i_t; i) = -σ 𝟙[c_t = 0 | c_t-1 > 0], where σ is a hyper-parameter. Thus, the reward for executing the i-th order is defined as R(𝐬_t, a^i_t; i) = R_e^+(𝐬_t, a^i_t; i) + R_a^-(𝐬_t, a^i_t; i) + R_c^-(𝐬_t, a^i_t; i) = d^i a^i_t ( p_t^i/p̃^i - 1 ) - α (a^i_t)^2 - σ 𝟙[c_t = 0 | c_t-1 > 0]. Finally, in order to optimize the execution of all the orders holistically, the overall reward function, which is shared by all the agents, is defined as the average of the rewards of all the orders, R(𝐬_t, a_t) = 1/n ∑_i=1^n R(𝐬_t, a^i_t; i).
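To make the reward definition concrete, the sketch below (a hypothetical helper, not the authors' code) computes the per-order reward terms and the shared reward for one timestep, assuming the directions, actions, prices and cash balances are given as arrays and scalars; α and σ follow the values reported later in the appendix.

import numpy as np

def step_reward(d, a, p_t, p_avg, cash_t, cash_prev, alpha=0.01, sigma=1/30):
    """d, a, p_t, p_avg: length-n arrays (direction, action ratio, current price,
    daily average market price); cash_t, cash_prev: scalars."""
    r_exec = d * a * (p_t / p_avg - 1.0)                      # execution gain term R_e^+
    r_impact = -alpha * a ** 2                                # market-impact penalty R_a^-
    r_cash = -sigma * float(cash_t == 0 and cash_prev > 0)    # cash-shortage penalty R_c^-
    per_order = r_exec + r_impact + r_cash
    return per_order.mean()                                   # shared reward, averaged over orders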
Assumptions There are two main assumptions adopted in this paper. Similar to <cit.>, (i) the temporary market impact has been adopted as a reward penalty, i.e., R^-_a, and we assume that the market is resilient and will bounce back to equilibrium at the next timestep; (ii) we ignore commissions and exchange fees, as these expenses are a relatively small fraction for the institutional investors that we mainly target. § METHODOLOGY In this section, we first briefly introduce the main challenges of solving the multi-order execution task. Then, we illustrate how the previous communication-based MARL methods fail to meet these challenges by describing the design and problems of the general framework of existing methods, further clarifying the motivation of our improvements. Finally, we introduce the details of our multi-round intention-aware communication with action refinement, including the optimization method. §.§ Problems of General Multi-agent Communication Framework There are two main challenges in solving the multi-order execution task with a MARL method: (1) The agents have to extract essential information from their own observations, e.g., judge whether it is a good opportunity to trade to derive a high profit. (2) The acquisition and liquidation orders should be coordinated with each other to maintain a reasonable cash balance without severe cash shortage. This requires the agents to be aware of the situation and decision intention of the other agents at the current timestep, and to adjust their decisions to avoid potential conflicts, e.g., congested cash consumption. However, the existing multi-agent communication methods are limited by a common inflexible framework and cannot solve all these challenges, as summarized and discussed below. Note that the following procedure is conducted at each timestep; thus, we omit the subscript t of the timestep for simplicity when the context is clear. To solve the above challenge (1), at each timestep, the i-th agent extracts a hidden state from its observation s^i with an information extractor E(·), h_0^i = E(s^i). Then, the agents communicate with each other using a communication channel C(·) and update the hidden states, for a total of K rounds for a thorough information exchange, (h^1_k, …, h^n_k) = C(h^1_k-1, …, h^n_k-1), 1 ≤ k ≤ K. Finally, the actions of the agents are generated with a decision making module D(·) as a^i ∼ D(h_K^i). The previous works on multi-agent communication have been summarized above. They all focus on improving the structure of the communication channel C, either throttling the channel for more efficient communication <cit.> or improving the information passing ability with a more complicated channel design <cit.>. However, none of these approaches breaks through the above framework, and we claim that two main problems exist within this framework. First, all hidden representations h^i_k only contain the information of the partial observations of the agents but not the actions they intend to take, making it harder for agents to reach good collaboration. Second, though multiple rounds of communication are conducted during this process, the agents only make decisions once, after the final round of communication. They have no opportunity to refine their actions afterward, leading to discordance, as it is hard for agents to reach an agreement immediately in a complicated environment like the financial market. These problems combined make the existing methods fail to solve challenge (2) mentioned above, and thus they are not suitable for the multi-order execution task.
Following the above convention, we by default omit the subscript t in the notations without causing misunderstanding, since this procedure is conducted at each timestep. The whole process is illustrated in Figure <ref>. Observable information extraction Similar to the framework described in Section <ref>, from the view of the i-th agent, at each timestep, an information extractor E is utilized to extract the patterns from the input and encode them as an initial hidden representation h_0^i of agent i from its observation s^i following Eq. (<ref>). Multi-round communication with decision making Our central improvement lies in the multi-round communication process. Instead of designing more complicated communication channels, we focus on “what to communicate” and share the intended actions of agents during each round of communication. Also, we make it possible for agents to constantly refine their actions according to the intentions of the others during this process. The process can be formulated as (h^1_k, …, h^n_k) = C(h^1_k-1 || a_k-1^1, …, h^n_k-1 || a_k-1^n), a_k^i ∼ D(h^i_k),  1 ≤ i ≤ n, 1 ≤ k ≤ K, where a_0^i is a dummy action whose value can be arbitrarily assigned, C(·) is the communication channel where agents exchange information and update their hidden states for one round from h_k-1 to h_k, and D(·) is the decision making module. The intended actions of the last round are used as the final actions actually executed in the environment, i.e., a = a_K in our method. Note that our proposed method is different from the general framework of the previous communication-based MARL methods described in Section <ref>, which only share the information extracted from the partial observations of agents and make the (final) decisions only once, after the last round of communication. The novel parts of our proposed method are emphasized with dashed lines and striped blocks in Figure <ref>. The communication channel C(·) is scalable to a varying number of agents following previous methods <cit.>. §.§.§ Intention optimization with action value attribution The intended actions a_k generated after the k-th communication round should provide the intention of every agent, which reflects the instant intuition of decision making at the current timestep. Thus, the intended actions a_k should by design reflect the true intentions of agents and be exchanged with each other through the next round of communication to further facilitate collaborative decision making, until the final round where a = a_K. To achieve this, we propose an auxiliary objective for each round of intended actions. All auxiliary objectives are optimized together with the original objective J(θ) defined in Eq. (<ref>) to keep all intended actions highly correlated with the final goal and to progressively refine the decisions round by round. We first introduce some definitions before describing the design of the auxiliary objective in detail. Recalling that we consider the agents at timestep t, we define the value function V(𝐬_t) = ∑_t'=t^T γ^t'-t R(𝐬_t', a_t') as the expected cumulative reward we could get starting from state 𝐬_t following our policy π, and the action value Q(𝐬_t, a_t) = R(𝐬_t, a_t) + γ V(I(𝐬_t, a_t)) as the expected cumulative reward if we take action a_t at timestep t and follow policy π afterwards.
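The multi-round decision procedure just described, together with the per-round auxiliary objective introduced below, can be summarized in the following PyTorch-style sketch. It is a minimal illustration under our own naming assumptions (extractor, channel, decider and q_hat are hypothetical module names supplied by the caller, and Q is estimated with q_hat for every round for brevity), not the authors' released implementation.

import torch

def decide_and_attribution_loss(obs, extractor, channel, decider, q_hat,
                                K=3, action_dim=5):
    """obs: list of n per-agent observation tensors for a single timestep."""
    h = [extractor(o) for o in obs]                          # h_0^i = E(s^i)
    a_prev = [torch.zeros(action_dim) for _ in obs]          # dummy intentions a_0
    loss, q_prev = torch.tensor(0.0), torch.tensor(0.0)
    for k in range(1, K + 1):
        # one communication round: exchange hidden states joined with last intentions
        h = channel([torch.cat([hi, ai], dim=-1) for hi, ai in zip(h, a_prev)])
        dists = [torch.distributions.Categorical(logits=decider(hi)) for hi in h]
        idx = [dist.sample() for dist in dists]              # intended actions a_k
        a_k = [torch.nn.functional.one_hot(i, action_dim).float() for i in idx]
        q_k = q_hat(obs, a_k)                                # Q(s, a_k); for k = K the paper
                                                             # uses the sampled return instead
        logp = torch.stack([dist.log_prob(i) for dist, i in zip(dists, idx)]).sum()
        loss = loss - logp * (q_k - q_prev).detach()         # -log pi_k * (Q(s,a_k) - Q(s,a_{k-1}))
        a_prev, q_prev = a_k, q_k
    return idx, loss / K                                     # final decisions and L(theta)

The per-round term uses the previous round's action value as a baseline, matching the attributed objective; only the log-probability carries gradients, while the advantage is detached.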
Denoting the intention generation process during the k-th round of communication as π_k, i.e., π_k(· | 𝐬, a_k-1) = D(C(h_k-1 || a_k-1)), we optimize π_k by defining an auxiliary objective function that maximizes the expected cumulative reward Q(𝐬, a_k) as if we took a_k as the final actions instead of a_K, for all 𝐬 and a_k-1 encountered while interacting with the environment. Note that, when k = K, this objective is consistent with our original objective J(θ), as a_K are the final actions that are used to interact with the environment. Thus, all our objectives can be uniformly denoted as J_k(θ) = 𝔼_𝐬, a_k-1[𝔼_a_k ∼π_k(·|𝐬, a_k-1; θ)[Q(𝐬, a_k)]], 1 ≤ k ≤ K, where θ are the parameters of E(·), C(·), D(·). The gradient of J_k w.r.t. θ is calculated as ∇_θ J_k(θ) = 𝔼_𝐬, a_k, a_k-1[∇_θ logπ_k(a_k | 𝐬, a_k-1; θ) Q(𝐬, a_k)]. We further expect that the intended actions should generally be refined during the communication process, which means a_k should achieve a higher return than a_k-1. Therefore, we introduce the expected return of the intended actions in the last round of communication as a baseline function into Eq. (<ref>), where Q(𝐬, a_0) = 0, ∇_θ J_k(θ) = 𝔼_𝐬, a_k, a_k-1[∇_θ logπ_k(a_k | 𝐬, a_k-1; θ) (Q(𝐬, a_k) - Q(𝐬, a_k-1))]. This is reasonable, as we would like to encourage a_k to achieve better performance than a_k-1 and penalize those that perform worse. Moreover, the policy gradient in Eq. (<ref>) remains unbiased with respect to Eq. (<ref>), since the gradient of the baseline w.r.t. θ is 𝔼_𝐬, a_k, a_k-1[-∇_θ logπ_k(a_k | 𝐬, a_k-1; θ) Q(𝐬, a_k-1)] = 𝔼_𝐬, a_k-1[-Q(𝐬, a_k-1) 𝔼_a_k[∇_θ logπ_k(a_k | 𝐬, a_k-1; θ)]] = 𝔼_𝐬, a_k-1[-Q(𝐬, a_k-1) ∇_θ 1] = 0. Action value attribution We further clarify the intuition of our auxiliary objective from the perspective of attributing the credit of the final decision making across the whole process of intention refinement during each round of communication. First, aiming at encouraging the agents to find better actions than the intended actions exchanged during the last round of communication, we define another auxiliary objective for optimizing π_k as J'_k(θ) = 𝔼_𝐬, a_k-1[𝔼_a_k ∼π_k(·|𝐬, a_k-1; θ)[Q(𝐬, a_k) - Q(𝐬, a_k-1)]]. Taking derivatives of Eq. (<ref>) w.r.t. θ and considering Eq. (<ref>), we can easily find that ∇_θ J_k(θ) = ∇_θ J'_k(θ). Thus, considering the consistency between J_K(θ) and J(θ) mentioned above, J'_K(θ), i.e., the auxiliary objective defined over the decisions of the last round, is also consistent with our original target J(θ). Also, as ∑_k=1^K (Q(𝐬, a_k) - Q(𝐬, a_k-1)) = Q(𝐬, a_K), we can see that the original optimization goal J(θ), i.e., J'_K here, has been distributed over each J'_k for 1 ≤ k ≤ K. We can tell that what we are doing is designing an action value attribution method where the value of the last decision a_K is attributed to all the intended actions. Optimization with action value attribution decomposes the final optimization objective into each round of intention-aware communication, which not only alleviates the burden of multi-agent communication optimization, but also improves decision making gradually through learning to promote the action value at each round, as shown in Eqs. (<ref>) and (<ref>). Action value estimation The last detail is to estimate the action value Q(𝐬, a) needed to calculate ∇_θ J_k(θ) in Eq. (<ref>). Normally, for k = K, Q(𝐬, a_K) can be directly calculated from the sampled trajectory, as a_K is the final decision used to interact with the environment.
For 1 ≤ k < K, we train an action value estimation model Q̂(𝐬, a) on the trajectories collected by interacting with the environment using the actual decisions a_K. Note that this procedure does not require the environment to provide any additional information about the intended actions, which guarantees generalizability for wider applications of MARL. Overall, we optimize the objective functions J_k(θ) for all communication rounds simultaneously; thus, the final loss function to minimize w.r.t. the parameters θ is defined as L(θ) = -1/K ∑_k=1^K J_k(θ). As for the implementation, we use the PPO algorithm <cit.> to optimize all the intended actions and the final decisions. The overall decision-making and optimization process of our proposed Intention-Aware Communication method is presented in Algorithm <ref>. The detailed network structures of the extractor E(·), communication channel C(·), decision module D(·) and action value estimator Q̂(·, ·) are presented in Section <ref>. § EXPERIMENTS In this section, we present the experiment settings and results with extended investigations. The reproducible code and benchmark with data will be released upon the acceptance of this paper. §.§ Datasets All the compared methods are trained and evaluated on the historical transaction data of the China A-share stock market and the US stock market from 2018 to 2021, collected from Yahoo Finance[<https://finance.yahoo.com>]. All datasets are divided into training, validation, and test sets according to time, and the statistics of all datasets are presented in Appendix <ref>. For both stock markets, we construct three datasets with a rolling window of one-year length and stride, denoted as CHW1, CHW2, CHW3 and USW1, USW2, USW3. For each trading day, several sets of orders with the corresponding initial cash budgets {(n, c_0, 𝐝, 𝐌)} are generated according to a widely used portfolio management strategy, “Buying-Winners-and-Selling-Losers” <cit.>, implemented in <cit.>, and each set of intraday execution orders includes the information about the asset name, the amount and the trading type of each order, as discussed in Sec. <ref>. The orders are the same for all the compared methods for fairness. Without loss of generality, all the orders in our datasets are restricted to be fulfilled within a trading day, which is 240 minutes long for the Chinese stock market and 390 minutes long for the US stock market. The detailed trading process is described in Appendix <ref>. §.§ Evaluation Settings §.§.§ Compared methods Since we are the first to study the simultaneous optimization of multi-order execution, we first compare our proposed method and its variants with traditional financial model based methods and some single-agent RL baselines proposed for the order execution problem. Note that, for the single-agent RL methods that are optimized for single-order execution, instead of using the summation of the rewards for all orders as the reward function, the agent is only optimized for the reward of each individual order R(𝐬_t, a_t^i; i) as defined in Eq. (<ref>), aligned with how these methods were originally proposed. Then, to illustrate the effectiveness of our proposed novel intention-aware communication mechanism, we compare our method with general RL works incorporating common MARL algorithms. All methods are evaluated with the same multi-order execution procedure described in Appendix <ref>.
* TWAP (Time-Weighted Average Price) <cit.> is a passive rule-based method which evenly distributes the order amount across the whole time horizon; its average execution price is the average market price p̃^i as defined in Eq. (<ref>). * VWAP (Volume-Weighted Average Price) <cit.> is another widely used strategy which distributes the order proportionally to the estimated market volume to reduce market impacts. * AC is derived in <cit.> as a trading strategy focusing on balancing price risk and market impacts using closed-form market heuristics. * DDQN (Double Deep Q-network) <cit.> is a single-agent value-based RL method for single-order execution optimization. * PPO was proposed in <cit.>, which utilizes the PPO algorithm to optimize single-order execution. * CommNet <cit.> first utilizes a neural network as a broadcasting communication channel to share information between agents. * TarMAC <cit.> is a multi-agent reinforcement learning method which first utilizes an attention-based communication channel. * IS <cit.> incorporates intention communication in MARL, which forecasts the future trajectories of other agents as intentions. * IaC is our proposed method, which utilizes an intention-aware communication mechanism to increase the cooperative trading efficacy for multi-order execution. Specifically, for comprehensive comparison, we conduct experiments on two variants of our method, which utilize the communication channel implementations of TarMAC and CommNet, respectively. For all the compared methods, the hyper-parameters are tuned on the validation sets and then evaluated on the test sets. For RL-based methods, the policies are trained with six different random seeds after determining the optimal hyper-parameters, and the means and standard deviations of the results on the test sets are reported. The detailed hyper-parameter settings are presented in Appendix <ref>. All RL-based methods share the same network structures for the extractor E(·), communication network C(·) (if it exists) and decision module D(·); thus the sizes of parameters are similar, for fair comparison. §.§.§ Evaluation metrics The first evaluation metric is the average execution gain (EG) over all orders, which is calculated as EG = 1/|𝔻| ∑_i=1^|𝔻| EG^i, where |𝔻| is the number of orders in the dataset, and EG^i is the execution gain of order i relative to the corresponding daily average market price p̃^i of asset i, defined as EG^i = d^i · (p̅^i_strategy - p̃^i)/p̃^i × 10^4 ‱. Here p̅^i_strategy is the average execution price of the evaluated strategy as defined in Eq. (<ref>). Note that EG is proportional to the reward R_e^+ described in Eq. (<ref>) and has been widely used in the order execution task <cit.>. EG is measured in basis points (BPs), where one basis point is 1‱. To better illustrate the profitability, we also report the additional annualized rate of return (ARR) brought by the order execution algorithm, relative to the same portfolio management strategy with the TWAP execution solution, whose average execution price is p̃^i and whose EG = 0; the detailed calculation is presented in Appendix <ref>. Following <cit.>, we also report the gain-loss ratio (GLR) of EG, 𝔼_i[EG^i | EG^i > 0] / 𝔼_i[-EG^i | EG^i < 0], and the positive rate (POS), ℙ[EG^i > 0], across all the orders in the dataset. All the above metrics, i.e., EG, ARR, GLR and POS, are better when their values are higher.
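As a concrete illustration of these order-level metrics, the following sketch (hypothetical function name; assumes the per-order strategy prices, daily average market prices and directions are given as arrays) computes EG, POS and GLR.

import numpy as np

def execution_metrics(d, p_strategy, p_market_avg):
    """d: +1 liquidation / -1 acquisition; prices are per-order arrays."""
    eg = d * (p_strategy - p_market_avg) / p_market_avg * 1e4    # EG^i in basis points
    pos = np.mean(eg > 0)                                        # positive rate
    gains, losses = eg[eg > 0], eg[eg < 0]
    glr = gains.mean() / (-losses.mean()) if len(gains) and len(losses) else np.nan
    return eg.mean(), pos, glr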
Moreover, as a reasonable execution strategy should manage the cash resources wisely to avoid shortages, our last evaluation metric is the average percentage of time of conflict (TOC), during which the agents conduct conflicted actions and suffer from cash shortage during the execution, defined as 100% × 𝔼[∑_t=1^T 𝟙(c_t = 0)] / T. TOC is better when its value is lower. Generally, a high TOC value indicates that the acquisition orders are often limited by the cash supply, which usually results in suboptimal EG. Note that, although the portfolio management strategy responsible for generating the daily orders could hold more cash, i.e., allocate a larger initial cash balance c_0 for each set of orders to offer more budget for acquisition and reduce TOC, a large cash position would lower the capital utilization and cause a lower profit rate. Thus, the initial cash budget c_0 would not be very large, which requires the multi-order execution strategy to actively coordinate liquidation and acquisition to manage the cash resources well, i.e., to achieve a low TOC value. §.§ Experiment Results The detailed experiment results are listed in Table <ref>. We can tell that (1) our proposed methods significantly improve the trading performance on the test environment compared to the other baselines and achieve the highest profits, i.e., EG, POS and GLR, on all datasets. Also, the intention-aware communication mechanism brings a significant reduction in the TOC metric, which shows that sharing intended actions through communication offers much better collaboration performance than the previous MARL methods. (2) Almost all the MARL methods with multi-order optimization achieve higher profits and lower TOC than the RL methods optimized for a single order. This illustrates the necessity of jointly optimizing multi-order execution and encouraging collaboration among the agents. (3) Although IS also shares the intentions of agents through communication, it achieves worse results than our methods, indicating that the refinement of intended actions over multiple rounds within a single timestep is important for agents to reach good collaboration in a complicated environment. Also, intention communication in IS requires predicting the future trajectories of all agents, which might not accurately reflect the true intentions of other agents. Moreover, it suffers from large compounding errors in noisy financial environments. (4) All the financial model based methods achieve TOC equal to zero, since they do not actively seek the best execution opportunities but mainly focus on reducing market impacts <cit.>, which easily yields a low TOC value but poorer EG performance. (5) There exists a huge gap between the performances of RL-based methods on the China A-share market and the US stock market. The reason may be the relatively larger daily price volatility in the Chinese stock market, as shown in Appendix <ref>. §.§ Extended Investigation To further present the necessity of jointly optimizing multiple orders and the improvement in collaboration efficiency achieved by the proposed intention-aware communication mechanism, we investigate the statistics of the market data and compare the transaction details of our strategy and other baselines. The analysis in this section is based on the trading results on the test sets of CHW1 and USW1, while the other datasets share similar conclusions.
Collaboration is necessary when conducting multi-order execution We further clarify the necessity of collaboration for multi-order execution by illustrating the trading opportunities of acquisition and liquidation orders over the trading day. Figure <ref> illustrates the general price trend of all the assets by showing the average price deviation at each minute, Δp̂_t = 𝔼[(p_t - p̃)/p̃], with p̃ defined in Eq. (<ref>). It shows that, on average, there exist acquisition opportunities (lower price) at the beginning of the trading day, while the opportunities for liquidation (higher price) do not come until the middle of the trading day. Generally, the opportunities for liquidation come later than those for acquisition, which requires careful collaboration between buyers and sellers during execution and further calls for multi-order optimization solutions, as the fulfillment of acquisition orders depends on the cash supplied by liquidation. Multi-order optimization improves significantly against single-order execution We compare the execution details of IaC and PPO to show that jointly optimizing multiple orders is necessary to achieve high profit in multi-order execution. Figure (<ref>) shows how our method and PPO distribute the given orders across the trading day on average. The bars exhibit the ratio of acquisition and liquidation orders fulfilled at every minute on average, i.e., 𝔼_t[q_t/M]. The hollow red bars show the number of orders the buyers intend to fulfill, and the solid bars show what they actually trade considering the cash limitation. We find from Figure (<ref>) that PPO tends not to liquidate much at the beginning of the day, as it is not a good opportunity for liquidation, as shown in Figure (<ref>), leading to slow cash replenishment. Although PPO intends to buy many shares in the first 30 minutes of the day when the price is low, it is severely limited by the cash shortage. It has to postpone the acquisition operations and loses the best trading opportunities during the day. On the contrary, our method coordinates both acquisition and liquidation more actively and efficiently fulfills most liquidation orders in the first hour of trading, thus guaranteeing sufficient cash supply for the acquisition orders, which shows the improvement in collaboration brought by optimizing all orders simultaneously through our method. The intended actions are gradually refined during intention-aware communication To illustrate the refinement process of the intended actions, we directly take a_t,k, generated after the k-th round of communication (1 ≤ k ≤ K = 3) from the trained policies of our two variants, as the final actions at each timestep t, and evaluate them on the test sets. We report the profitability performance (EG) and collaboration effectiveness (TOC) of these intended actions in Figure (<ref>) to exhibit the collaboration ability of the agents after each round of action refinement. We can tell that (1) all intended actions achieve good TOC performance and reasonable EG, which reflects that even the intended actions proposed before the final actions already reflect the intentions of the agents and thus offer clear information for better collaboration. (2) The EG gets higher and the TOC gradually reaches the optimum as the agents communicate, indicating that the agents manage to improve their intended actions based on the intentions of each other for better collaboration.
We also conduct case study on the transaction details of the intended actions after each round of communication in Appendix <ref> to further show the refinement of intended actions during our intention-aware communication. The intended actions have converged after multi-round communication in Figure (<ref>) shows the average difference between the intended actions of neighboring rounds with totally K=5 communication rounds. It indicates that the intended actions generally reaches convergence as the agents communicate for multiple rounds. Figure (<ref>) illustrates the influence of the total communication round number K on the EG performances of our method. The performances first improve sharply and then remain stable when K ≥ 3, which indicates that when intention-aware communication is sufficient (K≥ 3), additional communication would not bring significant improvements. These observations reflect the stability and robustness of our proposed method. § SOFTWARE FOR ORDER EXECUTION We developed a financial decision-making toolkit Qlib.RL based on Qlib <cit.>, to support order execution scenario. It offers APIs for receiving orders from upstream portfolio management systems and outputs detailed execution decisions to downstream trading program. Qlib.RL supports simultaneously execution of 1,000 orders on a machine with a NVIDIA P40 GPU and an Intel Xeon 8171M CPU, where all execution decisions would be given within 50 millisecond, which is significantly faster than the required decision time interval that is 1 minute in our practice. As for training, Qlib.RL retrains the policy every 2 months with the latest data and applies a rolling manner to maintain promising performance, which is an acceptable cost in this scenario. The corresponding codes and benchmark framework with data can be referred to https://seqml.github.io/marl4finhttps://seqml.github.io/marl4fin. § CONCLUSION AND FUTURE WORK In this paper, we formulate multi-order execution task as a multi-agent collaboration problem and solve it through an intention-aware communication method. Specifically, we model the intention of agents as their intended actions and allow agents to share and refine their intended actions through multiple rounds of communication. A novel action value attribution method is proposed to optimize the intended actions directly. The experiment results have shown the superiority of our proposed method. In the future, we plan to conduct joint optimization with order execution and portfolio management, and adapt our intention-aware communication methodology to wider RL applications. IEEEtran § DETAILED METHOD SETTINGS We provide descriptions of some essential settings of the MDP design and method implementation in this section. §.§ Statistics of datasets All the compared methods are trained and evaluated based on the historical data of China A-share stock market and US stock market. And we conduct a rolling window based data division for robust and fair evaluation. The detailed statistic of all datasets are presented in Table <ref> §.§ Details of state space The state _t ∈𝒮 describes the overall information of the whole system, and each agent can observe the private information related to the target asset of the corresponding order and some public information shared by all agents. The detailed contents of both parts are listed in Table <ref>. §.§ Multi-order execution procedure We describe the detailed multi-order execution procedure of each day in this section. The procedure is illustrated in Figure <ref>. 
The orders to execute, consisting of the assets, the amount to trade and the trading directions of the orders, are first given by an upstream portfolio management strategy. All orders should be fulfilled before the time horizon ends, i.e., within one trading day. Note that we assume it is possible for all orders to be fully fulfilled before the end of the time horizon, which is usually guaranteed by the portfolio management strategy. The trading day is divided evenly into T timesteps in total. For the i-th asset, at the beginning of timestep t, the state s_t^i is generated by the environment and fed into the agent policy π^i, and the corresponding action a^i_t is proposed by the policy. The proposed volume to execute, q^i_t = a^i_t M^i, calculated based on action a^i_t, will be executed at this timestep t. Also, the cash consumed by all acquisition operations and replenished by liquidation operations is calculated at each timestep to update the cash balance c_t. Specifically, in all of our experiments, we set the time for each period as 30 minutes; thus T = 8 for the China A-share stock market and T = 13 for the US stock market. Without loss of generality, following <cit.>, within each timestep, we use TWAP as a lower-level strategy to conduct the actual execution at each minute; in other words, we equally allocate the volume q^i_t over the minutes within timestep t. Note that one can also replace TWAP with any other order execution strategy. §.§ Estimation of Additional Annualized Rate of Return We calculate the additional annualized rate of return (ARR) brought by order-execution strategies relative to the TWAP execution strategy under the same portfolio management strategy. ARR is formally estimated as ARR ≈ [ ( 1 + EG × daily turnover rate )^(num. of trading days per year) - 1 ] × 100%, where we take the daily turnover rate as 10% and 250 trading days per year for both the Chinese and US stock markets. §.§ Hyper-parameter Settings In our experiments, we set the discount factor γ = 1, and the coefficients for the penalties R_a^- and R_c^- in the reward function defined in Eq. (<ref>) and Eq. (<ref>) are set to α = 0.01 and σ = 1/30, respectively. The hyper-parameters of all RL methods are listed in Table <ref>, together with the search ranges within which we searched for the best hyper-parameters on the validation set. §.§ Network architecture Some specific details of the network structures of the compared methods are explained here, including the extractor, the communication channel, the decision module and the value function estimator. Extractor E(·) As illustrated in Figure <ref>, the extractor network E is designed to extract the information in the observation of agent i into an embedding vector h_0^i. As the observation s^i consists of sequences of historical market information, we encode it with a temporal extractor, which is composed of a Gated Recurrent Unit (GRU) <cit.> and two fully-connected (FC) layers with ReLU λ(x) = max(0, x) as the activation function. The structure of the extractor is shared by all RL-based compared methods. Communication channel C(·) As we mentioned before, our intention-aware communication method does not require a specific form of communication channel and can be combined with arbitrary communication implementations from previous works. In this paper, we follow <cit.> and use a fully connected network and a self-attention module <cit.> as the communication channel.
Although graph neural networks (GNN) have been used as communication channels in recent works <cit.>, which is also suitable in our method, we did not utilize GNN as there is no clear graph structure between agents in multi-order execution. Nevertheless, GNN can also be incorporated with our methods in other scenarios. Decision module D(·) Receiving h^i after communication, a three-layer MLP with ReLU activation and Softmax σ(x)_j=e^x_j/Σ_m=1^Me^x_m as the last activation function is used to generate action distribution π^i = D(h^i). Value function We use the hidden representation h_0 generated by the extractor E(·) together with the actions a to estimate the action value. h_0 and a are fed into a three-layer MLP to derive the action value estimation Q̂(_t, a). To improve scalability and flexibility of order execution with various order number, all agents share the same network parameter with each other which is a common approach in many previous MARL works <cit.>. Each experiment has been conducted on the machine with an NVIDIA P40 GPU and an Intel Xeon 8171M CPU within 170 hours. And our proposed methods, with a much faster convergence speed, achieve the best performance within 35 hours of training, as shown in Appendix <ref>. § IN-DEPTH ANALYSIS §.§ Learning Analysis   We illustrated the learning situation of the compared methods on valid sets of CHW1 and USW1 in Figure <ref>. We can tell that, (1) with our intention-aware communication method, both and converge fastest achieving the highest reward among all the compared methods. (2) The convergence speeds of PPO and DDQN are quick, as they are optimized for single-order execution, which is less complicated thus easier to optimize than multi-order execution optimization. However, the final performances are limited and suboptimal due to lacking of collaboration. (3) The performance of IS has larger variance on multiple runs than TarMAC and CommNet, which might be the result of the unstable prediction of the imagined trajectories under noisy financial scenario. §.§ Influence of MDP settings We investigate the influence of MDP settings to the trading performance. We take action space 𝒜 as an example and test different action space settings for PPO, TarMAC and on CHW1 and USW1. Specifically, apart from the action space defined in Sec. <ref>, we conduct experiments on other two action space settings. The results are presented in Table <ref>. We can tell from the results that (1) on all these action space settings, achieves the highest EG and lowest TOC, which shows that the performance improvement brought by our method is consistent. (2) When the size of action space gets smaller, e.g., from {0, 0.25, 0.5, 0.75, 1} to {0, 0.33, 0.67, 1}, the strategies tend to achieve higher average EG performances. However, the performances get more unstable, and the TOC results get worse (higher). It is understandable as the agents have to trade more at one timestep, thus having a larger chance to trade much at a good opportunity of the day while suffering from large variance. In the meantime, the agents fail to conduct fine-grained execution and collaboration, resulting in worse, i.e., higher, TOC results. §.§ Influences of market situation There exists a significant gap between the EG results on China A-share stock market and US stock market for all the compared methods. We conduct analysis on the overall market situation of these two markets to explain this phenomenon. 
Specifically, we calculate the average volatility (AV) and the average strength of momentum (ASM) of stock prices on each trading day as AV = 1/|D| ∑_i=1^|D| std(p^i_t/p̃^i) , ASM = 1/|D| ∑_i=1^|D| ∑_t=1^T-1 |p^i_t - p^i_{t-1}| , where std denotes the standard deviation, |D| is the number of orders in the dataset and p̃^i is the average market price of asset i defined in Eq. (<ref>). The statistics are shown in Table <ref>. We can see that both the ASM and the AV of the China A-share datasets are larger than those of the US stock market, indicating that the price movements of Chinese stocks tend to be larger and show more obvious trends, which makes it easier for RL policies to find good trading opportunities. §.§ Case study Figure <ref> illustrates the detailed trading situation of the intended actions after different rounds of communication on USW1 in one trading day. It shows that the agents slightly sacrifice the profit of the liquidation orders and trade earlier to supply cash, so as to maximize the overall profit of both acquisition and liquidation. The agents manage to gradually refine their actions and achieve better collaboration through multiple rounds of intention-aware communication. The case study in Figure <ref> clearly presents the collaborative behavior of the learned policy, illustrating the efficacy of our proposed method.
http://arxiv.org/abs/2307.00819v2
20230703075741
A Data-driven Under Frequency Load Shedding Scheme in Power Systems
[ "Qianni Cao", "Chen Shen" ]
eess.SY
[ "eess.SY", "cs.SY" ]
A Data-driven Under Frequency Load Shedding Scheme in Power Systems Qianni Cao, Student Member, IEEE, Chen Shen, Senior Member, IEEE Under frequency load shedding (UFLS) constitutes the very last resort for preventing total blackouts and cascading events. Fluctuating operating conditions and the weak resilience of the future grid require UFLS strategies to adapt to various operating conditions and non-envisioned faults. This paper develops a novel data-enabled predictive control algorithm, KLS, to achieve optimal one-shot load shedding for power system frequency safety. Our approach yields a network that facilitates a coordinate transformation from the delay-embedded space to a new space, wherein the dynamics can be expressed in a linear manner. The network is specifically tailored to effectively track parameter variations in the dynamic model of the system. To address approximation inaccuracies and the discrete nature of load shedding, a safety margin tuning scheme is integrated into the KLS framework, ensuring that the system frequency trajectory remains within the safety range. Simulation results show the adaptability, prediction capability and control effect of the proposed UFLS strategy. Koopman theory, UFLS, time delays, optimal emergency frequency control, parameter uncertainty. § INTRODUCTION §.§ Motivation Under frequency events in bulk power systems are generally caused by sudden large active power deficits, such as generator tripping or load surges. Preventing such frequency insecurity is essential for the secure operation of power systems. Under frequency load shedding (UFLS) constitutes the very last resort for preventing total blackouts and cascading events. The most standard and widely used strategy is to shed pre-assigned loads in discrete steps or blocks; each stage is implemented when the frequency falls below a preset load shedding threshold. In modern power systems, the uncertainty of renewable generation leads to ever-fluctuating operating conditions. The conventional UFLS strategy, which determines the load shedding amount based on anticipated operating conditions and faults, faces challenges in working well under various operating conditions <cit.>. Therefore, it is necessary to develop UFLS schemes that adapt to various operating conditions and events and decide the optimal load shedding amount so as to guarantee the hard limits on the frequency trajectories. In this paper, an online data-enabled UFLS scheme is designed for emergency frequency control. With its frequency prediction capability under various operating conditions and power imbalances, it leverages online input/output measurements to achieve safe and optimal online control with minimal one-shot load shedding, instead of shedding loads in multiple stages, thereby speeding up the recovery of the system frequency. §.§ Literature Review There have been abundant works on frequency control. A challenging problem is optimal frequency trajectory tracking, where a control policy should keep a dynamical system within the safety range while minimizing the load shedding amount. A general solution is to formulate control problems as optimization problems. The key ingredient of the optimal control problem is an accurate parametric state-space model of the system. A standard solution is utilizing classical representations of system dynamics, such as the swing equation and the first-order PFR dynamics. 
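For reference, the following is a minimal sketch of such a classical representation, a single-machine swing equation with load damping, integrated with forward Euler; the inertia constant H and damping D are illustrative values, not parameters of any specific system.

```python
import numpy as np

def swing_response(delta_p, H=4.0, D=1.0, dt=0.01, T=30.0):
    """Per-unit frequency deviation of a single-machine swing equation
    2H * d(omega)/dt = delta_p(t) - D * omega, using forward Euler."""
    steps = int(T / dt)
    omega = np.zeros(steps + 1)
    for k in range(steps):
        omega[k + 1] = omega[k] + dt * (delta_p(k * dt) - D * omega[k]) / (2.0 * H)
    return omega

# Step power deficit of 0.1 p.u. applied at t = 1 s
trace = swing_response(lambda t: -0.1 if t >= 1.0 else 0.0)
print(trace.min())  # deepest deviation, approaching -0.1 p.u. in steady state
```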
These methods require prior knowledge of system parameters<cit.>, or the need to know the power deficit in the system when faults occur <cit.>. However, parameters such as inertia have become difficult to obtain accurately due to the high penetration of power electronics converter-interfaced devices <cit.>. Moreover, the power deficit in the system cannot be measured directly in practical power networks. Although some methods such as <cit.> use data-driven approaches to estimate these parameters, the accuracy of the model itself may be limited. For example, the traditional SFR model does not explicitly consider the impact of load-frequency dependence <cit.>. As power systems are becoming more complex and data is becoming more readily available, it is necessary to develop methods that uses only input/output data measured from the unknown system to predict the future trajectories <cit.>. Since nonlinear system dynamics renders the optimal control problem intractable, it is desirable that the learned system model linear. Koopman-based control framework <cit.> was proposed to provide linear representations of system dynamics in the context of black-box systems. Fluctuating operating conditions and weak resilience of the future grid require frequency control strategies adapt to various operating conditions and non-envisioned faults. Achieving generalization capability is one of the key challenges in data-driven control. Otherwise, for anticipated operating conditions and faults, control strategies can be devised through offline pre-decision making and online matching, thereby diminishing the significance of data-driven control. However, there is no guarantee that a system model trained for specific predetermined scenarios generalize to data outside of the distribution of the training set <cit.>. Hence, it is of utmost importance to enhance the generalization capability of data-driven control. The extraction of model parameter information from the training set, obtained under predetermined operating conditions and envisioned fault scenarios, is critical. For data-driven control based on the identified system, the generalization capability relies on the prediction capability of the identified system. In other words, Koopman linear representations should be able to track parameter variations in the original parametric state space system model using measurements. Since Koopman operator is infinite-dimensional, finite Koopman invariant subspace is often approximated to gear toward prediction and control. However, representation errors of Koopman operators are inevitable <cit.>. Inaccurate estimations and predictions and may degrade the quality of the obtained optimal control. In Ref. <cit.>, the impact of representation errors of Koopman eigenpairs on the control performance of LQR controllers was evaluated. The adaptive nature of closed-loop control theoretically allows one to compensate for modeling discrepancies and to account for disturbances<cit.>. However, in practice, the control decision-making in power systems relies on simulation models. To mitigate the risks associated with inaccurate simulation model parameters and algorithm limitations, closed-loop strategies involving load shedding measures are rarely implemented <cit.>. Therefore, it is necessary to address the potential impact of representation errors on open-loop control. Moreover, in most load shedding schemes, it is often assumed that the load shedding amount at each bus can be a continuous value<cit.>. 
This assumption overlooks the fact that load shedding is accomplished by shedding candidate feeders. In practical engineering, the feasible amount of load shedding is restricted to discrete values, with each discrete interval representing the load associated with a feeder. Although the optimal control problem can be formulated as an MILP problem to decide whether to shed a feeder or not <cit.>, solving an MILP is NP-hard, making it ill-suited for online control. Therefore, Ref. <cit.> decides the control inputs of a stage beforehand and then rounds them to the nearest discrete value. However, this leads to a discrepancy between the actual and optimal shedding amounts, which affects the dynamic behavior of the system after control. Consequently, when designing the control policy, it is essential to explicitly consider the potential effects of this discrepancy and ensure that the dynamical system remains within the safety range while minimizing the economic losses caused by load shedding. §.§ Contribution According to the literature review, to address the frequency stability issue in hybrid AC/DC power grids with high penetration of renewable energy sources, we face three main challenges: 1) system identification for optimal control under unanticipated operating conditions and power imbalances; 2) analyzing the system performance when applying control policies computed from inaccurate Koopman representations; 3) designing a control strategy that works with discrete feasible load shedding values. In response to the aforementioned research gaps, the main contributions of this study are outlined as follows: * We introduce a novel data-enabled predictive control algorithm, referred to as KLS, that achieves optimal one-shot load shedding for power system frequency safety. The KLS algorithm demonstrates adaptability to diverse operating conditions and under frequency events, allowing for precise load shedding strategies. * We investigate how approximation inaccuracies in the Koopman linear representations influence the control strategies and the controlled frequency trajectories. * By formulating a safety margin tuning scheme within the framework of KLS, we ensure that the system frequency trajectory remains within the prescribed hard limits when approximation inaccuracies exist and when the feasible amount of load shedding is restricted to discrete values. The rest of this paper is organized as follows. Section II proposes the KLS strategy for under frequency events. Section III investigates how approximation inaccuracies in the Koopman linear representations influence the control effect, and a safety margin tuning method is proposed. In Section IV, a CIGRE-LF system case is presented and the effectiveness of the proposed control strategy is verified. Section V provides the conclusion. § EMERGENCY FREQUENCY CONTROLLER DESIGN In this section, we design a deep neural network to learn a coordinate transformation from the delay-embedded measurement space into a new space where it is possible to represent the dynamics linearly. An optimal control problem is then formulated to solve for the one-shot load shedding amount. Let an autonomous nonlinear dynamical system be governed by x_t+1=f(x_t,y_t) where t=1,2,...,T, T is the prediction horizon, x∈R^n_x is the state, y∈R^n_y denotes the algebraic variables and f(·) is a nonlinear function. 
Considering the computational inefficiency of calculating optimal control for high-dimensional nonlinear dynamical functions, Koopman theory <cit.> provides a perspective that nonlinear dynamics can be represented in terms of an infinite-dimensional linear operator acting on the space of all possible measurement functions of the system. Even if the function f(·) is unknown, it is still possible to estimate the Koopman operator using the system's measurements. However, the estimation of the Koopman operator relies exclusively on data, either numerical or experimental. In the context of power systems, it is common practice to rely on numerical data acquired from simulations. When collecting the dataset, it is necessary to preset the operating conditions and emergency events that trigger the system dynamics. Diverse operating conditions lead to variations in the parameters of the grid state-space model given in Eq. (<ref>), and hence in the Koopman linear representations <cit.>. Thus, a significant challenge in linear predictive system modeling is adaptation to the different operating conditions and faults in the training set. Although it is feasible to incorporate a wide range of operating conditions and events within the training set, it is impractical to exhaustively account for every scenario. Therefore, the linear representation should also possess the ability to generalize, allowing the linear dynamic model of the system, which is trained for specific predetermined scenarios, to extend its prediction capability to systems operating under non-predefined conditions and faults that are not included in the sample set. In order to explicitly represent the variations of operating conditions and the complexity of emergency events in the state-space grid model Eq. (<ref>), we utilize a vector of variables m to represent a subset of uncertain model parameters that are challenging to obtain online. To incorporate the uncertainty of these parameters into the system model, a modified model is evaluated as x_t+1=f(x_t,y_t,m_t), m_t+1=h(x_t,y_t,m_t). Compared with Eq. (<ref>), Eq. (<ref>) provides a more general form of a deterministic power system model which accounts for uncertainties. In Eq. (<ref>), x_a^⊤=[x^⊤ m^⊤] can be defined as the pseudo-state variables. The augmented model was first introduced in Ref. <cit.>. Remark 1: Many model parameters, such as the system inertia, are typically assumed to be time-invariant, although their initial values may vary across different data samples. There also exist time-varying parameters, such as the power deficit in the system, whose temporal variations can be captured using discrete-time dynamic equations, as presented in Eq. (<ref>). Since m is hard to measure, it constitutes hidden, or latent, variables that are not directly measured but are dynamically important. Thus, the challenge of adapting the linear representations to accommodate the parameter variations transforms into the challenge of accounting for the hidden variables in the model. Time-delay embedding provides an approach to augment these hidden variables, and under certain conditions, given by Takens' embedding theorem <cit.>, the delay-augmented state yields an attractor that is diffeomorphic to the underlying, though unmeasured, full-state attractor <cit.>. 
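As a small illustration, the following sketch stacks the most recent frequency and voltage measurements into the delay-embedded vector used below; the window length and sampling step are illustrative assumptions.

```python
import numpy as np

def delay_embed(omega_hist, y_hist, window):
    """Stack the most recent `window` samples of frequency deviations and
    algebraic variables (e.g., voltages) into one delay-embedded vector.

    omega_hist: (T,) array of frequency deviation samples
    y_hist:     (T, n_y) array of algebraic variable samples
    window:     number of past samples to keep (the delay horizon)
    """
    omega_win = omega_hist[-window:]           # omega_{t-tau:t}
    y_win = y_hist[-window:].reshape(-1)       # y_{t-tau:t}, flattened
    return np.concatenate([omega_win, y_win])

# Example: a 300-sample (e.g., 300 ms at 1 ms resolution) post-fault window
rng = np.random.default_rng(0)
embedded = delay_embed(rng.normal(size=500), rng.normal(size=(500, 3)), window=300)
print(embedded.shape)  # (300 + 300*3,) = (1200,)
```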
Here, we design a deep neural network to learn a coordinate transformation from the delay-embedded space into a new space where it is possible to represent the dynamics in a linear form and also to track the parameter variations in the system with input/output data. The deep neural network is illustrated in Appendix <ref>. The data set collected for training the network is described as follows. For a given power system, a specific anticipated operating condition α∈𝒜 is considered, and a representative fault β∈ℬ is introduced. 𝒜 represents a predefined set of typical operating conditions. ℬ denotes a predefined set of typical faults. Additionally, load shedding amounts u=[u_1,...,u_i,...,u_I] ∈𝒰 are defined at the load nodes, where each u_i (i=1,2,...,I), expressed in per unit with the load level at bus i as the base value, is a uniformly distributed random number between 0 and 1. Subsequently, time-series data of the system's center-of-inertia frequency (referred to as the system frequency hereafter) Ω={ω^α,β | α∈𝒜, β∈ℬ} are collected at time points t=1,2,...,T, resulting in a sequence of data [ω_1^α,β, ω_2^α,β, ..., ω_T^α,β]. The obtained Ω and u∈𝒰 are utilized as training data for the linear prediction model. Details of the network architecture and the loss function are given in Appendix <ref>. The latent extraction layers in the network are specifically designed to monitor variations in the parameters of the state-space model. Based on Koopman theory, we assume that the frequency dynamics of the system is governed by the linear dynamic system equation represented in Eq. (<ref>): g_t+1=Ag_t+Bu_t, where g_t=[ω_t, φ(ω_t-τ:t, y_t-τ:t)]^⊤, ω_t is the deviation of the frequency from its nominal value (in per unit) at time t, and ω_t-τ:t=[ω_t-τ, ω_t-τ+Δt, ..., ω_t], y_t-τ:t=[y_t-τ, y_t-τ+Δt, ..., y_t] represent the time series of the system frequency (state variable) and the voltages (algebraic variables), respectively. φ denotes a neural network with a prescribed activation function and connectivity structure. With the loss function defined in Appendix <ref>, it is feasible to train the parameters of φ, as well as the matrices A and B. With the known parameters of φ, A, and B, one can utilize Eq. (<ref>) to predict the future trajectory of the system frequency variation given ω_1^α,β and u. Herein, we refer to the dynamical system described by Eq. (<ref>) as a Koopman linear system. Given the inevitable presence of training errors in the parameters of φ, A, and B, the frequency trajectory of the Koopman linear system may deviate from the actual trajectory. For clarity, we denote the former as [ω̅_2^α,β, ..., ω̅_T^α,β]. Remark 1: The time intervals between the time points t=1,2,...,T may not be consistent with the time intervals in t-τ, t-τ+Δt, ..., t. In Section <ref>, the time intervals in t=1,2,...,T are set to 1 s, while in t-τ, t-τ+Δt, ..., t, Δt is set to 1 ms. §.§ Koopman-Operator-Based Emergency Frequency Control Strategy Combined with the Koopman model predictive control proposed in Ref. <cit.>, the optimal shedding amount is obtained by solving the following optimal control problem: min_u u^⊤Ru s.t. ω̅_t≥ω_min, ∀ t ∈{1,...,T}, ω̅_T≥ω_∞min, g_t+1=Ag_t+Bu_t, ∀ t, where R is a positive definite matrix encoding the cost of load shedding at each bus, ω̅_t is the system frequency at time t predicted by the Koopman linear system, ω_min is the maximum allowed frequency deviation, and ω_∞min is the maximum allowed steady-state frequency deviation. 
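For illustration, a minimal sketch of this quadratic program using cvxpy is given below. The one-shot input u is held constant over the horizon and the predicted frequency is read out as the first component of the lifted state g_t; the matrices A, B, the initial lifted state g0, and the limits are placeholders that would come from the trained network and the grid code, and the bound u ≤ 1 is an illustrative assumption.

```python
import cvxpy as cp
import numpy as np

def solve_one_shot_shedding(A, B, g0, R, T, omega_min, omega_inf_min):
    """Solve min u^T R u s.t. the predicted frequency stays above the limits,
    with the Koopman rollout g_{t+1} = A g_t + B u and a constant (one-shot) u."""
    n_u = B.shape[1]
    u = cp.Variable(n_u, nonneg=True)          # per-bus shedding, in per unit
    g = g0
    constraints = []
    for t in range(T):
        g = A @ g + B @ u                      # lifted-state rollout
        constraints.append(g[0] >= omega_min)  # frequency limit at every step
    constraints.append(g[0] >= omega_inf_min)  # steady-state limit at the final step
    constraints.append(u <= 1.0)               # cannot shed more than the bus load
    prob = cp.Problem(cp.Minimize(cp.quad_form(u, R)), constraints)
    prob.solve()
    return u.value
```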
Here, we assume that the prediction length T is sufficiently long for the system frequency to reach a steady state by time T. The optimal control problem in Eq. (<ref>) is a quadratic programming problem with R being a positive definite matrix, and it can be solved in polynomial time. Remark 1: The values of ω_min and ω_∞min can be adjusted based on the interplay between data-driven load shedding and the traditional UFLS scheme. The traditional UFLS scheme, as it initiates load shedding only after a certain deviation in the system frequency occurs (e.g., when the system frequency drops to 49 Hz), may lead to larger system faults due to the delayed timing of load shedding, resulting in greater power deficits and consequently higher load losses. If the objective of data-driven load shedding is to avoid triggering the traditional UFLS scheme, ω_min can be set to 49.0 Hz. On the other hand, if the data-driven load shedding aims to fully replace the traditional UFLS scheme and ensure that the system's minimum frequency remains above the minimum operating frequency of synchronous generators (e.g., 47 Hz), ω_min can be set to -3 Hz. In Section <ref>, we choose ω_min as 49.0 Hz as an illustrative example to show the effectiveness of KLS. In practice, continuous adjustment of load shedding is difficult to achieve, and it is often necessary to choose whether or not to shed the load on a particular feeder line, so the actual load shedding takes a series of discrete values. Let the optimal load shedding obtained by solving the optimal control problem in Eq. (<ref>) be denoted as 𝐮̅_*=[u̅_1*,...,u̅_i*,...,u̅_I*]; the actual load shedding amount is then given as Q_d(u̅_i*) = nd if nd ≤ u̅_i* < (n+0.5)d, and Q_d(u̅_i*) = (n+1)d if (n+0.5)d ≤ u̅_i* < (n+1)d, where d represents the quantization interval, which physically refers to the load shedding amount on a single feeder line, n is an integer, and Q_d(u̅_i*) denotes the actual load shedding amount at each load node i when the discrete interval is d. Solving the optimal control problem in Eq. (<ref>) and rounding the resulting solution as described in Eq. (<ref>) is referred to as the Koopman-based load shedding strategy (KLS). Remark 1: An alternative approach is to round the computed control quantity up to the nearest larger value, as outlined below: Q_d(u̅_i*) = (n+1)d if nd < u̅_i* ≤ (n+1)d. However, the strategy in Eq. (<ref>) results in over-shedding. Admittedly, given the same linear representation of the system, the strategy presented in Eq. (<ref>), which sheds more load than Eq. (<ref>), is more likely to ensure that the system frequency does not violate the safety constraints. Nevertheless, when implementing the strategy described in Eq. (<ref>), it is also possible to ensure the safety of the system frequency by tuning a safety margin in the constraints of Eq. (<ref>). The design of the safety margin in KLS will be presented in Section <ref>. By avoiding rounding the optimal load shedding amount up to the nearest larger value, we achieve a smaller amount of load shedding while maintaining system frequency safety (this will be demonstrated in Section <ref>). 
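A small numeric sketch of the two quantization rules above (rounding each bus's shedding amount to the nearest multiple of the feeder size d, and rounding up):

```python
import numpy as np

def quantize_nearest(u_opt, d):
    """Round each bus's optimal shedding amount to the nearest multiple of d
    (the nearest-value rule used by KLS)."""
    return np.round(np.asarray(u_opt) / d) * d

def quantize_ceiling(u_opt, d):
    """Round up to the next multiple of d (the over-shedding alternative, KLS-C)."""
    return np.ceil(np.asarray(u_opt) / d) * d

u_opt = np.array([0.12, 0.26, 0.49])   # optimal per-unit shedding from the QP
print(quantize_nearest(u_opt, d=0.1))  # approximately [0.1 0.3 0.5]
print(quantize_ceiling(u_opt, d=0.1))  # approximately [0.2 0.3 0.5]
```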
§ ERROR ESTIMATION AND SAFETY MARGIN DESIGN §.§ Impact of Koopman representation errors on control effects In Section <ref>, we employed constraints in the optimal control problem to ensure that the frequency in the Koopman prediction model remains above the acceptable minimum values. However, in actual power systems, the optimal load shedding obtained from Eq. (<ref>) may violate the prescribed hard limits. Therefore, in this section, we first analyze some of the main reasons for the failure of the optimal strategy obtained from Eq. (<ref>). Secondly, we propose adding a safety margin to the frequency limits in the constraints of Eq. (<ref>). An explicit approach is introduced for calculating the safety margin. By integrating the safety margin, the optimal control strategy derived from KLS effectively ensures that the system frequency complies with the prescribed hard limits in the actual system. Despite attempts to model the system dynamics accurately, there are inevitable representation errors in the finite-dimensional approximation of the Koopman operator, manifesting as minor prediction errors. This analysis aims to determine whether even slight deviations between the obtained and actual Koopman linear dynamics can compromise the desired system properties, potentially violating the imposed hard limits. Assume that g̅ and (A̅,B̅) can be identified from data, and that the model errors are bounded in terms of the induced infinity norm as ‖Δ_A‖≤ε_A, ‖Δ_B‖≤ε_B, where Δ_A=A̅-A, Δ_B=B̅-B. By applying the definition of (A̅,B̅), the modelled system dynamics can be transformed into g coordinates: g_t+1=Ag_t+Bu_t+β(x_t) +ε_g(Ag(x_t)-g(x_t+1)). Define γ(x_t+1)=β(x_t)+ε_g(Ag(x_t)-g(x_t+1)). Further, we assume the disturbance is norm-bounded: γ(x_t)∈{γ∈ℝ^n | ‖γ‖≤ε_γ}. Define ℛ=[ R_0,0; R_1,1 R_1,0; ⋮ ⋱ ⋱; R_T,T ⋯ R_T,1 R_T,0 ], where each R_i,j is a matrix of compatible dimension and denotes a block in the i-th block row of ℛ, with both indices starting from 0. We denote the set of such block-lower-triangular matrices by ℒ^T,p× q. For the dynamics in Eq. (<ref>) and a fixed horizon T, we define 𝒢(x) and u as the stacked states and inputs up to time T, i.e., 𝒢^⊤(x)=[ g^⊤(x_0) g^⊤(x_1) ⋯ g^⊤(x_T) ], u^⊤=[ u_0^⊤ u_1^⊤ ⋯ u_T^⊤ ], γ^⊤=[x_0^⊤ γ_0^⊤ ⋯ γ_T-1^⊤], where u_0=u_1=⋯=u_T for one-shot load shedding. We can concatenate the dynamics matrices as 𝒜=blkdiag(A,⋯,A,0), ℬ=blkdiag(B,⋯,B,0). Note that we embed x_0 as the first component of the disturbance process. Based on System Level Synthesis (SLS) <cit.>, we have a direct optimization over the system responses ϕ_g, ϕ_u defined as [ 𝒢^⊤(x) u_* ]= [ ϕ_g; ϕ_u]γ, where ϕ_g∈ℒ^T,n× n, ϕ_u∈ℒ^T,m× n are two block-lower-triangular matrices. It has been proved that for any {ϕ_g,ϕ_u}∈ℒ satisfying [𝕀-Z𝒜 -Zℬ][ ϕ_g; ϕ_u]=𝕀, the controller ϕ_uϕ_g^-1∈ℒ achieves the desired response, where Z is the block-downshift operator, i.e., a matrix with the identity matrix on the first block subdiagonal and zeros elsewhere. For the identified model 𝒜̅=blkdiag(A̅,⋯,A̅,0) and ℬ̅=blkdiag(B̅,⋯,B̅,0), the block-lower-triangular matrices {ϕ̅_g,ϕ̅_u} satisfy: [𝕀-Z𝒜̅ -Zℬ̅][ ϕ̅_g; ϕ̅_u]=𝕀. By rewriting Eq. (<ref>), we can obtain [𝕀-Z𝒜 -Zℬ] ϕ̅ =𝕀-ZΔϕ̅, where Δ=Z[Δ_𝒜 Δ_ℬ], ϕ̅^⊤=[ϕ̅_g^⊤ ϕ̅_u^⊤], and Δ_𝒜, Δ_ℬ are block-diagonal matrices satisfying Δ_𝒜=𝒜̅-𝒜, Δ_ℬ=ℬ̅-ℬ. The response of the system (𝒜,ℬ) with the controller ϕ̅_uϕ̅_g^-1 is given by [ 𝒢^⊤(x) u_* ]=(ϕ̅+ϕ̅Δ(𝕀-ϕ̅Δ)^-1ϕ̅)γ. We decompose ϕ̅, Δ, and γ as follows to separate the effects of the known initial condition x_0 from the unknown future disturbances γ_0:T-1: ϕ̅=[ ϕ̅_g; ϕ̅_u]=[ϕ̅^0 | ϕ̅^γ], γ=[ x_0^⊤ γ_0:T-1^⊤]^⊤, Δ= [ Δ^0 Δ^γ], where ϕ̅^0 is the first block column of ϕ̅, and Δ^0 is the first block row of Δ. Then we have [ 𝒢̅(x); 𝐮̅_*] -[ 𝒢(x); 𝐮_*] =(ϕ̅-ϕ)γ+ϕ̅^γΔ^γ(𝕀-ϕ̅^γΔ^γ)^-1ϕ̅γ, where [𝕀-Z𝒜 -Zℬ](ϕ̅-ϕ)=Δϕ̅=ϕ̅^γΔ^γ and (𝕀-ϕ̅^γΔ^γ)^-1=∑_k=0^∞(ϕ̅^γΔ^γ)^k, where ‖Δ^γ‖≤ε_A+ε_B. Therefore, it can be concluded that when ε_A and ε_B converge to 0, [ 𝒢̅^⊤(x) 𝐮̅_*^⊤ ]^⊤-[ 𝒢^⊤(x) 𝐮_*^⊤ ]^⊤ converges to 0. 
In other words, the error of the open-loop dynamics is limited by the representation errors of the Koopman eigenpairs. §.§ Safety margin tuning Although there is only a small discrepancy between the optimal control strategy derived from Eq. (<ref>) and the true optimal load shedding amount, after rounding to the nearest feasible value the two may be mapped to different discrete values, which causes a discrepancy of d, the quantization interval. Consequently, it is important to revise the optimal control strategy in Eq. (<ref>) to ensure that the system frequency trajectory remains within the prescribed hard limits. In this section, we further design the constraints in Eq. (<ref>) to prevent the rounded optimal control strategy from causing the system frequency to exceed the prescribed hard limits. Proposition 1. Replacing the frequency limits in Eq. (<ref>) with Eq. (<ref>) ensures that the optimal control strategy obtained from KLS, when rounded to the nearest value, does not violate the prescribed hard limits on the system frequency: ω̅_t≥ω_min+ζ, where ζ satisfies ζ ≥ ‖C∑_k=0^t-1A^(t-1)-kB‖ d/2 + max_α∈𝒜,β∈ℬ,u∈𝒰 |ω̅_t^α,β(u)-ω_t^α,β(u)|, where C is an observation matrix with the first element equal to 1 and the remaining elements equal to 0. Proof. The constraints in the optimal control problem Eq. (<ref>) guarantee that the minimum value of ω̅_t^α,β(u̅_*^α,β) is no less than ω_min+ζ, and that the steady-state value is no less than ω_∞min+ζ. Therefore, it is crucial to find an upper bound on the difference between ω̅_t^α,β(u̅_*^α,β) and ω_t^α,β(Q_d(u̅_*^α,β)) in order to determine the value of ζ. The estimation of this upper bound is given as follows: |ω̅_t^α,β(u̅_*^α,β)-ω_t^α,β(Q_d(u̅_*^α,β))| ≤ |ω̅_t^α,β(u̅_*^α,β)-ω̅_t^α,β(Q_d(u̅_*^α,β))| + |ω̅_t^α,β(Q_d(u̅_*^α,β))-ω_t^α,β(Q_d(u̅_*^α,β))| ≤ ‖C∑_k=0^t-1A^(t-1)-kB‖ d/2 + |ω̅_t^α,β(Q_d(u̅_*^α,β))-ω_t^α,β(Q_d(u̅_*^α,β))|, where u̅_*^α,β represents the optimal load shedding solution obtained by solving Eq. (<ref>), Q_d(u̅_*^α,β) denotes the actual load shedding amounts at the load nodes, and ω̅_t^α,β(u̅_*^α,β) corresponds to the predicted frequency of the linear prediction system at time t when the load shedding amount is u̅_*^α,β. ω_t^α,β(Q_d(u̅_*^α,β)) and ω̅_t^α,β(Q_d(u̅_*^α,β)) respectively represent the actual and the predicted system frequency at time t when the load shedding amount is Q_d(u̅_*^α,β). The values of ω_t^α,β(·) can be obtained through power system simulation, while the values of ω̅_t^α,β(·) can be calculated using the linear prediction system Eq. (<ref>). The proof of the second inequality in Eq. (<ref>) is as follows: |ω̅_t(𝐮̅)- ω̅_t(Q(𝐮̅))| =|CA̅^tg(x_0)+C∑_k=0^t-1A̅^(t-1)-kB̅u_t -CA̅^tg(x_0)-C∑_k=0^t-1A̅^(t-1)-kB̅Q(u_t)| =|C∑_k=0^t-1A̅^(t-1)-kB̅(u_t-Q(u_t))| ≤ ‖C∑_k=0^t-1A̅^(t-1)-kB̅‖ d/2. The safety margin ζ is expected to ensure that KLS keeps the system frequency within the safe range under the anticipated operating conditions α∈𝒜 and faults β∈ℬ. Therefore, the parameter ζ must satisfy Eq. (<ref>): ζ = ‖C∑_k=0^t-1A^(t-1)-kB‖ d/2 + max_α,β |ω̅_t^α,β(Q_d(u̅_*^α,β))-ω_t^α,β(Q_d(u̅_*^α,β))|. Therefore, when the inequality Eq. 
(<ref>) holds, it ensures that the frequency trajectory of the actual system is within the safety range, under the load shedding amount Q_d(u̅_*^α ,β). ▪ Remark 1: Comparing Eq.(<ref>) and Eq.(<ref>), the maximum frequency prediction error of the Koopman linear system on the training and testing sets is employed as an estimate of ω̅_t^α ,β(Q_d(𝐮̅_*^α ,β))-ω_t^α ,β(Q_d(𝐮̅_*^α ,β)). This estimation is valid when there is an insignificant correlation between the frequency prediction error of the Koopman linear system and d. Given that u_i follows a uniform distribution in the training set, we deduce the presence of this circumstance. Remark 2: Although manual tuning of ζ is possible, for instance, by calculating the system frequency after rounding the optimal control strategy to the nearest value for each operational condition and fault in the training set and selecting samples where the system frequency fails to meet the hard limits, Proposition 1 offers an analytical approach for tuning of ζ. This approach eliminates the need to compute the system frequency after rounding the optimal control strategy for each sample in the training set and avoids the extensive experimentation required to find suitable values of ζ. Instead, it relies on A, B in Eq.(<ref>), and the prediction error already computed during the training of the encoder, thus enhancing the efficiency of ζ design. Here, we further discuss when does the equality holds in the inequality (<ref>). In the deviation of the upper bound for ω̅_t^α ,β(u̅_*^α ,β)-ω_t^α ,β(Q_d(u̅_*^α ,β)), the first inequality in (<ref>) and the last inequality in (<ref>) are based on the sub-multiplicative inequality and the triangle inequality of the Frobenius norm, respectively. For any two arrays 𝒜 and ℬ, equality for the triangle inequality holds when the two arrays are linearly dependent, while equality for the sub-multiplicative inequality holds if and only if each row of 𝒜 and each column of ℬ are linearly dependent. ω̅_t^α ,β(u̅_*^α ,β)-ω_t^α ,β(Q_d(u̅_*^α ,β)) is often strictly lower than the upper bound derived in Eq.(<ref>). The reason is the equality conditions of the triangle inequality and the sub-multiplicative inequality in (<ref>) and (<ref>) do not necessarily hold. The gap between ω̅_t^α ,β(u̅_*^α ,β)-ω_t^α ,β(Q_d(u̅_*^α ,β)) and its upper bound will be further illustrated in the simulation results in Section <ref>. § CASE STUDY In this section, the effectiveness of KLS is illustrated by a case study on the CloudPSS platform <cit.>, <cit.>. All of the following tests are conducted on PCs with Intel Xeon W-2255 processor, 3.70 GHz primary frequency, and 128GB memory. §.§ Test System and Datasets To validate the effectiveness of our proposed method, we conducted simulation experiments the CIGRE-LF test system, which is adapted from a real provincial power grid in China. The CIGRE-LF consists of 102 500-kV buses, and possesses a load level of 2600 MW, with installed capacities of 2400 MW and 5400 MW for renewable and conventional energy sources, respectively. The full electromagnetic transient (EMT) model of the test system is built on the CloudPSS platform <cit.>. Based on the synchronous generators' model, system inertia is an important parameter that affects frequency safety. The influence of operating conditions on frequency dynamics can be attributed, in part, to the variations in system inertia caused by changes in operating conditions. 
Therefore, in this section, we assume a certain level of randomness in system inertia to capture unanticipated operating conditions. The training and testing sets are generated as follows. We assume that the parameter values of inertia for each generator follow a uniform distribution, with the mean being the original manufacturer data and a deviation of 30% of the mean value to account for parameter uncertainties. We assume that u_i at bus i is a uniformly distributed random number between 0 and 1. The fault set of the system is generated by traversing N-1, N-2, and N-3 faults. The training set consists of 600 frequency trajectories, while the testing set consists of 300 frequency trajectories. Each frequency trajectory has a length of 1 min. When generating a frequency trajectory, inertia for each generator, u_i, and faults are randomly generated according to their respective distributions. Since u_i and the inertia for each generator are continuous variables, the probability of having the same u_i and inertia values for two frequency trajectories is small. Therefore, we assume that there is no intersection between the training and testing sets. §.§ Adaptability Fig.<ref> demonstrates that KLS is capable of extracting latent variables strongly correlated with the inertia rate and power imbalance from the frequency trajectory within a 300 ms time window after a fault occurs. Specifically, Latent1 and Latent2 exhibit a remarkably high correlation with the inertia rate and power imbalance, respectively, providing evidence that Koopman linear representations are capable of capturing parameter variations in the original state space system model using measurements. This result validates the adaptability of KLS to diverse operating conditions and faults. Furthermore, as the results presented in Fig.<ref> are based on the training set, it confirms that KLS is able to adapt to unanticipated operating conditions and faults. §.§ Prediction Capability To illustrate the capability of KLS to learn the dynamics of system frequency from the online measurement, the frequency measurement of last 1 min is used to fit the linear model in Eq.(<ref>) for the system with different inertia, control inputs and faults. As the benchmark of the learning algorithm, the DMD and EDMD method is implemented with 100 radial basis functions (RBF) as observables. KLS without time delay embedding (KLS-NTD, i.e., when τ=0 in Eq.(<ref>)) is also tested as a benchmark. The strong nonlinearity of dynamics makes the local linearization method hard to fit the model accurately in global horizon, and the piecewise linearization has difficulty in remaining a balance between accuracy and simplicity. Therefore, only Koopman operator related algorithms are compared. In subsequent discussions, we refer to KLS-NTD, DMD, and EDMD as the state-of-the-art algorithms (SOTAs) for brevity and clarity. Fig.<ref>(a) demonstrates the global linearization capability of the proposed methods. The pink line represents the simulated dynamic process, while the blue and green lines represent the predicted results obtained from EDMD and DMD, respectively. The orange line corresponds to the predicted results obtained from KLS-NTD. Based on the frequency sequence observed within 300ms after the generator tripping at 40s, KLS demonstrates accurate prediction of the evolving frequency for the subsequent 60 seconds under different control inputs. 
This highlights the effectiveness of the latent extractor combined with time-delayed measurements in capturing the dynamics of the system frequency. In contrast, DMD and EDMD exhibit poorer performance, which can be attributed to their limited capability in incorporating time-delay information and harnessing the powerful non-linear representation offered by deep learning techniques. The prediction accuracy of KLS and the SOTAs on the training and testing sets is illustrated in Fig. <ref>. The mean absolute error (MAE) is employed as a measure of prediction accuracy. Due to the capability of KLS to track parameter variations in the original state-space system model, the prediction accuracy is consistently high on both the training and testing sets. For frequency control problems, the frequency nadir and the steady-state value are two important indicators for assessing frequency safety. In order to demonstrate the prediction accuracy of KLS for the frequency nadir and steady-state value, we set all the inputs u in the test dataset to zero and re-simulated the true frequency trajectory. Subsequently, we computed the predicted frequency trajectory using KLS. The prediction accuracy for both indicators is shown in Fig. <ref> and Fig. <ref>. It can be observed that the MAE of the predicted frequency trajectories for 95% of the test dataset is within 0.1%. §.§ Control Effect In this subsection, we focus on investigating the impact of prediction accuracy on control effectiveness. Therefore, it is assumed that a continuous adjustment of load shedding can be achieved, and the safety margin ζ is set to zero. The effectiveness of KLS and the SOTAs is evaluated by assessing the system's frequency safety after the implementation of these control strategies. Furthermore, to demonstrate the adaptability of KLS to unanticipated operating conditions and faults, all results in this subsection are computed using the test dataset. The normalized safety metric is calculated as follows: Safety = α·(Nadir^*-Nadir_0)/(Nadir_1-Nadir_0) + β·(SSV^*-SSV_0)/(SSV_1-SSV_0). In Eq. (<ref>), when the frequency nadir is greater than Nadir_1 and the steady-state frequency is greater than SSV_1, Safety is assigned a value of 1; when the system's frequency nadir Nadir^* is less than Nadir_0 and the steady-state frequency SSV^* is less than SSV_0, Safety is assigned a value of 0. The weights for measuring the safety indicators of the nadir and the steady-state value are represented by α and β, respectively. In this paper, the values of Nadir_1 and SSV_1 are set to 49.0 Hz and 49.5 Hz, respectively, which are equal to ω_min and ω_∞min. Nadir_0 and SSV_0 are set to 48.5 Hz and 49.0 Hz, respectively. The values of α and β are both set to 0.5. The average control safety metrics of KLS, KLS-NTD, EDMD, and DMD on the test dataset are depicted in Fig. <ref>. T-tests are conducted to compare the effectiveness of the different control strategies. The symbols *, **, ***, **** represent t-test results with significance levels less than 0.1, 0.01, 0.001, and 0.0001, respectively. Among the 300 test samples, the proportions of Safety exceeding 0.9 for these methods are 97.5%, 49.3%, 41.6%, and 4%, respectively. KLS, with its superior predictive accuracy compared to the SOTAs, enhances the system's frequency safety. However, without incorporating a safety margin into the frequency constraints of the optimal control problem Eq. (<ref>), the average control safety metric of KLS fails to reach a value of 1. 
In Fig.<ref> and Fig.<ref>, approximately 2.5% of the test data exhibit prediction errors exceeding +0.05Hz. Furthermore, KLS tends to overestimate the frequency nadir and steady-state frequency for these 2.5% of the test data, resulting in relatively conservative control measures. As a result, around 2.5% of the post-control frequency trajectories fall below the safety metric threshold of 0.9. Considering the feasible load shedding amounts and the discrepancy from the optimal control obtained through Eq.(<ref>), further deterioration of KLS's Safety would occur. This observation underscores the necessity of incorporating a safety margin in the frequency constraints of the optimal control problem. §.§ Safety Margin This subsection analyzes the control effectiveness after the introduction of a safety margin in Subsection <ref>. The evaluation of control measures is based on two indicators: the system's frequency safety and the control cost. Increasing the amount of load shedding typically results in a higher system frequency. If the system frequency remains within the safe range (i.e., Safety = 1), the greater the deviation of the nadir and the steady-state value of the system's frequency from the specified hard limits, the higher the associated control cost becomes. To tackle this issue, a normalized economic metric Economy, ranging from 0 to 1, is introduced as an indicator to measure the control cost. Economy=min(Nadir^*-Nadir_0/Nadir_1-Nadir_0,SSV^*-SSV_0/SSV_1-SSV_0) where Nadir_1=49.0Hz and SSV_1=49.5Hz indicate that when the nadir is equal to 49.0Hz and the steady-state value is equal to 49.5Hz, the Economy of load shedding measures is assigned a value of 1. Similarly, Nadir_0=49.5Hz and SSV_0=50.0Hz indicate that when the nadir is greater than 49.5Hz and the steady-state value is greater than 50.0Hz, the Economy of load shedding measures is assigned a value of 0. Fig. <ref> presents the improvement in Safety when incorporating the safety margin calculated using Eq.(<ref>) at different values of d (where d takes values of 10MW, 25MW, and 50MW). It can be observed that the introduction of the safety margin enhances the system's frequency safety for different load levels associated with a feeder. Moreover, as d increases, the improvement in Safety becomes more significant. For the sake of clarity, we will refer to the control strategy in Eq. (<ref>) as the ceiled KLS (KLS-C). The Economy of KLS-C and KLS is demonstrated in Fig. <ref>. It can be observed that by avoiding the rounding of the optimal load shedding amount to the nearest larger value, KLS achieves a reduced amount of load shedding while ensuring the safety of the system frequency. Fig. <ref> presents the safety margin calculated in Eq. (<ref>). Firstly, under different values of d (10MW, 25MW, and 50MW), we simulated the Safety and Economy index of KLS for various values of ζ ranging from 0 to 0.5. In Fig. <ref>, the green region represents the system satisfying Safety=1. The boundary between the green and white regions corresponds to an ζ that ensures system safety, while reaching a minimum load shedding amount. Next, under different values of d (10MW, 25MW, and 50MW), ζ was calculated using Eq. (<ref>). The calculated values of ζ obtained from Eq. (<ref>) are 0.152, 0.21, and 0.305, respectively. It can be observed that the ζs computed by the right hand side in (<ref>) are slightly larger than the ideal values obtained by iterating over different ζ values. 
This discrepancy arises from the difficulty of satisfying the equality conditions of the inequality in Eq. (<ref>). However, the values of ζ obtained by Eq. (<ref>) ensure the system's frequency safety. §.§ Comparison with the traditional UFLS scheme To determine the load shedding amount for the conventional UFLS, we set the generator inertia rate to the value specified by the original manufacturer. Additionally, we introduce a G4 tripping fault to trigger the system dynamics, resulting in a minimum system frequency of 48.6 Hz. By employing the conventional load shedding approach, we conducted tests with different proportions of load shedding to determine an optimal value. This optimal value ensures that both the frequency nadir and the steady-state value are maintained within the safety range, while minimizing the amount of load shedding. Consequently, this determined value is considered as the preassigned load shedding proportion when the frequency reaches a preset threshold. Subsequently, we evaluated the system's safety in the testing set by assessing whether the frequency, after shedding the preassigned proportion of load, remained within the safety range. These results were compared with the outcomes obtained through KLS, as depicted in Fig. <ref>. It is evident that shedding pre-assigned loads alone cannot ensure frequency stability under various operating conditions and power imbalances. However, KLS demonstrates its capability to adapt to such intricate variations in the inertia rate and the power imbalance. § CONCLUSION In this paper, a novel data-enabled predictive control algorithm, KLS, which adapts to diverse operating conditions and under frequency events, is introduced to achieve the optimal one-shot load shedding for power system frequency safety. To address approximation inaccuracies and the restriction of load shedding to discrete values, a safety margin tuning scheme is incorporated within the framework of KLS. Simulation results demonstrate that KLS effectively captures latent variables strongly correlated with the inertia rate and the power imbalance within a 300 ms time window after a fault occurs. The algorithm exhibits high prediction accuracy on both the training and testing sets, indicating its generalizability beyond the training set. Furthermore, the proposed safety margin tuning scheme enhances the system's frequency safety. § NETWORK ARCHITECTURE AND TRAINING §.§ Network architecture and loss function The network architecture is depicted in Fig. <ref>. The network is composed of three modules: the ResConv block, which acts as the latent extractor in this paper, as well as the Temporal Average and Fully Connected layers. The ResConv block utilizes convolutional layers to learn and identify the temporal patterns in the time series data. The inclusion of residual connections plays a vital role in enabling the stacking of multiple convolutional layers to effectively extract parameter variations in a state-space model from the time series data. The Temporal Average and Fully Connected layers are responsible for weighted averaging of time series features and feature extraction, respectively. The ResConv block comprises three convolutional layers with kernel sizes of 5, 3, and 3, respectively. A residual connection is incorporated by using a convolutional layer followed by a batch normalization layer. The Fully Connected module consists of three fully connected networks, employing ReLU activation functions. 
The latent extractor extracts uncertain model parameters from the time-delay embedding of partially measured information. The Temporal Average and Fully Connected layers take the concatenation of the extracted m with the observable variables x and y as input and produce φ(ω_t-τ:t, y_t-τ:t) as output. The loss function described in Eq. (<ref>) ensures that g_t satisfies the requirements of global linearization. Fig. <ref> illustrates how to utilize the output vector of the latent extractor to obtain information strongly correlated with the variations in the system parameters. The "Filter" represents the selection of output variables in the latent extractor that are most relevant to the inertia rate and the power imbalance. The correlation coefficients between the selected variables and the system inertia rate and power imbalance are calculated; the results are given in Fig. <ref>. In general, our data-driven KLS framework is shown in Fig. <ref>. The loss function for the training of φ is given as follows: ℒ=∑_t=t'^T ‖ĝ_t-g_t‖_2. For each data point in the training set, ĝ_t represents the predicted value at time t in the Koopman linear system. The loss function measures the discrepancy between the sequence of predicted values {ĝ_t, t=t',t'+1,...,T} and the true values at each time step {g_t, t=t',t'+1,...,T}. We use the L_2 norm to quantify the discrepancy between the predicted and true sequences.
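As an illustrative sketch (not the exact implementation), the following PyTorch code mirrors the architecture described above — a ResConv latent extractor with kernel sizes 5, 3, 3 and a residual connection, followed by temporal averaging and three fully connected layers — together with a multi-step L2 prediction loss. Channel widths and the latent dimension are assumptions.

```python
import torch
import torch.nn as nn

class ResConvExtractor(nn.Module):
    """Latent extractor: three 1-D conv layers (kernel sizes 5, 3, 3) with a
    residual connection, temporal averaging, and a three-layer FC head."""
    def __init__(self, in_ch, hidden=32, latent=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_ch, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
        )
        self.shortcut = nn.Sequential(nn.Conv1d(in_ch, hidden, kernel_size=1),
                                      nn.BatchNorm1d(hidden))
        self.fc = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                nn.Linear(hidden, hidden), nn.ReLU(),
                                nn.Linear(hidden, latent))

    def forward(self, seq):                           # seq: (batch, in_ch, time)
        feat = self.conv(seq) + self.shortcut(seq)    # residual connection
        feat = feat.mean(dim=-1)                      # temporal average
        return self.fc(feat)                          # latent observables

def prediction_loss(A, B, g_init, u, g_true):
    """Multi-step L2 loss: roll the Koopman linear model forward from g_init
    (torch tensors) and compare with the lifted true trajectory g_true (T, dim)."""
    g_hat, losses = g_init, []
    for t in range(g_true.shape[0]):
        g_hat = g_hat @ A.T + u @ B.T                 # g_{t+1} = A g_t + B u
        losses.append(torch.linalg.norm(g_hat - g_true[t]))
    return torch.stack(losses).sum()
```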
http://arxiv.org/abs/2307.00233v1
20230701054523
Hierarchical Federated Learning Incentivization for Gas Usage Estimation
[ "Has Sun", "Xiaoli Tang", "Chengyi Yang", "Zhenpeng Yu", "Xiuli Wang", "Qijie Ding", "Zengxiang Li", "Han Yu" ]
cs.LG
[ "cs.LG", "cs.IT", "math.IT" ]
Accurately estimating gas usage is essential for the efficient functioning of gas distribution networks and saving operational costs. Traditional methods rely on centralized data processing, which poses privacy risks. Federated learning (FL) offers a solution to this problem by enabling local data processing on each participant, such as gas companies and heating stations. However, local training and communication overhead may discourage gas companies and heating stations from actively participating in the FL training process. To address this challenge, we propose a Hierarchical FL Incentive Mechanism for Gas Usage Estimation, which has been test-bedded in the ENN Group, one of the leading players in the natural gas and green energy industry. It is designed to support horizontal FL among gas companies, and vertical FL among each gas company and heating station within a hierarchical FL ecosystem, rewarding participants based on their contributions to FL. In addition, a hierarchical FL model aggregation approach is also proposed to improve the gas usage estimation performance by aggregating models at different levels of the hierarchy. The incentive scheme employs a multi-dimensional contribution-aware reward distribution function that combines the evaluation of data quality and model contribution to incentivize both gas companies and heating stations within their jurisdiction while maintaining fairness. Results of extensive experiments validate the effectiveness of the proposed mechanism. § INTRODUCTION *These authors contributed equally to this work. Gas usage estimation is of paramount importance for energy companies like ENN Group[https://www.enn.cn/] as it enables them to accurately forecast and plan their gas purchase and distribution requirements. Accurate gas usage estimation ensures that the energy company can efficiently manage its gas distribution networks, avoid shortages or surpluses of gas supply, and ultimately minimize operational costs. 
Moreover, gas usage estimation is a critical component of the energy company's efforts to reduce its carbon footprint and meet its sustainability goals <cit.>. The success of traditional methods for gas usage estimation is heavily reliant on large volumes of high-quality data. However, data from a single company may not be sufficient to train models effectively since data are often collected and owned by different organizations within a given field. Collaborative model training <cit.> has been identified as a valuable technique to enhance the quality of ML solutions by leveraging the collective data resources of multiple organizations. Federated Learning (FL) is an important category of collaborative model training framework that has gained popularity due to its ability to protect data privacy and user confidentiality <cit.>. FL operates by having data owners (referred to as FL clients) train a local model using their private data samples, after which they submit model parameters (not raw training data) to a remote server. Once sufficient parameters from local models have been collected, a global model is aggregated and distributed to the data owners for the next round of local training. This iterative process continues until the global model meets the predefined accuracy requirements. Through this training process, FL significantly enhances the data privacy of data owners since raw data is not uploaded. Despite its significant benefits, FL faces several critical challenges that hinder its further development and broader application in real-world industries <cit.>. Firstly, the data owners or clients typically consume their own resources, such as computing and communication resources, for local training. As a result, self-interested clients may not be willing to contribute their resources to FL model training unless they receive sufficient economic compensation. Secondly, some unreliable clients may engage in undesirable behavior, which can negatively impact the performance of the global model for an FL task. In particular, a client may maliciously perturb its data and send low-quality updates to mislead the global model parameters, resulting in the failure of collaborative learning. These factors have given rise to FL incentive mechanisms <cit.>, which can be defined as the process of identifying the optimal payment and organizational structure for the federation to attain desired operational objectives. In this paper, we propose a Hierarchical Federated Learning Incentive Mechanism for Gas Usage Estimation. It is based on a hierarchical federated learning ecosystem composed of one horizontal FL process and several vertical FL processes. The horizontal FL enables the gas companies of ENN to leverage the historical gas supply information and weather data owned by others to make accurate gas usage estimations. On the other hand, the vertical FL is designed to facilitate collaboration between each gas company and the heating stations within its area of responsibility, taking into consideration that the data owned by each gas company and its associated heating stations are vertically partitioned. To incentivize active participation and ensure fairness among gas companies and heating stations, we incorporate a multi-dimensional contribution-aware reward distribution function that considers both data quality and model contributions. This hierarchical incentive scheme has proven effective in motivating participation and improving overall performance. 
The proposed mechanism has been successfully implemented in the ENN Group in one province of China, and has allowed two gas companies in separate cities to improve their gas usage forecasting accuracy. It has been successful in motivating gas companies and heating stations to actively participate in FL training and commit high-quality data, resulting in increased revenue for these entities. To the best of our knowledge, it is the first successful hierarchical federated learning incentive approach for the energy industry. § RELATED WORK In federated learning, incentive mechanisms typically involve addressing sub-problems such as contribution evaluation, node selection, and payment allocation, as highlighted in <cit.>. Of these, contribution evaluation is particularly relevant to our work, and we provide a brief survey of existing literature in this area. Existing approaches for contribution evaluation in federated learning can be broadly divided into four categories: self-reporting, individual performance, utility game, and Shapley Value (SV)-based methods <cit.>. Self-reporting approaches <cit.> measure participants' contributions based on their self-reported information regarding their sensitive local data, such as data quantity and quality and committed computational and communication resources. For example, <cit.> proposes an incentive mechanism to compensate participants for their contributions and costs for joining the federation, measured based on self-reported data quantity and quality. However, this approach suffers from the possibility of dishonest reporting, where participants may overstate their contribution to receive a higher reward. As such, this approach is not ideal for large-scale and complex federated learning scenarios. Individual performance-based approaches <cit.> assign a contribution value to each participant based on their individual performance on specific tasks. For example, <cit.> measures individual contributions based on the similarity between local model updates and the aggregated FL model. While these approaches have been successful, they do not consider the contributions of other participants, which may lead to unfair reward distribution. Utility game-based approaches rely on the changes in coalition utility when a participant joins the federation <cit.>. In this category, there are three profit-sharing principles: egalitarian, marginal gain-based, and marginal loss-based. Fair value game, labor union, and Shapley Value-based game are the most common profit-sharing schemes. These methods may face challenges in designing a utility function that accurately reflects the contribution of each participant. SV-based approaches have been extensively researched in recent years due to their ability to calculate a participant's contribution fairly <cit.>. However, the original SV calculation can be computationally expensive due to its exponential nature. To improve efficiency, researchers have proposed various techniques such as random sampling Monte-Carlo (MC) estimation <cit.> and the use of the Fisher Information Matrix <cit.>. These approaches reduce the number of model trainings needed to calculate the SV, which would otherwise be impractical for large-scale FL applications. § THE PROPOSED APPROACH In this section, we give a detailed description of the proposed mechanism, which is based on a hierarchical federated learning ecosystem and fairly distributes rewards to participants in order to effectively motivate them to actively join the FL training, improving the gas usage estimation performance. 
§.§ Federated Learning under ENN Group's gas supply chain comprises two main participants: gas companies and heating stations. Gas companies purchase gas from external parties and then distribute it to heating stations within their jurisdiction. However, it is challenging for both gas companies and heating stations to make precise predictions based on their own data. Gas companies face the problem of data sparsity, which makes it difficult for them to train accurate gas demand prediction models using only their own data samples. Meanwhile, heating stations rely on heating strategists to develop daily heating plans and estimate gas usage based on manually-generated weather forecasts. However, the subjectivity involved in manually formulating strategies and the lack of high-precision weather forecast data available to heating stations can adversely affect the performance of these plans. In this sense, our solution involves two federated learning ecosystems: a horizontal one (HFL) shown in fig. <ref> among gas companies, and several vertical ones (VFL) shown in fig. <ref> among each gas company and the heating stations within the area that it is responsible for. In the client-server HFL system, suppose there are in general n^H clients who can participate in FL-based gas usage estimation model training. Each client i owns a local dataset D_i^H ={(𝐱^H_j, y^H_j)}_j=1^|D^H_i|. 𝐱^H_j denotes the j-th local training sample. y^H_j denotes the corresponding ground truth label of 𝐱^H_j. |D^H_i| denotes the total number of data samples in D^H_i. The aim of HFL is to solve the following optimization problem under the aforementioned setting: min_θ^H∑_i=1^n^H|D^H_i|/|D^H|ℒ_i^H(θ^H;D^H_i), where θ^H denotes the parameters of the model. |D^H| = ∑_i =1^n^H |D^H_i| denotes the total number of samples. ℒ_i^H(θ^H;D^H_i) = 1/|D^H_i|∑_j=1^|D^H_i|l(θ^H;𝐱^H_j, y^H_j) denotes the local loss of a given client i. Different from HFL, in which data are partitioned by sample, VFL assumes data are partitioned by feature. Let n^V denote the number of participants in the VFL model training. A dataset D^V = {𝐱^V_j, y^V_j}_j=1^|D^V| is partitioned across the n^V participants, and each participant is associated with a unique set of features. For example, the i-th feature block 𝐱_j,i of the j-th sample 𝐱_j = [𝐱_j,1^T, ⋯, 𝐱_j,n^V^T]^T is maintained by the i-th participant. Active Participant. The active participant refers to the participant that holds not only data features but also labels of data samples. The active participant is the dominant party during VFL training since machine learning requires labels to derive the loss function <cit.>. Passive Participant. The passive participant is defined as the participant that only provides extra features during VFL training but does not hold labels of data samples <cit.>. In particular, suppose the first participant is the active one, i.e., the labels are held by this participant. Following <cit.>, each participant i trains the model parameter θ^V with its own local raw features 𝐱_j,i with the aim of minimizing the loss function as follows: min_θ^V1/|D^V|∑_j=1^|D^V|ℒ^V(θ^V;𝐱^V_j, y^V_j). §.§ Multi-dimensional Contribution-aware Reward Distribution As shown in fig. <ref> and <ref>, the incentive mechanism supports both the HFL among gas companies and the VFL among each gas company and the heating stations within its jurisdiction. 
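To make the HFL objective above concrete, the following minimal Python sketch (an illustration only, not the production implementation used at ENN) evaluates the sample-weighted global loss ∑_i (|D^H_i|/|D^H|) ℒ_i^H for a set of clients; the linear gas-usage model, the function names and the toy data are assumptions introduced purely for illustration.

```python
import numpy as np

def local_loss(theta, X, y):
    # Local loss L_i(theta; D_i): mean of per-sample losses l(theta; x_j, y_j),
    # here a simple squared error for a linear gas-usage model (illustrative only).
    return np.mean((X @ theta - y) ** 2)

def hfl_objective(theta, client_datasets):
    # Sample-weighted HFL objective: sum_i (|D_i| / |D|) * L_i(theta; D_i)
    total = sum(len(y) for _, y in client_datasets)
    return sum(len(y) / total * local_loss(theta, X, y) for X, y in client_datasets)

# Toy usage with two hypothetical gas-company clients
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)),
           (rng.normal(size=(80, 3)), rng.normal(size=80))]
print(hfl_objective(np.zeros(3), clients))
```

The same |D^H_i|/|D^H| weighting is what a FedAvg-style server would apply when aggregating the locally trained parameters in each round.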
The incentive mechanism is composed of four parts: data quality calculation, model contribution calculation, revenue allocation ratio calculation, and per-participant reward calculation. In the following, we present how each part works in detail. §.§.§ Data Quality As depicted in figures <ref> and <ref>, the data quality evaluation model is utilized to preprocess the data of each participant before initiating the FL model training process. This model offers a data-centric approach to assess the quality of raw data, facilitating a precise estimation of the data quality. Specifically, in the case of HFL, historical gas usage and weather conditions significantly impact the current gas usage. Thus, we evaluate the data quality of each participant by analyzing the correlation between their historical gas usage and weather data and the actual gas usage data. If the correlation is strong, the quality of the historical gas usage and weather data is high; otherwise, it is considered unsatisfactory <cit.>. Let 𝐗_i denote the historical gas usage and weather data of participant i for a continuous period of T days, and let 𝐘_i represent the corresponding actual gas usage data. The correlation between 𝐗_i and 𝐘_i can be formulated as: corr_score_i^H = Cov(𝐗_i, 𝐘_i)/√(Var(𝐗_i)Var(𝐘_i)), where the superscript H means that the equation is applicable to the HFL setting. Cov is the covariance function <cit.> and Var is the variance function <cit.>. Apart from the correlation, data quantity also reflects the quality of a participant's data. Therefore, in the HFL setting, we evaluate the quality of each participant's data from these two perspectives. Specifically, the data quantity value of participant i is formulated as: quant_score_i^H = |D_i^H|/|D^H|, where |D_i^H| is the total number of samples of participant i and |D^H| denotes the total number of samples in the HFL ecosystem. Then, we combine the data correlation score and the data quantity score to get the final evaluation result of the data quality of participant i in the HFL setting as follows: quality_i^H = corr_score_i^H ×quant_score_i^H. As mentioned previously, unlike the HFL approach where data is partitioned by samples, in the VFL setting data is partitioned by features. Thus, we evaluate the data quality of participant i in VFL solely from the perspective of data correlation, which is defined as follows: quality_i^V = corr_score_i^V, where the definition of corr_score_i^V is the same as that of corr_score_i^H. It is worth noting that, under the VFL setting, 𝐗_i represents either the historical heating strategies of heating stations or the weather information of the gas company, but not both. §.§.§ Model Contribution Following existing incentive mechanisms in FL settings <cit.>, we also evaluate data from the perspective of model contribution. Specifically, to accurately assess the individual contribution of each participant in the federated learning process, each participant trains a local model exclusively on their own local data. These models are referred to as the Local Model, as illustrated in figures <ref> and <ref>. Then, each participant predicts the gas usage for T consecutive days using its local model and compares the results with the actual gas usage to calculate the Symmetric Mean Absolute Percentage Error (SMAPE) <cit.>. SMAPE is a commonly used evaluation metric in forecasting and time series analysis. It is used to measure the accuracy of a model's predictions by comparing the actual and predicted values of a time series. 
The formula for SMAPE is as follows: SMAPE = 1/T∑_t=1^T|F_t - A_t|/(|F_t| + |A_t|)/2, where T is the number of time periods, F_t is the forecasted value at time t, and A_t is the actual value at time t. The SMAPE metric measures the difference between the actual and predicted values, normalized by the average of the actual and predicted values. Unlike other percentage error metrics, SMAPE takes into account both the magnitude and direction of the error. Additionally, SMAPE is symmetric, meaning that overprediction and underprediction are weighted equally. The resulting SMAPE score is expressed as a decimal, with lower values indicating better accuracy. Due to the wide range of values (0% to 200%) produced by the SMAPE metric, we have modified it to ensure that its results always fall within the range of 0 to 100%, for ease of subsequent calculations. We apply this modified variant of SMAPE to calculate the prediction error of each participant i's local model as: SMAPE_new^local_i = 1/T∑_t=1^T|y_i,t^local - ŷ_i,t|/(|y_i,t^local| + |ŷ_i,t|), where ŷ_i,t represents the actual gas usage at time t of participant i, and y_i,t^local represents the prediction generated by the local model of participant i at the same time t. Similarly, we get the prediction error of the global model on the gas usage of participant i: SMAPE_new^global_i = 1/T∑_t=1^T|y_i,t^global - ŷ_i,t|/(|y_i,t^global| + |ŷ_i,t|), where y_i,t^global is the predicted gas usage of participant i at time t generated by the global model. Then, we get the accuracy of the local model and the global model from the perspective of participant i as acc_i^local = 1 - SMAPE_new^local_i, acc_i^global = 1 - SMAPE_new^global_i. Based on acc_i^local and acc_i^global, we can get the increment for participant i: increment_i = acc_i^global - acc_i^local. Here, increment_i represents the benefit that participant i gains from the contributions of other participants by joining the FL training. Finally, we calculate the contribution of participant j to the FL ecosystem as contribution_j = ∑_i ≠ jacc_i^global - acc_i^local/n-1, where n is the number of participants in the global model training. §.§.§ Revenue Allocation Ratio To scale the data to a similar range and reduce bias, we normalize the quality and contribution of each participant i as follows: quality_i^norm = quality_i/∑_j=1^n quality_j, contribution_i^norm = contribution_i/∑_j=1^n contribution_j. Here, n is the number of participants. §.§.§ Reward for each participant Let R_data and R_model represent the total reward for data quality and model contribution, respectively. Then, the data quality reward and model contribution reward for participant i are calculated as: r_i^quality = R_data×quality_i^norm, r_i^contribution = R_model×contribution_i^norm. § EXPERIMENTAL EVALUATION In this section, we present results from testbedding the proposed mechanism in ENN Group energy plants across two cities. In particular, as the key contributions of this work are the hierarchical federated learning incentive mechanism and the multi-dimensional contribution-aware reward distribution mechanism, we conduct experiments to answer the following research questions: * RQ 1: Is the proposed incentive mechanism useful for motivating participants to actively commit to the FL ecosystem? * RQ 2: Can the proposed method comprehensively and accurately measure the contributions of all participants and distribute rewards fairly based on their individual merits? 
* RQ 3: Is it possible for the proposed multi-dimensional contribution-aware reward distribution mechanism to effectively evaluate the quality of data provided by participants? In what follows, we answer the above research questions one by one. §.§ Results and Discussion for RQ 1 Table <ref> presents a comparison of the data quantity, data quality value, and model contribution value for one gas company in the ENN Group before and after adopting the proposed mechanism. It is observed that the data quantity of the gas company decreases, while the data quality value increases significantly. This is because the gas company actively removed low-quality data when committing to the FL ecosystem after adopting the proposed mechanism. This indicates that the proposed mechanism effectively motivates participants to provide high-quality data. Furthermore, as shown in Table <ref>, the model contribution value of the gas company improves by 20.22%. This can be attributed to the gas company being motivated to provide more high-quality data to the FL ecosystem, thereby contributing more to the improvement of the entire ecosystem. It is worth noting that the gas company receives a total reward increase after adopting the proposed mechanism. However, due to privacy concerns, we cannot disclose the exact results. §.§ Results and Discussion for RQ 2 The data quantity, data quality value, and the reward allocation ratio based only on the data quality value of gas companies A and B in the HFL ecosystem are shown in Table <ref>. It is easy to see that the gas company that commits more data has a higher data quality value and receives a higher reward allocation ratio, which means a higher reward. In this sense, the proposed mechanism can effectively evaluate the contribution of each participant and fairly distribute rewards accordingly. Table <ref> compares the two gas companies in terms of data quantity, model contribution value, and reward allocation ratio, calculated using Eq. (<ref>). Gas company A's reward allocation ratio increases from 0.0516, as shown in Table <ref>, to 0.1844 in Table <ref>. This increase is attributed to the high quality of the data provided by gas company A. The high-quality data improves gas company A's model contribution value, which, in turn, leads to a higher final reward allocation ratio. This indicates that even with a smaller amount of data, participants can still receive higher rewards as long as the data quality is high. §.§ Results and Discussion for RQ 3 To assess the effectiveness of the proposed mechanism in motivating heating stations of the VFL ecosystem to truthfully report their heating strategies and provide high-quality data, we compared the data quality value and model contribution value generated by randomly reported strategies and truthfully reported strategies. The results are presented in Tables <ref> and <ref>. Our analysis revealed that, regardless of whether they are based on historical or real-time data, the data quality values for truthfully reported strategies are much higher than those for strategies generated randomly based on experience. Additionally, the model contribution value for truthfully reported strategies is significantly higher than that for randomly generated strategies during both the training and inference phases. These results suggest that the proposed mechanism is effective in motivating participants to truthfully commit their data. In a nutshell, the proposed mechanism has been effective in motivating participants to contribute data and participate in federated learning, leading to the creation of higher-accuracy models and significant cost savings. 
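For reference, the multi-dimensional reward computation evaluated in these experiments (modified SMAPE, per-participant accuracy increments, contributions, and the split of the two reward pools) can be sketched as below. This is a minimal illustration of the formulas from the previous section under simplifying assumptions; the function names and toy numbers are hypothetical and not the production code deployed at ENN.

```python
import numpy as np

def smape_new(y_pred, y_true):
    # Modified SMAPE in [0, 1]: (1/T) * sum_t |F_t - A_t| / (|F_t| + |A_t|)
    return np.mean(np.abs(y_pred - y_true) / (np.abs(y_pred) + np.abs(y_true)))

def contributions(local_preds, global_preds, actuals):
    # acc_i = 1 - SMAPE_new; increment_i = acc_i^global - acc_i^local;
    # contribution_j = sum_{i != j} increment_i / (n - 1)
    acc_local = np.array([1 - smape_new(p, a) for p, a in zip(local_preds, actuals)])
    acc_global = np.array([1 - smape_new(p, a) for p, a in zip(global_preds, actuals)])
    inc = acc_global - acc_local
    n = len(inc)
    return np.array([(inc.sum() - inc[j]) / (n - 1) for j in range(n)])

def rewards(quality, contribution, R_data, R_model):
    # Normalise each dimension, then split the two reward pools proportionally
    return R_data * quality / quality.sum(), R_model * contribution / contribution.sum()

# Toy usage: three hypothetical participants, T = 4 days
actual = [np.array([10.0, 12, 11, 13])] * 3
local = [np.array([9.0, 13, 10, 12]), np.array([8.0, 10, 9, 11]), np.array([11.0, 12, 12, 14])]
glob = [np.array([10.0, 12, 11, 12])] * 3
contrib = contributions(local, glob, actual)
print(rewards(np.array([0.6, 0.3, 0.1]), contrib, R_data=100.0, R_model=100.0))
```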
§ CONCLUSIONS AND FUTURE WORK In this paper, we propose a Hierarchical FL Incentive Mechanism for Gas Usage Estimation, which we implemented in the ENN Group, a leading player in the natural gas and green energy industry. Our proposed mechanism involves a hierarchical FL ecosystem that includes horizontal FL among gas companies and vertical FL among each gas company and the heating stations within its jurisdiction. We also developed a hierarchical incentive scheme that rewards participants based on their contributions to FL. The hierarchical aggregation approach enhances the gas usage estimation performance by aggregating models at different levels of the hierarchy. The incentive scheme employs a multi-dimensional contribution-aware reward distribution function that evaluates both data quality and model contribution to incentivize both gas companies and the heating stations within their jurisdiction while ensuring fairness. Extensive experimental results validate the effectiveness of our proposed mechanism. In the future, we will comprehensively evaluate the proposed mechanism on a larger number of industrial datasets and from more perspectives. In addition, we plan to improve the robustness of the proposed incentive mechanism against malicious participants <cit.> and further enhance fairness <cit.>.
http://arxiv.org/abs/2307.01669v1
20230704120432
Electromagnetic gyrokinetic instabilities in the Spherical Tokamak for Energy Production (STEP) part II: transport and turbulence
[ "Maurizio Giacomin", "Daniel Kennedy", "Francis J Casson", "Ajay C. J.", "David Dickinson", "Bhavin S. Patel", "Colin M. Roach" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
Electromagnetic gyrokinetic instabilities in STEP part II]Electromagnetic gyrokinetic instabilities in the Spherical Tokamak for Energy Production (STEP) part II: transport and turbulence ^1York Plasma Institute, University of York, York, YO10 5DD, United Kingdom ^2Culham Centre for Fusion Energy, Abingdon, OX14 3DB, United Kingdom ^3Centre for Fusion, Space and Astrophysics, Department of Physics, University of Warwick, Coventry, CV4 7AL, United Kingdom [email protected] In this work, we present the results of first-of-their-kind nonlinear local gyrokinetic simulations of electromagnetic turbulence at mid-radius in the burning plasma phase of the conceptual high-β, reactor-scale, tight-aspect-ratio tokamak STEP (Spherical Tokamak for Energy Production). A prior linear analysis in D.Kennedy et al. submitted to Nucl. Fusion <cit.> reveals the presence of unstable hybrid kinetic ballooning modes and subdominant microtearing modes at binormal scales approaching the ion-Larmor radius. Local nonlinear gyrokinetic simulations, using three different codes, are in qualitative and quantitative agreement and suggest that hybrid kinetic ballooning modes drive very large turbulent transport in the absence of equilibrium flow shear. The heat flux rises to values that exceed the available heating power by orders of magnitude and the turbulent eddies are highly extended radially so that they may not be well described by the local gyrokinetic model. The saturated transport fluxes are extremely sensitive to equilibrium flow shear, and diamagnetic levels of flow shear can suppress the fluxes to more reasonable values on the chosen surface. Given this sensitivity there is a large uncertainty in the saturated fluxes. The possible transport impact of the subdominant microtearing modes is also analysed in isolation by artificially and unphysically removing compressional magnetic perturbations from nonlinear calculations, to suppress the dominant hybrid kinetic ballooning mode. The microtearing heat flux is found to saturate at negligible values, though we cannot exclude the possibility that microtearing turbulence may be more transport relevant in other regions of parameter space. § INTRODUCTION Understanding and predicting turbulence in the core of next-generation spherical tokamaks (STs) is critical for the optimisation of their performance. While there has been progress over the past two decades, previous attempts to predict turbulent transport from local nonlinear electromagnetic gyrokinetic (GK) simulations for conceptual ST power plant equilibria have found large runaway transport fluxes dominated by magnetic flutter at low wavenumber, with heat fluxes orders of magnitude greater than the available heating power <cit.>. By leveraging the properties of STs, the UK STEP programme aims to generate net electric power P_el > 100 MW from fusion <cit.>, by developing a compact prototype power plant based on the ST. The first phase of this ambitious programme is to provide a conceptual design of a STEP prototype plant and a reference equilibrium for the preferred flat-top operating point. In order to be economically competitive, ST power plants such as STEP require a high ratio of thermal pressure to magnetic pressure, β, in the tokamak core. At high β the turbulence is expected to become more electromagnetic in nature, and influenced by unstable kinetic ballooning modes (KBMs) and microtearing modes (MTMs), as frequently reported for STs (see <cit.> and references therein). 
Whilst GK simulations have thus far proven to be a reliable tool for modelling turbulent transport in predominantly electrostatic regimes in conventional aspect ratio tokamaks at low β, obtaining saturated nonlinear simulations of plasmas at higher β with unstable KBMs and MTMs is computationally and conceptually more challenging. A runaway transition to very large heat flux can occur in local nonlinear simulations even at β values that are smaller than the critical threshold for the onset of the KBM instability, when the dominant instability is an ion temperature gradient (ITG) or a trapped electron mode (TEM) <cit.>. It remains an open question as to whether these runaway transitions are robustly physical, or a result of important physics being missing in the local GK model. Various plasma reference scenarios for STEP have been developed by using the integrated modelling suite JINTRAC <cit.> using the JETTO transport module, and assuming a Bohm-gyro-Bohm model <cit.> that has been recalibrated to have dominant electron heat transport and H_98 = 1.3 in STEP conditions <cit.>. Given the likely more significant role of electromagnetic instabilities in the core of high-β STs with respect to conventional tokamaks, the transport and confinement assumptions used to develop the STEP scenarios need to be assessed by means of first-principles GK simulations, which motivates the present work. Among the different plasma reference scenarios, we focus here on STEP-EC-HD-v5 [SimDB UUID: 2bb77572-d832-11ec-b2e3-679f5f37cafe, Alias: smars/jetto/step/88888/apr2922/seq-1](hereinafter STEP-EC-HD), designed to deliver a fusion power P_fus = 1.8 GW. The global parameters for this plasma scenario are reported in <cit.>. A contour plot of the poloidal magnetic flux Ψ in STEP-EC-HD is shown in Figure <ref>, alongside radial profiles of the three plasma species (electron, deuterium and tritium) included in the following gyrokinetic analysis, which is carried out at the q=3.5 surface (where the normalised poloidal flux is Ψ_n=0.49), which is highlighted in red in Figure <ref>. A Miller parameterisation <cit.> was used to model the local plasma equilibrium, and the shaping parameters were fitted to the chosen surface using <cit.>, a python library developed to facilitate pre- and post-processing of gyrokinetic analysis using a range of different GK codes. was also used throughout to facilitate the conversion of input files between the different local GK codes used in this work. Table <ref> provides the values of various local equilibrium quantities at the q=3.5 (Ψ_n=0.49) surface: magnetic shear, ŝ; normalised minor radius, ρ/a; elongation and its radial derivative, κ and κ'; triangularity and its radial derivative, δ and δ'; radial derivative of the Shafranov shift, Δ'; temperature T and density n for electrons, deuterium and tritium, and their normalised inverse gradient scale lengths. Whilst this analysis is restricted to Ψ_n=0.49, we expect results and conclusions to be relevant across a range of surfaces that share qualitatively similar linear microstability properties in this STEP reference plasma <cit.>. The local linear GK analysis of the q=3.5 surface (Ψ_n=0.49) in STEP-EC-HD is reported in a companion article <cit.>. This analysis (see in particular Figures 19 and 20 of <cit.>) shows the presence of a dominant ion Larmor radius scale hybrid KBM instability that couples to a trapped electron mode (TEM) and to an ion temperature gradient (ITG) instability at low β. 
A subdominant MTM was found to be unstable on a subset of these binormal scales. No unstable microinstabilities are observed at the electron Larmor radius scale. The linear growth rate and mode frequency are shown in Figure <ref> as functions of the binormal wave-number k_yρ_s = nρ_*dρ/dΨ_n (where n is the toroidal mode number, ρ=r/a is the normalised minor radius, ρ_*=ρ_s/a, with ρ_s = c_s/Ω_D the deuterium Larmor radius, c_s=√(T_e/m_D) the deuterium sound speed, Ω_D=eB_0/m_D the deuterium cyclotron frequency, and B_0 the vacuum toroidal magnetic field at the centre of the chosen flux surface). Reference <cit.> also points out the critical role played by compressional magnetic fluctuations (δ B_∥). If δ B_∥ is artificially excluded from calculations, the hybrid KBM is entirely stabilised while the linear properties of the MTM are unaffected. Here we model the turbulent transport that might be expected to be driven by these two classes of mode. The main contributions of this work are: (a) to provide a detailed study of turbulent transport using local nonlinear gyrokinetic simulations in the core region of a STEP flat-top operating point; (b) to assess the compatibility of the predicted transport with the anticipated sources in STEP; (c) to explore different strategies for reducing transport in the conceptual STEP design; (d) to compare results using three local GK codes (simulations are performed using CGYRO <cit.> (commit ), GENE <cit.> (commit ), and GS2 <cit.> (commit )) in order to build confidence in transport predictions in this novel regime. In all the simulations, we retain three particle species (electrons, deuterium and tritium) as well as electromagnetic effects. Simulations are performed to separately study the hybrid KBM-driven turbulence (by retaining both perpendicular and parallel magnetic field fluctuations) and MTM-driven turbulence (where hybrid KBMs are artificially suppressed through the neglect of δ B_∥). In all cases we use full expressions for the ∇ B and curvature drifts without approximation (i.e. we retain the ∇ p contribution to the curvature drift). Collisions are modelled using the advanced Sugama collision model <cit.> in CGYRO and GENE, and the linearised Fokker-Planck collision model of <cit.> in GS2. The paper is organised as follows. In sec:hybrid KBM instability, we present the results of nonlinear local (flux-tube) gyrokinetic simulations including δ B_∥ carried out with CGYRO, GENE and GS2, and we discuss the sensitivity to the numerical parameters. The effect of equilibrium flow shear is investigated in sec: equilibrium flow shear. In sec: sensitivity to local parameters, we evaluate the sensitivity of the heat flux to several key local equilibrium parameters. The results of nonlinear simulations where δ B_∥ is artificially suppressed and the turbulence is driven solely by MTMs are shown in sec:subdominant MTM instability. Finally, a summary of our findings is presented in sec:conclusions. § NONLINEAR SIMULATIONS INCLUDING TEXT AND CROSS-CODE COMPARISON We analyse here the results of nonlinear gyrokinetic simulations of the main hybrid KBM instability. As reported in <cit.>, this instability is dominant at binormal scales approaching the ion Larmor scale across a range of different flux surfaces between the core and the pedestal top. These simulations are fully electromagnetic and include the perturbed electrostatic potential δϕ, perturbed parallel magnetic vector potential δ A_∥, and perturbed parallel magnetic field δ B_∥. 
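As a quick aid for interpreting the binormal wavenumbers quoted below, the conversion k_yρ_s = nρ_* dρ/dΨ_n defined above can be evaluated with a few lines of Python; note that the numerical inputs here (T_e, B_0, a, dρ/dΨ_n) are illustrative placeholders and not the actual STEP-EC-HD surface parameters.

```python
import numpy as np

e, m_D = 1.602176634e-19, 3.3436e-27   # elementary charge [C], deuterium mass [kg]
T_e = 10e3 * e                          # electron temperature, assumed 10 keV, in joules
B_0 = 3.2                               # vacuum toroidal field at surface centre [T] (assumed)
a = 2.0                                 # minor radius [m] (assumed)
drho_dpsin = 1.0                        # d(rho)/d(Psi_n) at the surface (assumed)

c_s = np.sqrt(T_e / m_D)                # deuterium sound speed
Omega_D = e * B_0 / m_D                 # deuterium cyclotron frequency
rho_s = c_s / Omega_D                   # deuterium Larmor radius
rho_star = rho_s / a

def ky_rho_s(n):
    # k_y rho_s = n * rho_* * d(rho)/d(Psi_n) for toroidal mode number n
    return n * rho_star * drho_dpsin

print(rho_s, [round(ky_rho_s(n), 4) for n in (2, 10, 50)])
```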
Table <ref> lists the numerical resolutions used in these simulations. It is important to note that each code uses a different discretisation of the 5D space and most notably different coordinate systems in velocity space [In gyrokinetics velocity space is 2D, involving the velocity component parallel to the magnetic field v_∥ and the magnitude of the perpendicular velocity component v_⊥, with the total velocity v^2=v_∥^2 +v_⊥^2.]: CGYRO uses (χ, v), where χ=v_∥/v is the cosine of the pitch angle; GENE uses (μ, v_∥), where μ=v_⊥^2/(2B) is magnetic moment normalised by the species mass; and GS2 uses (λ, ϵ, σ)[The GS2 trapped velocity and parallel real space grids are tightly coupled, with the number of trapped pitch angles n_trap = n_θ/2 + 1 where n_θ is the number of parallel grid points. The total number of pitch angle grid points n_λ = n_trap + n_pass, where n_pass is the (unconstrained) number of passing pitch angles.], where λ = v_⊥^2/(v^2B), energy ϵ=m_sv^2/(2T_s) and σ=v_∥/|v_∥|. A total of 64 toroidal modes are evolved to span the full range, 0.01< k_y ρ_s< 0.63, where modes are unstable (see Figure <ref>). At the minimum finite k_y ρ_s (0.01) the toroidal mode number is n=2. The radial box size, L_x = j/(ŝ k_y,min), is set, using integer j, to 166 ρ_s. The velocity grids are set up to encompass a region up to v_max = 3v_th,s, where the thermal velocity of species s v_th,s = √(2T_s/m_s). Given the lack of an external source of momentum in STEP, we neglect equilibrium rotation though 𝐄×𝐁 shearing is considered in sec: equilibrium flow shear. The radial resolution in these simulations is set to be sufficient to resolve the hybrid KBM instability[Nonlinear simulations using CGYRO or GENE with the resolution in table <ref> typically require approximately 3× 10^5 CPU-hours on the ARCHER2 high performance computing system.], but is too coarse to resolve the subdominant MTM instability: MTMs require higher radial resolution due to their intrinsically multiscale nature as they include structures at ion Larmor scale in k_yρ_s and electron Larmor scale in k_xρ_s. Here we focus on the dominant hybrid-KBM instability, while the subdominant MTM instability is specifically addressed using higher radial resolution grids in sec:subdominant MTM instability. Attempting to resolve both modes simultaneously drastically increases the computational cost and appears to make no qualitative or quantitative change to the simulation results on the timescales considered here. Figure <ref> (a) and (b) shows time traces of the total heat and particle fluxes from nonlinear simulations using CGYRO, GENE and GS2. In all simulations the fluxes rise to very large values with no robustly steady saturation periods over the times simulated. The transport contributions from the electrostatic and electromagnetic channels are both substantial, as shown in Figure <ref> (c) and (d). Arguably turbulence pseudo-saturates transiently at total heat flux values of approximately Q_tot≃ 200 MW/m^2 between t≃ 150 a/c_s and t≃ 200 a/c_s. We note that this would correspond to a total power crossing the q=3.5 surface P_tot>100 GW, which is orders of magnitude larger than the available heating power P_surf≃ 500 MW assumed for this STEP flat top operating point <cit.>. After this very short pseudo-saturated phase, the heat flux rapidly grows to Q_tot values that exceed 1000 MW/m^2. 
There is a good qualitative agreement between the three codes, both in the very short pseudo-saturation phase and in the following evolution to yet larger fluxes. The time and θ averaged δϕ(k_x, k_y), δ A_∥(k_x, k_y) and δ B_∥(k_x, k_y) spectra are compared in Figure <ref>, where the time average is performed between t= 150 a/c_s and t= 200 a/c_s. There is reasonable qualitative agreement between spectra from different codes, albeit with differences especially for the high k_y modes where CGYRO and GS2 find higher amplitudes than GENE, and for high k_x where the amplitude in δϕ is lower in GS2 than in CGYRO and GENE. Modes at high k_y and k_x, however, do not contribute significantly to the turbulent heat fluxes, which agree reasonably well in this timeframe in Figure <ref>; turbulent transport is mainly driven by low k_x modes, where the agreement between the three codes is relatively good. The zonal δϕ and δ A_∥ components are larger in CGYRO and GENE than in GS2. We note that δ B_∥/B_0 is consistently smaller than e δϕ/T_e and δ A_∥/(ρ_s B_0) in all three codes, and δ B_∥ (like δ A_∥) spectra are relatively narrow around k_x=0. We highlight that the amplitudes of the δ A_∥ modes at low k_y values are relatively large in all three codes. The time evolution of the k_y spectrum of the heat flux is shown in Figure <ref> from CGYRO and GENE simulations. These indicate that the large fluxes at late times are mostly carried by low k_y modes, in a state where Figure <ref> shows that the amplitudes of non-zonal modes exceed the amplitudes of zonal modes. Snapshots of δϕ and δ A_∥ in real space are shown in Figure <ref> from late high heat flux phases of the CGYRO and GENE simulations at t=400 a/c_s and t=600 a/c_s respectively. We note that both δϕ and δ A_∥ extend radially through the entire flux-tube domain, and turbulence of such a character questions the applicability of the local approach. Convergence of these results is investigated by performing a set of CGYRO and GENE nonlinear simulations with a range of numerical resolutions given in table <ref>. Figure <ref> shows that the time traces of the total heat flux from these simulations agree very well, suggesting these local simulations are well resolved. We note in particular that increasing the size of the radial domain does not avoid large heat fluxes. In brief summary, local gyrokinetic simulations neglecting equilibrium flow shear for the nominal parameters of a core flux surface of STEP-EC-HD evolve towards large turbulent heat fluxes that are orders of magnitude larger than the available heating power. The predictions of local GK, as it stands (notably, without 𝐄×𝐁 shear), are clearly not compatible with the confinement assumptions made in the design of this STEP flat top operating point. The reproducibility of this result across three established local gyrokinetic codes lends support to the conclusion that these large heat fluxes (often reported in simulations of electromagnetic turbulence) are a robust prediction of local GK and are not a numerical artefact. 
Importantly we cannot rule out the possibility that this prediction is simply due to the failure of the local GK approximation at small mode number: this possibility is supported by the fact that in all three codes the turbulent state becomes dominated by large amplitude low k_y modes that are very highly extended radially. Testing this potential failure of local GK will be addressed in the future using a global GK code, but this is beyond the scope of this paper. In the following section, we will explore the effect of equilibrium flow shear. § EFFECT OF THE EQUILIBRIUM FLOW SHEAR Sensitivity of the linear growth rate to the ballooning parameter θ_0 = k_x/(ŝk_y) is a reliable indicator of a mode's susceptibility to equilibrium flow shear stabilisation. The linear analysis in <cit.> shows that the growth rate of the hybrid-KBM instability is highly sensitive to θ_0 and that the mode is stable for all θ_0 above a very small value. This indicates that hybrid-KBM turbulence should be very sensitive to flow shear, which has so far been neglected in our nonlinear simulations. With no external momentum source from neutral beam injection in STEP equilibria, the flow shearing rate γ_E is expected at the diamagnetic level (see e.g. <cit.>): γ_E ≃γ_dia =1/B(∂Ψ/∂ρ)^2[1/p_i n_i e(1+η_i)(∂ p_i/∂Ψ)^2-1/n_i e∂^2p_i/∂Ψ^2] where η_i = L_n/L_T_i is the ratio of the density to the ion temperature gradient scale lengths, and p_i is the ion pressure. The value of γ_E at Ψ_n = 0.49 in STEP-EC-HD is approximately γ_E ≃ 0.06 ± 0.02 c_s/a, where the uncertainty is computed by taking the standard deviation over a radial interval Ψ_n ∈ [0.48, 0.52] around the chosen surface. We start by comparing nonlinear GENE and CGYRO simulations including equilibrium flow shear with γ_E=0.1 c_s/a, using the flow shear method recommended for each code. CGYRO favours a spectral approach to impose a radially periodic equilibrium flow <cit.>, whereas GENE implements a radially constant 𝐄×𝐁 shearing rate by shifting the k_x grid in time <cit.>. GENE uses the wavenumber remapping method and includes continuous shearing in the nonlinear term detailed in <cit.> that had been neglected in earlier implementations of this approach. The diamagnetic shearing rate in Equation <ref> is determined by the ion pressure profile and its gradients. Accordingly, a twofold effect is expected as pressure increases through the plasma discharge: on the one hand increasing dp_i/dr will act to increase instability drives; and on the other rising diamagnetic flow shear and local shear at the outboard midplane will increasingly act to suppress turbulence. Here we assess the impact of flow shear on fully developed turbulence at our chosen surface of the STEP-EC-HD flat-top, using CGYRO and GENE nonlinear simulations where the external flow shear is turned on only after the turbulence and fluxes have reached large amplitudes. Numerical convergence, which will be discussed further in subsec:challenges_with_flow_shear, requires a large radial box width corresponding to Δ k_xρ_s = 0.01. The time traces of the total heat flux from these simulations are shown in Figure <ref>, where external flow shear, included from t=300 a/c_s, triggers a sharp reduction in the heat flux in both codes. The heat flux decays more rapidly in GENE than in CGYRO, but at late times the saturated heat flux is comparable in both simulations. 
This is illustrated in Figure <ref>(b), where the error bars correspond to the standard deviation of the flux computed over the saturated phase of each simulation (after t=1400 a/c_s and t=500 a/c_s in CGYRO and GENE, respectively). The total saturated heat flux is approximately Q_tot≃ 2 MW/m^2, which corresponds to a power crossing the surface of P_tot≃ 600 MW. This is close to the total heat source of 500 MW assumed in STEP-EC-HD. Whilst these results are promising, it is important to highlight that the quantitative heat flux prediction comes with a large uncertainty that will be discussed further in subsec:challenges_with_flow_shear. Nevertheless, the results in Figures <ref> and <ref> indicate that equilibrium flow shear suppression of turbulent transport will be very important in these plasmas. Figure <ref> illustrates results from a scan to assess the sensitivity of the heat and particle fluxes to γ_E. In each simulation, the turbulence is allowed to develop large fluxes before the flow shear is switched on at t=300 a/c_s so that the initial phase of these simulations is identical to that in Figure <ref>. At γ_E = 0.05 c_s/a, both CGYRO and GENE predict a total heat flux larger than Q_tot≃ 5 MW/m^2 (corresponding to P_tot > 1.5 GW), although the GENE simulation shows much stronger flow-shear suppression of the fluxes. Both the heat and particle fluxes decrease as γ_E is increased further. We note that both codes agree very well qualitatively on the flow-shear suppression of turbulent fluxes, and the flux predictions are within a factor of four of each other at γ_E=0.1 c_s/a and γ_E=0.2 c_s/a. At γ_E≳ 0.1 c_s/a the particle flux drops below Γ_tot≃ 10^20 m^-2s^-1, corresponding to a total particle flux of approximately 10^22 s^-1 crossing the chosen surface, which is comparable to the expected fuelling rate [A pellet fuelling rate of 7.4 × 10^21 particles/s is used in the modelling of the STEP-EC-HD flat-top, based on an assumption that the particle confinement time τ_p ∼ 4.5 τ_E (as it is typical in JET). This results in τ_p ∼ 16 s. Assuming the same confinement time for helium ash gives a saturated helium abundance of about 9% and higher particle confinement would severely degrade fusion performance due to the core accumulation of helium ash.]. The electrostatic and electromagnetic contributions to electron heat flux, ion heat flux and particle flux are reported in Table <ref>. The electromagnetic electron heat flux gives the largest contribution to the total heat flux at all the γ_E values considered here. The electromagnetic particle flux is significantly larger than the electrostatic one in GENE simulations with γ_E≥ 0.1 c_s/a, while the two contributions are comparable in CGYRO simulations as well as in GENE simulation with γ_E=0.05 c_s/a. The values of the effective total heat and particle diffusivities, χ_tot=Q_tot/(∑_s n_s∂ T_s/∂ r) and D_tot = Γ_tot/(∂ n/∂ r), are also reported in table <ref> for each nonlinear simulation. §.§ Challenges simulating flow shear in STEP-relevant plasmas Although we have clear evidence to suggest that flow shear has a strongly suppressing impact on the turbulence, it is important to highlight the caveats and challenges associated with these simulations. §.§.§ Numerical implementation of flow shear The cross-code comparison with GENE and CGYRO has revealed some large differences in the modelled impact of flow shear on fluxes especially at low γ_E, e.g., the simulations with γ_E = 0.05 c_s/a in Figure <ref>. 
This is not surprising and is likely at least partly due to the different numerical implementations of flow shear in CGYRO and in GENE (we have tested this using CGYRO with an optional implementation of flow shear based on wavenumber remapping <cit.> that neglects the nonlinear shearing term <cit.>: nonlinear simulations for STEP using this flow shear implementation in CGYRO are found in <ref> to give very similar results to comparable GENE simulations that also exclude the nonlinear shearing term). This work highlights the value and importance of multi-code comparisons to point out research priorities, especially when accurate flux predictions are required in new plasma regimes where no experiments are available for validation. §.§.§ Possible failure of the local approximation The dependence of γ_ KBM on θ_0 is very sharp, with a narrow instability window in θ_0 (see Figure 15 of <cit.>). This poses resolution challenges for nonlinear simulations because it requires large radial box sizes in order to resolve sufficiently finely in k_x and θ_0. For the equilibrium considered here, at the k_y of the peak heat flux, k_yρ_s = 0.04, we found that Δ k_xρ_s ≃ 0.01 corresponds to a θ_0 resolution Δθ_0 ≃ 0.06 π. This resolution was both: (i) sufficient to resolve the strong θ_0 dependence of the linear growth rate at k_y ρ_s=0.4; and (ii) necessary to achieve convergence in both GENE and CGYRO. However, running local simulations in a such a large radial box pushes the validity of the local approximation. In particular, the value of Δ k_xρ_s=0.01 corresponds to a radial flux tube domain size of L_x≃ 600 ρ_s≃ 3 m, which is larger than the minor radius of the device[Nevertheless, we note previous work where local gyrokinetic simulations in a radially extended domain larger than the minor radius were used to model ITG turbulence in MAST, finding turbulence properties that were remarkably close to fluctuation measurements made using Beam Emission Spectroscopy <cit.>.]. §.§ Summary of flow shear findings Local GK simulations with CGYRO and GENE indicate that equilibrium flow shear should have a strongly suppressing impact on turbulent transport at this surface in STEP-EC-HD (see Figures <ref> and <ref>), and with flow shear close to the diamagnetic level the resulting heat flux is in the same ballpark as the anticipated heat source. While there are quantitative disagreements between codes (most notably at γ_E=0.05  c_s/a), this may at least in part be attributed to different numerical implementations of flow shear. Our local simulations with flow shear have required using radially extended domains where the local approximation may be more questionable, and it would be interesting to explore in the future whether local GK using a continuous-in-time approach to flow shear, like that outlined in <cit.>, improves numerical convergence especially in narrower radial domains (i.e. at higher values of Δ k_x). If the turbulent eddies are excessively extended radially, the local approximation may start to break down as global equilibrium profile variation effects become important. Our results motivate future research to perform global simulations, that must include δ B_∥ to capture the hybrid-KBM, to seek higher fidelity predictions of transport fluxes in STEP-like plasmas. A key future priority will also be to validate these types of calculations in detail against data from experiments, especially from higher β plasmas getting closer to STEP-like regimes. 
The available validation data set will improve considerably following the planned enhancements for MAST-U to upgrade the Neutral Beam Injection heating system and install an Electron Bernstein Wave heating and current drive capability. Transport fluxes in STEP-EC-HD are clearly sensitive to flow shear, and in the next section we consider the sensitivity to other local equilibrium parameters. § SENSITIVITY TO LOCAL PARAMETERS The previous section suggests that the hybrid KBM instability can generate substantial transport even in the presence of 𝐄×𝐁 shear flows. This motivates the investigation of the dependence of the heat and particle flux on various local parameters, such as pressure gradient, β_e, q and ŝ. The analysis presented here is carried out by using CGYRO where a single local equilibrium parameter is varied. Throughout this section we have neglected sheared equilibrium flows, which would complicate the dependencies and may reduce fluxes if they were included. §.§ Sensitivity to linear drive terms at constant β^' (inconsistent) We first consider a set of simulations carried out at different values of pressure gradient (we scale both the electron and ion temperature gradients whilst keeping the density gradients fixed) a/L_p ∈{2.73, 2.33, 1.93, 1.53}. All other local equilibrium parameters were held fixed including β^' (which is inconsistent but preserves the local equilibrium magnetic field geometry). Since the pressure gradient is the main drive of the hybrid KBM instability, the saturated total heat flux is expected to decrease as pressure gradient is reduced, and this is confirmed in Figure <ref>(a). Q_ tot has a very stiff dependence on a/L_p and decreases by more than two orders of magnitude when a/L_p is reduced by less than 20% with respect to its nominal value. The heat flux then decreases slightly from a/L_p=2.3 to a/L_p=1.9 and becomes negligible at a/L_p=1.53. §.§ Sensitivity to linear drive terms varying β^' (consistent) Here we repeat the above scan in pressure gradient, but setting β' to be consistent with a/L_p. This ensures the local equilibrium magnetic geometry is modified consistently with the local pressure gradient through the scan. Figure <ref>(b) shows the saturated heat flux value from these simulations. The heat flux dependence on a/L_p is much weaker in this case, particularly near the nominal value. The heat flux decreases from a/L_p=2.3 to a/L_p=1.5, but it remains very large (Q_tot>200 MW/m^2) and substantially higher than the maximum steady state flux available from the expected heat source in STEP-EC-HD. The dramatic difference between Figures <ref>(a) and (b) is due to the fact that in (b) the reduced linear drive at lower pressure gradient is largely off-set by a loss of β' stabilisation (negative local magnetic shear and magnetic drifts are more favourable for stability at higher β' <cit.>, which is also associated with the onset of internal transport barriers <cit.>). As shown in <cit.>, high β' has a strong linearly stabilising influence on the hybrid KBM on this surface in STEP-EC-HD. A transport steady state consistent with the available heating source in STEP-EC-HD cannot easily be achieved on this surface simply by reducing the reference pressure gradient. We also note that our GK analysis has neglected α particles that should impact on turbulence in the flat-top burning plasma equilibrium (α particles will interact directly with the turbulence and enhance the plasma pressure). 
Future work will explore the influence of fast α particles. This work highlights the need for further development of the STEP-EC-HD plasma scenario and for systematic studies over the available parameter space, though these lie beyond the scope of the current study. §.§ Sensitivity to β_e varying β^' (consistent) We focus now on the effect of β by performing nonlinear CGYRO simulations[Numerical resolutions were chosen to properly resolve the linear spectrum of the dominant instability (see <cit.>). In particular, the simulations with β_e=0.02 and β_e=0.005 are carried out with n_k_y=128 and n_r=64, while the simulation with β_e=0.16 is carried out with n_k_y=64 and n_r=256.] at several values of β_e, varying β' consistently. Figure <ref> shows the saturated total heat and particle fluxes from nonlinear simulations at different values of β_e. At β_e < 0.025, the particle and heat transport is mainly electrostatic and is driven by an ITG/TEM instability (the KBM instability couples to an ITG/TEM instability at low β as noted in <cit.>). We note that the electrostatic heat and particle fluxes remain above 100 MW/m^2 and 10^22 particles/(m^2s) even at β_e = β_e,ref/4 (presumably due to weakened β' stabilisation). This scan suggests turbulent transport could challenge access to the burning flat-top through the lower β phase during the I_p ramp-up, although considerably more detailed self-consistent scenario modelling is clearly required before firmer conclusions can be drawn. At the higher value β_e=0.16, electromagnetic contributions dominate transport and the total heat flux drops considerably to Q_tot≃ 2 MW/m^2, corresponding to a power crossing the surface of approximately P_tot≃ 600 MW. This is a consequence of increased β' stabilisation of the hybrid KBM, leaving the MTM (with linear growth rates hardly affected by β^' <cit.>) surviving as the dominant instability. This β_e=0.16 operating point would have considerably more favourable transport than the reference local equilibrium at β_e=0.09, if it could be accessed through a route that avoided prohibitive transport losses from hybrid-KBMs. However, a global equilibrium with this local β_e would likely exceed the limiting β for effective control of Resistive Wall Modes, as noted in <cit.>. Figure <ref> is an ŝ-α diagram (where α = -Rq^2β^') showing the ideal n=∞ ballooning boundary and how the local equilibrium point moves in this space over the scan. The equilibrium points at β_e=0.02 and β_e=0.16 are further from the stability boundary than the reference equilibrium at β_e=0.09, suggesting a weakened role for KBMs that would be consistent with reduced heat and particle fluxes. §.§ Sensitivity to q and ŝ Finally, we analyse the effect of local safety factor and magnetic shear. Figure <ref> shows the saturated heat flux from a set of CGYRO nonlinear simulations with different values of q and ŝ. Consistent with the linear predictions of <cit.>, we find that the heat flux decreases as q increases or ŝ decreases. In particular, we note that the heat flux drops by almost two orders of magnitude when q is increased from q=3.5 (nominal value) to q=4.0, while the maximum growth rate decreases by approximately a factor of two. On the other hand, the heat flux decrease is less stiff (but still noticeable) when the magnetic shear is decreased. 
We can gain some important intuition for both of these results by looking at the ideal ballooning stability in the ŝ-α plane (see Figure <ref>): lowering ŝ moves the equilibrium further into the stable ŝ-α region, while raising q moves the unstable region to higher shear values and increases significantly the distance between the equilibrium point and the ideal ballooning stability boundary (consistent with lower fluxes at q=4.0). Vice versa, lowering q moves the unstable region to smaller shear values, which is consistent with larger fluxes at q=3.0. In summary, lowering ŝ or raising q increases the distance from the operating point to the ideal ballooning stability boundary. These results therefore motivate a simple strategy for future exploitation to design equilibria with lower turbulent transport: varying the key equilibrium actuators in the problem (e.g. shaping and profiles) and seek to maximise distance from the ideal ballooning boundary to reduce the impact of the hybrid KBM on transport. § THE ROLE OF THE SUBDOMINANT MTM INSTABILITY Results in sec: equilibrium flow shear show that the heat flux driven by the hybrid KBM instability can be significantly reduced by equilibrium flow shear. The subdominant MTM instability, linearly characterised in <cit.>, however is resilient to flow shear, and is expected to become important when the hybrid-KBM is suppressed. As shown in <cit.>, the hybrid KBM instability can be artificially suppressed by simply removing δ B_∥ from the GK system of equations (so only δϕ and δ A_∥ are evolved), leaving the MTM as the surviving dominant instability in the reduced system. This artificial suppression of the hybrid mode provides a way to study MTM-only-driven turbulence. We have adopted this approach to perform a dedicated nonlinear study of MTM turbulence on the reference surface in STEP-EC-HD. While it is unphysical to neglect δ B_∥, this study is nevertheless of considerable interest because (i) MTMs have been demonstrated to be unstable and even dominant over an extended range of binormal scales in various conceptual designs of fusion power plants based on the ST <cit.>, (ii) MTMs may be important where the hybrid-KBMs is suppressed (e.g. the high β, β^' local equilibrium in Figure <ref> or on surfaces analysed in <cit.>), or in other radial surfaces in STEP-EC-HD (e.g. at the pedestal) and (iii) MTMs have been demonstrated to have significant impacts on transport in other devices  <cit.> and are consistent with pedestal magnetic fluctuation measurements in several experiments <cit.>. The computational cost of modelling low k_y MTMs with conventional gyrokinetic codes is challenging due to the multiscale nature of the mode: δϕ is highly extended in the field-line-following coordinate θ, which accommodates high radial wavenumbers, while δ A_∥ is localised in θ and radially extended. Computational cost and difficulties in achieving saturation have limited the number of studies of saturated MTM driven turbulence in the literature <cit.>. Here we gauge the potential transport relevance of MTMs by performing nonlinear simulations with CGYRO and GENE artificially neglecting δ B_∥, using the numerical resolutions given in Table <ref>. Figure <ref> (a) shows the time traces of the total heat flux and demonstrates good agreement between codes that the MTM-driven heat flux saturates at a negligible level, Q_tot < 0.005 MW/m^2. 
The heat flux is entirely electromagnetic and almost entirely in the electron channel, in agreement with previous gyrokinetic simulations of MTMs <cit.>. Furthermore, MTMs drive effectively vanishing particle transport, as expected for any electron-driven instability that is insensitive to inclusion of the non-adiabatic ion response <cit.>. Zonal fields <cit.> and local temperature flattening <cit.> are found to play a role in saturating this MTM instability at negligible heat flux values (see <ref>). Figure <ref> (b) shows time traces of the ratio of the non-zonal to the zonal δ A_∥ mode amplitudes, demonstrating that zonal fields dominate in both codes and suggesting that they are likely to be important for saturation. [Note that including an appropriate parallel dissipation scheme was found to be essential to avoid runaway nonlinear MTM fluxes <cit.>, due to the onset of dominant unphysical numerical instabilities with grid-scale oscillations in the parallel direction. This prompted an improvement to the parallel dissipation scheme in CGYRO, implemented at commit , that has been used in all of the simulations in this paper.] Figure <ref> shows the δϕ and δ A_∥ spectra from CGYRO and GENE simulations, again demonstrating that both δϕ and δ A_∥ non-zonal mode amplitudes are significantly smaller than the corresponding zonal amplitudes. The spectra from CGYRO and GENE are qualitatively and quantitatively similar. In Figure <ref>, we show a snapshot of δϕ and δ A_∥ in real space at the intersection of the flux-tube with the outboard midplane, from the saturated states of CGYRO and GENE simulations. In both simulations, δϕ is characterised by radially narrow structures, whilst δ A_∥ exhibits streamers that are more extended in the radial direction. These are typical structures of MTM turbulence <cit.>. MTMs, previously identified from linear studies as potential drivers of significant electron heat transport in conceptual ST power plants <cit.>, appear to make an insignificant contribution to transport on this particular mid-radius surface in STEP-EC-HD. We cannot exclude the possibilities (i) that MTM transport may dominate that from hybrid-KBMs under different local equilibrium conditions, and (ii) that multi-scale interactions between MTMs and hybrid KBMs may be significant; these are important avenues to be explored. § CONCLUSIONS This paper presents the first nonlinear gyrokinetic simulations of turbulence and transport in the core of a conceptual STEP plasma reference flat-top operating point. Local nonlinear simulations were performed at the q=3.5 surface (where ψ_n=0.49) of the flat-top operating point STEP-EC-HD using three different well-established local gyrokinetic codes. All codes robustly predict that the hybrid KBM <cit.>, which requires inclusion of δ B_∥ for instability, drives very high heat fluxes in all channels; in the absence of equilibrium flow shear the total heat flux is orders of magnitude larger than the available heating power. Impressive cross-code agreement strongly supports that the large transport fluxes arise as a robust prediction of the local nonlinear gyrokinetic model, and are not due to a numerical artefact. Simulations where the fluxes grow to large values are generally dominated by radially highly extended turbulent eddies at low k_y that may challenge the validity of the local equilibrium approximation. 
Higher fidelity modelling of such turbulence may require improvements to the local gyrokinetic model to include global and/or other higher order effects. These aspects should be explored in future research. Turbulence from hybrid-KBMs is found to be extremely sensitive to equilibrium flow shear, and γ_E is critical in determining the saturation level. When γ_E is set at the same order as the diamagnetic flow shear, the total turbulent heat flux from hybrid-KBMs falls by orders of magnitude and becomes comparable with the total heating power assumed for the STEP-EC-HD flat-top operating point. This extreme sensitivity to flow shear introduces large uncertainty in the expected level of the turbulent transport. Nonlinear scans were performed neglecting flow shear, to explore the sensitivity of hybrid-KBM transport fluxes to local equilibrium parameters, including: temperature gradient; β_e; q; and ŝ. Decreasing the temperature gradient via reductions in a/L_T_e and a/L_T_i with all other parameters constant including β^' (i.e. pressure gradient is not varied consistently) reduces the heat flux driven by the hybrid KBM considerably. The reduction in transport at lower temperature gradient is much weaker in a similar scan where temperature gradient and β^' are varied consistently, because the reduced linear drive is off-set by a reduction in β^' stabilisation. In a scan where both β and β^' are varied consistently (at constant a/L_n, a/L_T_i, and a/L_T_e), at large β_e (and more importantly large β') hybrid-KBM turbulence is suppressed, leaving a much lower level of turbulent transport driven by the subdominant MTM that is unaffected by β^' stabilisation. The residual saturated MTM turbulence drives a modest level of predominantly electron heat transport carried by magnetic flutter. In addition, microtearing turbulence from the subdominant MTM has been modelled in isolation by artificially (and unphysically) suppressing hybrid-KBMs through the neglect of δ B_∥ fluctuations. The MTM-driven heat flux was found to saturate at negligible transport levels on the chosen surface in STEP-EC-HD. Nevertheless, MTM turbulence could be more relevant for transport under different local equilibrium conditions, e.g. when the hybrid-KBM is suppressed. In conclusion, local gyrokinetic simulations suggest that if flow shear and β' stabilisation are sufficient, a transport steady state may exist for a local equilibrium in the vicinity of the q=3.5 surface of the STEP-EC-HD flat-top operating point. We have not, however, sought or found a viable route to access such a burning flat-top on a path that avoids unacceptably large fluxes, e.g. due to hybrid-KBM turbulence at lower β'. The existence of high β local equilibrium regimes with promising transport properties provides a very strong incentive for more detailed future studies over STEP-relevant parameter space, focussing on two main fronts. Firstly, motivated by our results in sec:hybrid KBM instability and sec: equilibrium flow shear, we intend to explore higher fidelity GK simulations to assess the validity of the local GK approximation at the onset of large transport fluxes from the hybrid-KBM. In parallel, local gyrokinetics should be used to better understand the threshold through more extensive detailed investigations of regimes where hybrid-KBMs saturate at modest transport fluxes. 
While the simple scans of sec: sensitivity to local parameters provide useful insights, more thorough sensitivity studies are required that take into account the broader accessible operation space. This work will make essential contributions towards the development of higher fidelity reduced transport models for electromagnetic turbulence that are urgently needed for integrated modelling. § ACKNOWLEDGEMENTS The authors would like to thank E. Belli and J. Candy for their very helpful support with CGYRO simulations as well as T. Görler and D. Told for their help with GENE. The authors would also like to thank B. Chapman, D. Hatch, P. Ivanov, and M. Hardman for helpful discussions and suggestions at various stages of this project. D. Kennedy is grateful to The Institute for Fusion Studies (IFS), Austin TX, for its splendid hospitality during a stimulating and productive visit at the beginning of this work. This work has been funded by the Engineering and Physical Sciences Research Council (grant numbers EP/R034737/1 and EP/W006839/1). Simulations have been performed on the ARCHER2 UK National Supercomputing Service under the project e607 and on the Marconi National Supercomputing Consortium CINECA (Italy) under the projects STETGMTM and QLTURB. Part of this work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (<www.csd3.cam.ac.uk>), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/T022159/1), and DiRAC funding from the Science and Technology Facilities Council (<www.dirac.ac.uk>). § WAVENUMBER REMAPPING IMPLEMENTATION OF FLOW SHEAR The default and recommended treatment of equilibrium flow shear in CGYRO is to use the spectral method, which we have used in the main text. We note, however, that a wavenumber remapping method is also implemented in CGYRO, which is similar to the method used in GENE but excludes the shearing contribution to the nonlinear term <cit.> (included in GENE). We briefly discuss here the impact of the shearing contribution to the nonlinear term on the saturated heat and particle flux values in STEP simulations. This comparison is shown in Figure <ref> at three different values of flow shear. When the nonlinear correction is considered in GENE (the default), the heat flux predicted by GENE is lower than the CGYRO predictions (with the wavenumber implementation of the flow shear) at all the γ_E values considered here. On the other hand, a very good agreement between CGYRO (with the wavenumber implementation of the flow shear) and GENE fluxes is observed in Figure <ref> when the nonlinear shearing term is manually removed from the GENE implementation of the flow shear. We note that, in this last case, the wavenumber remapping implementation of the flow shear is identical in CGYRO and GENE. In this STEP flat-top operating point, the wavenumber remapping implementation overestimates particle and heat fluxes if the nonlinear correction of <cit.> is removed. § SATURATION MECHANISMS OF THE MTM INSTABILITY Previous theories and simulations of microtearing turbulence have reported various saturation processes through energy transfer to long and short wavelengths <cit.>, background shear flow <cit.>, zonal fields <cit.>, electron temperature flattening <cit.>, and cross-scale interaction <cit.>. 
In the STEP MTM simulations considered here, we find that electron temperature flattening and zonal fields play an important role in the saturation mechanism. As the electrons move swiftly along the perturbed magnetic field lines associated with magnetic islands at the rational surfaces, they undergo a periodic radial excursion, leading to a flattening of the electron temperature. Given that the electron temperature gradient provides the drive for the MTM instability, this flattening can locally stabilise the mode. We test whether this occurs in our simulations by removing the zonal electron temperature perturbations that cause the local temperature flattening, i.e. by redefining the zonal component of the electron distribution function as ⟨δ f_e^mod⟩_y, θ = ⟨δ f_e⟩_y, θ - K(r)[m_e v^2/(2T_e)-1.5]⟨ F_e0⟩_y, θ, where ⟨·⟩_y, θ denotes the flux-surface average, F_e0 is the electron background Maxwellian distribution function and K(r) is a function of the radial coordinate only, which is set at each time step such that ⟨δ T_e⟩_y,θ=0 <cit.>. The resulting simulation, shown in Figure <ref>, returns a much larger heat flux than the reference case. Zonal fields can also provide a strong saturation mechanism for MTM turbulence by reducing the amplitude of non-zonal δ A_∥ modes and the subsequent magnetic stochasticity. We also perform a nonlinear test simulation where the zonal fields are removed. As shown in Figure <ref>, this test simulation also returns a much larger heat flux than the nominal simulation. Interestingly, the time traces of the heat flux from the two test simulations agree very well, suggesting a competition (or a link) between zonal fields and local temperature flattening as saturation mechanisms. An additional test simulation, also shown in Fig. <ref>, indicates that zonal flows are not relevant for MTM turbulence saturation, at least in the case considered here.
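The zonal temperature-flattening test above amounts to projecting the energy moment out of the zonal distribution function at every time step. The following is a minimal sketch of that projection on a uniform, isotropic speed grid; it is not the implementation used in the gyrokinetic codes, which work on (v_∥, μ) grids with flux-surface averages, and all inputs are placeholders.

```python
import numpy as np

def remove_zonal_Te(delta_f, v, F_M, m_e, T_e):
    """Remove the zonal electron-temperature perturbation from a zonal slice of
    the distribution function, following the projection quoted above:
        delta_f_mod = delta_f - K * [m_e v^2/(2 T_e) - 1.5] * F_M,
    with K chosen so that the delta T_e moment of delta_f_mod vanishes.
    Minimal sketch on a uniform, isotropic speed grid."""
    w = m_e * v**2 / (2.0 * T_e) - 1.5      # energy-moment weight
    d3v = 4.0 * np.pi * v**2                # isotropic velocity-space measure
    dv = v[1] - v[0]                        # assumes a uniform speed grid
    num = np.sum(w * delta_f * d3v) * dv    # ~ delta T_e moment of delta_f
    den = np.sum(w * w * F_M * d3v) * dv    # normalisation (= 3n/2 for a Maxwellian)
    K = num / den
    return delta_f - K * w * F_M
```

Because the weight has zero Maxwellian average, this projection leaves the zonal density unchanged while forcing the zonal δT_e to zero.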
http://arxiv.org/abs/2307.00311v1
20230701115454
Gas phase Elemental abundances in Molecular cloudS (GEMS) VIII. Unlocking the CS chemistry: the CH + S$\rightarrow$ CS + H and C$_2$ + S$\rightarrow$ CS + C reactions
[ "Carlos M. R. Rocha", "Octavio Roncero", "Niyazi Bulut", "Piotr Zuchowski", "David Navarro-Almaida", "Asuncion Fuente", "Valentine Wakelam", "Jean-Christophe Loison", "Evelyne Roueff", "Javier R. Goicoechea", "Gisela Esplugues", "Leire Beitia-Antero", "Paola Caselli", "Valerio Lattanzi", "Jaime Pineda", "Romane Le Gal", "Marina Rodriguez-Baras", "Pablo Riviere-Marichalar" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.SR" ]
aa VIII. Unlocking the CS chemistry: the CH + S→ CS + H and C_2 + S→ CS + C reactions Rocha et al CS formation Laboratory for Astrophysics, Leiden Observatory, Leiden University, PO Box 9513, 2300-RA, Leiden, The Netherlands Instituto de Física Fundamental (IFF-CSIC), C.S.I.C., Serrano 123, 28006 Madrid, Spain University of Firat, Department of Physics, 23169 Elazig, Turkey Institute of Physics, Faculty of Physics, Astronomy and Informatics, Nicolaus Copernicus University in Torun, Grudziadzka 5, 87-100 Torun, Poland Université Paris-Saclay, CEA, AIM, Dèpartement d'Astrophysique (DAp), F-91191 Gif-sur-Yvette, France Observatorio Astronómico Nacional (IGN), c/ Alfonso XII 3, 28014 Madrid, Spain. Laboratoire d'astrophysique de Bordeaux, Univ. Bordeaux, CNRS, B18N, allée Geoffroy Saint-Hilaire, 33615 Pessac, France Institut des Sciences Moléculaires (ISM), CNRS, Univ. Bordeaux, 351 cours de la Libération, F-33400, Talence, France Sorbonne Université, Observatoire de Paris, Univerité PSL, CNRS, LERMA, 92190 Meudon, France Centre for Astrochemical Studies, Max-Planck-Institute for Extraterrestrial Physics,Giessenbachstrasse 1, 85748, Garching, Germany Institut de Planétologie et d'Astrophysique de Grenoble (IPAG), Université Grenoble Alpes, CNRS, F-38000 Grenoble, France Institut de Radioastronomie Millimétrique (IRAM), 300 Rue de la Piscine, F-38406 Saint-Martin d'Hères, France Carbon monosulphide (CS) is among a few sulphur-bearing species that has been widely observed in all environments, including the most extreme ones such as diffuse clouds. Moreover, it has been widely used as a tracer of the gas density in the interstellar medium in our Galaxy and external galaxies. Therefore, the full understanding of its chemistry in all environments is of paramount importance for the study of the interstellar matter. Our group is revising the rates of the main formation and destruction mechanisms of CS. In particular, we focus on those which involve open-shell species for which the classical capture model might not be accurate enough. In this paper, we revise the rates of reactions CH + S → CS + H and C_2 + S → CS + C. These reactions are important CS formation routes in some environments such as dark and diffuse warm gas. We performed ab initio calculations to characterize the main features of all the electronic states correlating to the open shell reactants. For CH+S we have calculated the full potential energy surfaces (PES) for the lowest doublet states and the reaction rate constant with a quasi-classical method. For C_2+S, the reaction can only take place through the three lower triplet states, which all present deep insertion wells. A detailed study of the long-range interactions for these triplet states allowed to apply a statistic adiabatic method to determine the rate constants. Our detailed theoretical study of the CH + S → CS + H reaction shows that its rate is nearly independent on the temperature in a range of 10- 500 K with an almost constant value of 5.5 × 10^-11 cm^3 s^-1 at temperatures above 100 K. This is a factor ∼ 2-3 lower than the value obtained with the capture model. The rate of the reaction C_2 + S → CS + C does depend on the temperature taking values close to 2.0 × 10^-10 cm^3 s^-1 at low temperatures and increasing to ∼ 5.0 × 10^-10 cm^3 s^-1 for temperatures higher than 200 K. In this case, our detailed modeling taking into account the electronic and spin states provides a rate higher than the one currently used by factor of ∼2. 
These reactions were selected for involving open-shell species with many degenerate electronic states, and, unexpectedly, the results obtained in the present detailed calculations provide values which differ a factor of ∼2-3 from the simpler classical capture method. We have updated the sulphur network with these new rates and compare our results in the prototypical case of TMC1 (CP). We find a reasonable agreement between model predictions and observations with a sulphur depletion factor of 20 relative to the sulphur cosmic abundance. However, it is not possible to fit all sulphur-bearing molecules better than a factor of 10 at the same chemical time. Gas phase Elemental abundances in Molecular cloudS (GEMS) Carlos M. R. Rocha1 Octavio Roncero2 Niyazi Bulut3 Piotr Zuchowski4 David Navarro-Almaida5 Asunción Fuente6 Valentine Wakelam7 Jean-Christophe Loison8 Evelyne Roueff9 Javier R. Goicoechea2 Gisela Esplugues6 Leire Beitia-Antero6 Paola Caselli10 Valerio Lattanzi10 Jaime Pineda10 Romane Le Gal11,12 Marina Rodríguez-Baras6 Pablo Riviere-Marichalar6 =========================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Astrochemistry has become a necessary tool for understanding the interstellar medium of our Galaxy and external galaxies. Nowadays, we are aware of the existence of nearly 300 molecules in the interstellar and circumstellar medium, as well as of around 70 molecules in external galaxies (for a complete list, see the Cologne Database for Molecular Spectroscopy[https://cdms.astro.uni-koeln.de/]). Although the sulphur cosmic elemental abundance is only ten times lower than that of carbon (S/H≈1.5 × 10^-5), only 33 out of the currently detected interstellar molecules contain sulphur atoms. This apparent lack of chemical diversity in astrophysical sulphur-bearing molecules is the consequence of a greater problem in astrochemistry: there is an unexpected paucity of sulphur-bearing species in dense molecular clouds and star-forming regions. In such dense regions, the sum of the observed gas-phase abundances of sulphur-bearing species (the most abundant are SO, SO_2, H_2S, CS, HCS^+, H_2CS, C_2S, C_3S, and NS ) constitutes only <1% of the expected amount <cit.> . One could think that most of the sulphur is locked on the icy grain mantles, but a similar trend is encountered within the solid phase, where s-OCS ("s-" indicates that the molecule is in the solid phase) and s-SO_2 are the only sulphur-bearing species detected thus far <cit.>, and only upper limits to the s-H_2S abundance have been derived <cit.>. Recent observations with James Webb Space Telescope (JWST) did not detect s-H_2S either <cit.>. According to these data, the abundances of the observed icy species account for < 5% of the total expected sulphur abundance. This means that 94% of the sulphur is missing in our counting. It has been suggested that this missing sulphur may be locked in hitherto undetected reservoirs in gas and icy grain mantles, or as refractory material <cit.>. In particular, laboratory experiments and theoretical work shows that sulphur allotropes, such as S_8, could be an important refractory reservoir <cit.>. 
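The sulphur bookkeeping quoted above is simple enough to make explicit. A minimal sketch of the budget, using the cosmic abundance and the rough observational fractions cited in the text (the per-phase fractions are the quoted upper limits, not per-species measurements):

```python
# Rough sulphur budget for dense clouds, using the numbers quoted in the text:
# S/H ~ 1.5e-5, observed gas-phase S-bearing molecules < 1% of that, and
# detected ices (s-OCS, s-SO2, plus upper limits on s-H2S) < 5%.
S_cosmic = 1.5e-5                 # sulphur cosmic abundance relative to H
gas_frac, ice_frac = 0.01, 0.05   # upper limits quoted in the text
missing = 1.0 - gas_frac - ice_frac
print(f"unaccounted sulphur fraction: {missing:.0%}")   # ~94%
```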
Gas phase Elemental abundances in Molecular CloudS (GEMS) is an IRAM 30m Large Program designed to estimate the S, C, N, and O depletions and the gas ionization fraction as a function of visual extinction in a selected set of prototypical star-forming filaments in low-mass (Taurus), intermediate-mass (Perseus), and high-mass (Orion) star forming regions <cit.>. Determining sulphur depletion is probably the most challenging goal of this project. The direct observation of the potential main sulphur reservoirs (s-H_2S, s-OCS, gas-phase atomic S) remains difficult even in the JWST era. Therefore, the sulphur elemental abundance needs to be estimated by comparing the observed abundances of rarer species such as CS, SO, HCS^+, H_2S, SO_2, and H_2CS, with the predictions of complex gas-grain chemical models. As a consequence, the sulphur chemistry in cold dark clouds remains a puzzling problem. The development of accurate and complete sulphur chemical networks is therefore a requisite to disentangle the sulphur elemental abundance. Within the context of the GEMS project, we have carried out a large theoretical effort to improve the accuracy of key reaction rates of the sulphur chemical network, with a special interest in those associated with the formation and destruction paths of SO and CS, which are the gas-phase sulphur-bearing species observed in the widest variety of environments. We estimated the rates of the reactions S + O_2 → SO + O <cit.> and SO + OH → SO_2 + H <cit.> at the low temperatures prevailing in dark clouds. These reactions drive the SO chemistry in these cold environments. <cit.> estimated the rate constant of the CS + O → CO + S reaction, which had been proposed as an efficient CS destruction mechanism in molecular clouds. In this work we study two formation reactions of CS, which are thought to be important in regions with low ionization fraction: CH(^2Π)+S(^3P) and C_2(^1Σ_g^+)+ S(^3P). In both cases the two reactants are radicals presenting several degenerate or quasi-degenerate electronic states: for CH + S there are 36 degenerate states, and for C_2+S the first excited C_2(^3Π_u) states are only 0.089 eV above the C_2(^1Σ_g^+) ground state. This makes the experimental determination of their rates difficult, because of the low densities at which two radical species can be produced and because of the possibility of self-reactions. Thus, an accurate theoretical determination of the reaction rate constants is desirable to properly bound the abundance of CS in chemical models. The reaction rates currently available for these two reactions were obtained with a classical capture method <cit.>. Dealing with open-shell systems, there are several degenerate electronic states for the reactants, not all of them leading to the desired product, and the crossings among them originate barriers. All these effects are ignored in the classical capture method. Precise calculations like the ones in this article are needed to improve the accuracy of our chemical networks and to estimate the uncertainties associated with the less demanding classical capture methods. § CH+S REACTION The reaction CH(^2Π)+S(^3P) → CS(X^1Σ^+,a^3Π) + H (a) → SH(X^2Π) + C(^3P) (b) presents several rearrangement channels (CS and SH products) with several electronic states in each case. There are 36 degenerate states (neglecting spin-orbit couplings) in the CH(^2Π)+S(^3P) entrance channel.
The same number of degenerate states are in the SH(X^2Π) + C(^3P) rearrangement channel, which is only ≈ 0.08 eV below CH(^2Π)+S(^3P). The reaction SH(X^2Π) + C(^3P)→ CS(X^1Σ^+,a^3Π) + H has already been studied theoretically for the ground ^2A' state <cit.> and for the excited ^2A” state <cit.>. The 36 degenerate electronic states (neglecting spin-orbit couplings) involved in Eq. (<ref>.a) consist of 6 spin states times 6 orbital states. The spin states are quartet and doublets, while the orbital states are splitted in A' and A” states, i.e., symmetric or antisymmetric with respect to the inversion through the plane of the molecule. Of all these 36 states, only a doublet (i.e. 2 states) correlates to the CS(X^1Σ^+) states, which correspond to the ground adiabatic ^2A' states. Reaction  (<ref>.a) towards the X^1Σ^+ is exothermic by ≈ 3.5 eV. However, the excited CS(a^3Π) is about 3.42 eV above CS(X^1Σ^+), and the reaction is nearly thermoneutral. §.§ Ab initio calculations To describe the electronic correlation along the reaction, we use here the internally contracted multi-reference configuration interaction (ic-MRCI) method <cit.> including the Davidson correction (hereafter called MRCI+Q) <cit.> and the calculations are performed with the MOLPRO suite of programs <cit.>. Three electronic states are calculated for each symmetry, 3 ^2A' and 3 ^2A”, and the same for the quartet states. In these calculations, the molecular orbitals are optimized using a state-averaged complete active space self-consistent field (SA-CASSCF) method, with an active space of 10 orbitals (7 and 3 of a' and a” symmetry, respectively). Five ^2,4A' and four ^2,4A” electronic states are calculated and simultaneously optimized at CASSCF level, using a dynamical weighting factor of 10. In all these calculations the aug-cc-pVTZ (aVTZ) basis set is used <cit.>. For the ic-MRCI calculations, 6 orbitals are kept doubly occupied, giving rise to ≈ 5 × 10^6 (382 × 10^6) contracted (uncontracted) configurations. Checks made with the aug-cc-pVQZ basis gave nearly parallel results along some of the minimum energy paths (MEPs) shown below. For this reason, we kept the aVTZ basis set to build the PES. Nevertheless, to better describe the long range part, we shall use a AV5Z basis, as discussed below. In Fig. <ref> the first 5 A' and 4 A” states for doublet (bottom) and quartet (top) multiplicities are shown, calculated at CASSCF level. With no spin-orbit couplings, the six electronic states for each multiplicity are degenerate for long distances between S(^3P) and CH(X^2Π), and this asymptote is taken as the origin of energy. When they approach, they cross with the excited states correlating to S(^3P)+CH(a^4Σ^-). After the crossing, there are three (two) curves for the doublet (quartet) states that become negative. For the doublets, one of these states correlates to CS(X^1Σ^+) and two, nearly degenerate, to CS(a^3Π_r) states. This degeneracy is recovered in the MRCI+Q calculations shown in Fig. <ref>. The two degenerate states, 1^2A' and 1^2A” (and 1^4A' and 1^4A” for quartets), correspond to the HCS(^2Π) radical (and the excited HCS(^4Π)) state, which experience Renner-Teller effects, as studied by <cit.>. In the case of doublets, the X^2A' state (of ^2Π character at collinear geometry) crosses with 2^2A' (of ^2Σ character at collinear geometry), which correlate with the CS(X^1Σ^+) state of the products. The crossing at collinear geometry is a conical intersection, between Σ and Π states. 
As long as the system bends, there are couplings and the crossing is avoided. As a consequence, the ground X^2A' state correlates to the CS(X^1Σ^+) products, i.e. only these 2 doublet states among the 36 states correlating to the CH(^2Π)+S(^3P) reactants do so. Therefore, only the ground electronic state is needed to describe reaction (<ref>). The energy difference between CH(^2Π)+S(^3P) and CS(X^1Σ^+) + H, at their corresponding equilibrium geometries, is D_e = 3.52 eV, and D_0 = 3.62 eV when including zero-point energy (using the fit described below). Since there are no direct measurements for this reaction, the experimental exothermicity is estimated from the dissociation energies, D_0=D^CH_0 - D^CS_0 = 3.89 eV, about 10% higher than the present result, using the values reported by <cit.>. §.§ Long range interaction MRCI calculations are not size consistent and the Davidson correction (+Q) is not adequate to describe close lying electronic states. Therefore, the MRCI+Q method introduces inaccuracies in the long range region, which needs to be described accurately to obtain good rate constants at low temperatures. Coupled Cluster methods are size consistent and are typically considered as a holy grail to describe long range interactions, but in the presence of two open-shell reactants they are not expected to yield good accuracy either. As an alternative, here we use the CASSCF method with a larger aV5Z basis set <cit.>. Several points have been calculated in Jacobi coordinates, for CH(r_CH) = 1.1199 Å and R= 10-50 Å, where R is the distance between the CH center-of-mass and the S atom, as a function of the angle θ, with cosθ= R· r_CH/ (R r_CH). At long distances, the system behaves as a dipole-quadrupole interaction, with the dipole of CH interacting with the quadrupole of a P sulphur atom, whose analytical form is <cit.> V_LR(R,θ)= M_QD(θ) Q_A d_B / R^4 + M_QQ(θ) Q_A Q_B / R^5, with M_QD(θ) and M_QQ(θ) being 3×3 matrices depending on Legendre polynomials <cit.>. The eigenvalues of this matrix properly describe the angular dependence of the adiabatic states. Thus, only 2 effective parameters are needed to fit the ab initio points, Q_A d_B = 0.293 hartree Å^4 and Q_A Q_B = 0.092 hartree Å^5. Two families of three states are considered separately, (1^2A”, 1^2A', 2^2A”) and (2^2A', 3^2A', 3^2A”), corresponding to the two Π states of CH interacting with the three P states of sulphur. The excellent agreement between calculated and fitted analytical expressions, as shown in Fig. <ref>, demonstrates the adequacy of the analytical fit in the asymptotic region. §.§ Analytical fit of the ground state The analytical representation of the potential energy surface of the ground 1^2A' electronic state is described by two terms, V(r_CH,r_CS,r_SH)= E_g^FF(r_CH,r_CS,r_SH) + V^3B(r_CH,r_CS,r_SH), where V^3B is the three-body term added to the zero-order description provided by the lowest root, E_g^FF(r_CH,r_CS,r_SH), of the 3×3 reactive force-field matrix defined as <cit.> H^FF = ([ V_CH + W^1_CS + W^1_SH + V_LR, V_12, V_13; V_12, V_CS + W^2_CH + W^2_SH, V_23; V_13, V_23, V_SH + W^3_CH + W^3_CS ]). The diagonal terms describe each of the rearrangement channels, in which V_CH(r_CH), V_CS(r_CS) and V_SH(r_SH) are fitted using the diatomic terms of <cit.>. In the reactant channel (channel 1), the long-range term given by Eq. <ref> is included. The W_AB are Morse potentials whose parameters are determined to describe each channel independently.
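To make the structure of E_g^FF concrete, the following sketch builds a 3×3 matrix of this type and takes its lowest eigenvalue, anticipating the Gaussian off-diagonal couplings described next. The Morse parameters and the coupling strength are illustrative placeholders, not the fitted values of the actual PES.

```python
import numpy as np

def morse(r, De, a, re):
    """Simple Morse potential, zero at the dissociation limit (energies in eV)."""
    return De * ((1.0 - np.exp(-a * (r - re)))**2 - 1.0)

def lowest_ff_root(r_CH, r_CS, r_SH, alpha=2.0):
    """Lowest eigenvalue of a 3x3 reactive force-field matrix of the type
    described above.  All parameters are illustrative placeholders."""
    H = np.zeros((3, 3))
    # diagonal: each rearrangement channel = its diatomic term + Morse terms for the others
    H[0, 0] = morse(r_CH, 3.6, 2.0, 1.12) + morse(r_CS, 0.3, 1.0, 3.0) + morse(r_SH, 0.3, 1.0, 3.0)
    H[1, 1] = morse(r_CS, 7.4, 1.8, 1.57) + morse(r_CH, 0.3, 1.0, 3.0) + morse(r_SH, 0.3, 1.0, 3.0)
    H[2, 2] = morse(r_SH, 3.6, 1.9, 1.34) + morse(r_CH, 0.3, 1.0, 3.0) + morse(r_CS, 0.3, 1.0, 3.0)
    # off-diagonal: Gaussian couplings in the energy gap between the diagonal terms
    for i in range(3):
        for j in range(i + 1, 3):
            H[i, j] = H[j, i] = -0.5 * np.exp(-alpha * (H[i, i] - H[j, j])**2)
    return np.linalg.eigvalsh(H)[0]
```

Building the diagonal from channel-wise diatomic terms guarantees the correct asymptotic diatomic limits in each rearrangement channel, while the Gaussian off-diagonal terms switch smoothly between channels where their diagonal energies cross.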
Finally, the non-diagonal terms V_ij are essentially built as Gaussian functions exp(-α (H^FF_ii -H^FF_jj)^2), depending on the energy difference between the diagonal terms of the corresponding force-field matrix. The parameter α is determined to fit the transition between rearrangements. The three-body term, V^3B in Eq. (<ref>), is described with the method of <cit.> using a modification of the program GFIT3C <cit.>. In the fit, about 7500 ab initio points, calculated at MRCI+Q level, have been included. These points mainly consist of a grid of 22 points for 0.6 ≤ r_CH≤ 8 Å, 27 points for 1≤ r_CS≤ 8 Å, and 19 points in the θ_HCS angle, in intervals of 10^o. Points with energies up to 1 eV above the entrance CH+S channel at 50 Å have been given a weight of 1, while higher energies are weighted with a Gaussian function, with a minimum weight of 10^-4. The final fit uses polynomials up to order 10, with an overall root-mean-square error of 0.07 eV. The main features of the potential fit are shown in Fig. <ref>, where the contour plots for the reactants (bottom panel, for R_CH=1.1199 Å) and products (top panel, for R_CS=1.568 Å, and shifted by 3.526 eV, the exoergicity of the reaction) are shown. The CH+S reactant channel (bottom panel) is attractive for R_CS < 7 Å, leading to the products channel at R_CS≈ 1.5 Å, with an energy of -5.5 eV. These energies correspond to the CS-H well in the products channel, which is about 2 eV below the CS + H products, shown in the top panel of Fig. <ref>, shifted by 3.526 eV to show the details. At R_CH≈ 2 Å, there is a barrier which arises from the curve crossing discussed above. §.§ Quasi-Classical versus Quantum wave packet dynamics in the 1^2A' state To check the validity of the quasi-classical method, we first compare quantum and quasi-classical trajectory (QCT) calculations for total angular momentum J=0. The quantum wave packet (WP) calculations are performed with the MADWAVE3 code <cit.> and the parameters used are listed in Table <ref>. The WP method is considered numerically exact, but it is very demanding computationally. The QCT calculations are performed with the MDwQT code <cit.>. Initial conditions are sampled with the usual Monte Carlo method <cit.>. In this first set of calculations CH is in its ground vibrational (v) and rotational state, (v,j)=(0,0), and the initial internuclear distance and velocity distributions are obtained with the adiabatic switching method <cit.>. An impact parameter b=0 is considered, which corresponds to J=0. The initial distance between sulphur and the CH center of mass is set to 50 Bohr, and the trajectories are stopped when any internuclear distance is longer than 60 Bohr. For each energy, N_tot= 10^4 trajectories are run to calculate the reaction probability as P_R(E)= N_r /N_tot, where N_r is the number of reactive trajectories. The WP and QCT reaction probabilities are compared in Fig. <ref>, and the two methods show a very similar behavior: a reaction probability slightly larger than 0.9 at energies below 0.01 eV, decreasing as a function of collision energy, down to a probability lower than 0.2 at 1 eV. In both cases there are oscillations, which do not match perfectly but show similar envelopes. Since these oscillations are expected to wash out when considering the partial wave summation over total angular momentum, J, we consider this agreement satisfactory.
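Since P_R(E) is a simple Monte Carlo ratio, its statistical uncertainty follows from binomial counting. A minimal sketch (the trajectory counts below are illustrative, not the actual MDwQT output):

```python
import numpy as np

def reaction_probability(n_reactive, n_total):
    """QCT reaction probability P_R = N_r / N_tot with a simple binomial
    (1-sigma) Monte Carlo error estimate."""
    p = n_reactive / n_total
    return p, np.sqrt(p * (1.0 - p) / n_total)

# e.g. 9150 reactive trajectories out of 10^4 (illustrative counts) give
# P_R = 0.915 +/- 0.003, a statistical error small compared with P_R itself
p, dp = reaction_probability(9150, 10_000)
print(p, dp)
```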
This comparison leads us to conclude that the QCT method is sufficiently accurate to determine the reaction rate constant, as described below. The reaction rate constant for CH(v=0 and 1) in the 1^2A' electronic state is evaluated according to K_v^1^2A'(T) = √(8 k_B T/πμ) π b_max^2(T) P_r(T), where b_max(T) and P_r(T) are the maximum impact parameter and reaction probability at constant temperature, respectively. In this case, about 10^5 trajectories are run for each temperature, using the same temperature for the translational and rotational degrees of freedom, and fixing the vibrational state of CH to v=0 or 1. §.§ Thermal rate Considering that only the doubly degenerate 1^2A' electronic state reacts to form CS(X^1Σ^+), the electronic partition function has to be considered. Neglecting the spin-orbit splitting, the electronic partition function would be 2/36, i.e., the thermal rate constant is about 1/18 of the reaction rate, K^1^2A', associated with the 1^2A' state. Including the spin-orbit splitting of the S(^3P_J_S) sulphur atom, and assuming that only the lowest two spin-orbit states react (having an individual rate constant equal to K^1^2A'), the vibrationally selected thermal rate constant is given by K_v(T) = 2 K_v^1^2A'(T) / ( 4 [ 5 + 3 exp(-539.83/T) + exp(-825.34/T) ] ), and it is shown in Fig. <ref>. In this figure the red line corresponds to the rate constant (1.4 × 10^-10 cm^3s^-1) from the KIDA data base, obtained with a classical capture model <cit.>, using analytical formulas <cit.> with dipole moments and polarizabilities taken from the literature or calculated using Density Functional theory. The K_v=0^1^2A'(T)/18 rate constant at 10 K is about 4 × 10^-11 cm^3/s, increasing to a nearly constant value of ≈ 5.5 × 10^-11 cm^3/s at temperatures above 100 K. This is a factor between 2 and 3 lower than the value obtained with the capture model <cit.>. When including the spin-orbit splitting, the rate K_v=0(T) is larger at 10 K, simply because the populations of the excited sulphur spin-orbit states, J_S= 1 and 0, are negligible. As temperature increases, their populations increase, leading to a reduction of the rate constant K_v=0(T), which decreases, tending to K^1^2A'(T)/18 at high temperature. The rate with spin-orbit splittings, K_v=0(T), is only about a factor of 2 smaller than the KIDA value at 10 K, and is considered the most accurate obtained here. The K_v=0(T) rate constant has been fitted and the parameters are listed in Table <ref>. In Fig. <ref>, K_v=1^1^2A'(T)/18 is also shown. Its contribution at low temperatures is small, because the vibrational energy of CH(v=1) is about 0.337 eV higher than that of v=0, i.e. 3916 K. Therefore, K_v=0(T) is a good approximation to the thermal rate constant, which includes the spin-orbit splitting. § C_2+S REACTION The reaction C_2(X^1Σ_g^+)+S(^3P)→CS(X^1Σ^+)+C(^3P) is exothermic by ∼1 eV <cit.>. Due to the open-shell nature of the involved atoms [S(^3P) and C(^3P)], reactant collision and product formation can take place adiabatically on three triplet CCS potential energy surfaces (2^3A”+1^3A'); see, e.g., Figures <ref> and <ref>. To the best of our knowledge, no dedicated theoretical studies are yet available on this reaction. <cit.> reported a theoretical upper limit for its rate coefficient (k ∼ 2 × 10^-10 cm^3 molecule^-1 s^-1 at 10 K) using classical capture rate theory.
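For reference, the order of magnitude of such classical capture estimates can be reproduced schematically. For an isotropic long-range attraction -C_6/R^6 the capture cross section is σ_cap(E) = 3π (C_6/4E)^1/3, and thermal averaging gives the capture rate. The sketch below is generic (the C_6 value is a placeholder) and is not the actual KIDA calculation, which uses dipole moments and polarizabilities as described above.

```python
import numpy as np
from scipy.special import gamma

def capture_rate(T, C6_au, mu_amu):
    """Thermally averaged classical capture rate for an isotropic -C6/R^6
    attraction, k(T) = 3*pi*(C6/4)^(1/3) * Gamma(5/3) * sqrt(8 kT/(pi mu)) * (kT)^(-1/3),
    returned in cm^3 s^-1.  C6 (atomic units) is an illustrative placeholder."""
    hartree, bohr = 4.3597447e-18, 5.29177e-11      # J, m
    amu, kB = 1.66053907e-27, 1.380649e-23          # kg, J/K
    C6 = C6_au * hartree * bohr**6                  # J m^6
    mu, kT = mu_amu * amu, kB * T
    k_si = (3.0 * np.pi * (C6 / 4.0)**(1.0 / 3.0) * gamma(5.0 / 3.0)
            * np.sqrt(8.0 * kT / (np.pi * mu)) * kT**(-1.0 / 3.0))   # m^3/s
    return k_si * 1e6

# C2 + S reduced mass ~ 13.7 amu; a C6 of order 100 a.u. gives k of order 1e-10 cm^3/s at 10 K
print(capture_rate(10.0, 100.0, 24.0 * 32.0 / 56.0))
```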
From an experimental viewpoint, presently available techniques for the production of reactant dicarbon often generate a mixture of both C_2(X^1Σ_g^+) and C_2(a^3Π_u) <cit.> (recall that this electronically excited state lies only ∼716 cm^-1 above the ground X form) which, together with the expected high reactivity of C and S atoms, make the laboratory characterization of this specific reaction (<ref>) extremely cumbersome. §.§ Ab initio calculations The methodology employed to obtain optimized energy paths for the C_2+S→CS+C reaction closely resembles the one utilized for the CH+S system. Preliminary PES explorations and geometry optimizations were all performed at the SA-CASSCF level of theory, followed by single-point MRCI+Q calculations. The CASSCF active space involves a total of 14 correlated electrons in 12 active orbitals (9a'+3a”). For each multiplicity considered (triplet, singlet and quintet), five A” and four A' electronic states were simultaneously treated in the SA-CASSCF wave functions. The aVXZ (X=T,Q) basis sets of Dunning and co-workers <cit.> were employed throughout, with the calculations done with MOLPRO. Our calculated CASSCF/aVTZ optimized path for reaction (<ref>) is shown at the bottom panel of Figure <ref>. As seen, the C_2(X^1Σ_g^+)+S(^3P) reactants collision involves only triplet CCS PESs and can happen on two ^3A” and one ^3A' electronic states. Proceeding through the ground-state PES of CCS(1^3A”), reaction (<ref>) does not encounter any activation barriers for collinear atom-diatom approaches, being exothermic by ∼0.6 at CASSCF/aVTZ level. Note that this process occurs via the formation of a strongly-bound intermediate complex corresponding to the linear global minimum of CCS, ℓ-CCS(X^3Σ^-) <cit.>; from this structure, the ground-state CS(X^1Σ^+)+C(^3P) products can be directly accessed, without an exit barrier. A close look at Figure <ref> (bottom panel) also reveals that the excited 2^3A” and 1^3A' electronic states are degenerate along C_∞ v atom-diatom collisions. These PESs form the Renner-Teller components of the strongly-bound ℓ-CCS(A^3Π) complex, showing a conical intersection with ℓ-CCS(X^3Σ^-) at R_CS-R_CC≈+0.5 Å <cit.>. Differently from the ground 1^3A” state, the conversion from reactants to products as proceeding adiabatically through the 2^3A” and 1^3A' PESs entails a large activation barrier (≈1 at CASSCF/aVTZ level) which is located at R_CS-R_CC≈-0.5 Å; see Figure <ref>. As shown, this region of the nuclear configuration space is extremely congested by the existence of several low-lying excited triplet states correlating with C_2(a^3Π_u)+S(^3P). Note that the C_2(a^3Π_u)+S(^3P) reactants can approach each other in six triplet (3^3A”+3^3A'), six singlet (3^1A”+3^1A') and six quintet (3^5A”+3^5A') electronic states. For completeness, their corresponding optimized reaction paths towards CS+C formation are also plotted in Figure <ref>; see bottom, middle and top panels therein. Accordingly, when proceeding adiabatically, the reactions involving C_2(a^3Π_u)+S(^3P) are all endothermic, leading ultimately to excited-state CS+C products. So, they are expected to be highly inefficient at the low temperature regimes here envisaged (unless non-adiabatic transitions play a role) and, for this reason, will not be considered further in this work. 
Keeping our focus now on the target C_2(X^1Σ_g^+)+S(^3P)→CS(X^1Σ^+)+C(^3P) process [Reaction (<ref>)], and to better estimate its overall attributes, we have performed MRCI+Q/aVQZ//CASSCF/aVQZ calculations along the underlying reaction paths on the 1^3A”, 2^3A” and 1^3A' electronic states. The results are plotted in Figure <ref>. Accordingly, at this level of theory, our best estimate for the exothermicity of reaction (<ref>) is 1.05 eV (without zero-point energy), a value that matches nearly perfectly the corresponding experimental estimate of 1.04 eV <cit.>. The stabilization energies of the ℓ-CCS(X^3Σ^-) and ℓ-CCS(A^3Π) complexes are herein predicted to be -5.3 and -4.2 eV, respectively, relative to the C_2(X^1Σ_g^+)+S(^3P) reactant channel. Most notably, Figure <ref> (bottom panel) shows that, at MRCI+Q/aVQZ//CASSCF/aVQZ level, the predicted activation barriers along linear C_∞ v paths for the 2^3A” and 1^3A' states are largely reduced with respect to the CASSCF/aVTZ values (Figure <ref>; bottom panel), going from ≈1 eV to less than 0.1 eV. Indeed, by allowing the valence C–C–S angle (γ_CCS) to also be freely optimized in the MRCI+Q/aVQZ//CASSCF/aVQZ calculations, Figure <ref> (top panel) shows unequivocally that such barriers actually become submerged, thus lying below the corresponding C_2(X^1Σ_g^+)+S(^3P) reactant channel; note therein the existence of small discontinuities on the ab initio curves, which are associated with abrupt changes in γ_CCS near the top of these barriers. This clearly indicates that all such PESs (1^3A”, 2^3A” and 1^3A') contribute to the overall dynamics/kinetics of reaction (<ref>), even at low temperatures. In the following, we describe the methodology employed to obtain rate coefficients for this target reaction. §.§ Long range interactions Restricting the calculations to the equilibrium distance of C_2(^1Σ_g^+) allows us to consider C_2(^1Σ_g^+) and C_2(^3Π) separately. Under this approximation, the situation simplifies to a closed-shell diatom plus an open-shell atom for the two asymptotes: C_2+S and CS+C. For both cases we adopt the Jacobi coordinate system and an expansion in terms of orthogonal functions as defined, e.g., in <cit.>: V(R,θ,ϕ,θ_a,ϕ_a)= ∑_λλ_a μ V_λλ_a μ(R) C_λμ(θ ,ϕ) C_λ_a, -μ (θ_a ϕ_a), where C_λμ(θ, ϕ) = [4 π/(2λ +1)]^1/2 Y_λμ(θ, ϕ) are spherical harmonics in Racah normalization. The θ,ϕ angles describe the orientation of the diatomic molecule with respect to the vector connecting the center-of-mass of the diatomic molecule with the atom in the Jacobi frame, while the θ_a,ϕ_a angles describe the orientation of the doubly occupied p orbital of the S or C atom, respectively. R is the distance between the center-of-mass of the diatom and the atom. For both systems in C_s symmetry the solution of the electronic Schrödinger equation is hard: single-reference methods cannot be used, because two nearly degenerate solutions always belong to the same irreducible representation. The situation is simpler for symmetric configurations: C_∞ v (linear) for both systems, and C_2v (T-shaped) for C_2+S. For these cases the atom+diatom states belong to distinct irreducible representations: for the linear geometry they are the Π and Σ^- states, while for C_2v these symmetries are B_1, B_2 and A_2. For these particular configurations of the CCS system one can therefore use the single-reference gold-standard CCSD(T) method to calculate the interaction energy in each symmetry <cit.>.
Using linear and T-shape geometries we have calculated UCCSD(T) interaction energies in the range of 6-30 Å and converted it to V_λλ_r μ(R) potentials using the following formula for C_2+S V_000 = (2V_Π + V_Σ + 2V_B_1 + 2V_B_2 + 2 V_A_2)/9 V_020 = -2( V_Π - V_Σ + V_B_1 + V_B_2 - 2 V_A_2)/9 V_200 = 2(2V_Π + V_Σ - 2V_B_1 - V_B_2 - V_A_2)/9 V_220 = -2(2V_Π - 2 V_Σ - V_B_1 - V_B_2 + 2 V_A_2)/9 and CS+C V_000 = (2V_Π,0 + V_Σ,0 + 2V_Π,180 + V_Σ,180 )/6 V_100 = (2V_Π,0 + V_Σ,0 - (2V_Π,180 + V_Σ,180 ))/6 V_020 = ( V_Σ,0 - V_Π,0 + V_Σ,180 - V_Π,180 )/3 V_120 = ( V_Σ,0 - V_Π,0 - (V_Σ,180 - V_Π,180 ))/3 where in the latter equation V_Π,0/V_Σ,0 denote potential for appropriate symmetry in C-S-C configuration and V_Π,180/V_Σ,180 in C-C-S alignment. The above equations can be obtained by calculating Eq. <ref> for θ,ϕ,θ_a,ϕ_a corresponding to symmetric configurations <cit.>. Then, the V_λλ_r μ(R) potentials were carefully fitted to analytical form to get inverse power expansion form. It is important to realize, that collision-induced rotations of CS molecules are driven directly by V_100 and V_120 terms, while for C_2 molecules colliding with atoms such terms are V_200 and V_220. When the atoms are assumed to be spherically symmetric, the terms V_120 and V_220 can be ignored. Thus, from now on we will skip the dependence on θ_a,ϕ_a in the Eq. <ref>. Thus our model of the potential used for the statistical method includes only isotropic term V_000 and leading anisotropies V_100 and V_200 which were fitted to analytical forms of van der Waals expansion: ∑_i=0^3 C_6+i R^-(6+i) for V_000 and V_200, and ∑_i=0^3 C_7+i R^-(7+i) for V_100. These potentials can be viewed as averaged over all orientations of P-state atoms. Moreover, for the leading coefficients we also performed similar calculations using open-shell Symmetry Adapted Perturbation theory <cit.> and confirmed the values for the leading coefficients C_6 and C_7 of the reactants and products (they agreed to within 10%). The final analytical form of the potential used for C_2+S reads (in atomic units of distance and energy) V(R,θ) = A R^-6 +B R^-8 + C R^-10 + 1/2(3 cos^2(θ) -1) ( D R^-6 + E R^-8 + F R^-10 ) with A=-125.8 , B=-9.444× 10^3,C=1.743× 10^5, D=-13.66 , E=-2.848× 10^3 and F=- 1.929× 10^5 while for CS+C V(R,θ) = A R^-6 +B R^-8 + C R^-10 + D R^-12 + cos( θ) R^-7 ( E + F R^-2 + G R^-4 +H R^-6) with A= - 147.5, B=- 207.3, C=- 3.834× 10^6, D=-2.805 × 10^8, E=- 487.3, F=-4.309× 10^3, G=-2.394× 10^7 and H=- 1.997× 10^9. §.§ Rate constant calculations Due to the deep well appearing for the three lowest adiabatic states of C_2+ S reaction, and the large masses of the three atoms involved, this system has a high density of resonances near the thresholds. Therefore, this reaction is expected to be governed by a statistical mechanism. In this work we shall use the Adiabatic Statistical (AS) method <cit.>, using the AZTICC code recently implemented <cit.>. Here, we shall use the rigid rotor approach, similar to the method already used to treat several reactions and inelastic processes <cit.>. We consider the experimental exoergic values of D_0=D^C_2_0 - D^CS_0 = 1.14 eV <cit.>, which already includes the vibrational zero-point energy (ZPE) of reactants and products. This exothermicity is very close to that shown in the ab initio calculations, of ≈ 1. eV in Fig. <ref>, which does not include ZPE. 
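The analytical long-range forms quoted above are easy to evaluate directly. A minimal sketch for the C_2+S case, using the coefficients listed in the text (atomic units, so R in bohr and V in hartree); the sample distance and angles are arbitrary:

```python
import numpy as np

def v_longrange_C2S(R, theta):
    """Long-range C2(X) + S potential in the analytic form quoted above,
    V = A/R^6 + B/R^8 + C/R^10 + P2(cos theta) * (D/R^6 + E/R^8 + F/R^10),
    with R in bohr and V in hartree, using the coefficients listed in the text."""
    A, B, C = -125.8, -9.444e3, 1.743e5
    D, E, F = -13.66, -2.848e3, -1.929e5
    p2 = 0.5 * (3.0 * np.cos(theta)**2 - 1.0)
    iso = A / R**6 + B / R**8 + C / R**10
    aniso = D / R**6 + E / R**8 + F / R**10
    return iso + p2 * aniso

# e.g. at R = 15 bohr, compare collinear and T-shaped approaches
print(v_longrange_C2S(15.0, 0.0), v_longrange_C2S(15.0, np.pi / 2))
```

The same structure, with the additional odd R^-7 anisotropy, applies to the CS+C channel.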
In the rigid rotor approach, the internuclear distances of the diatomic molecules are frozen at their equilibrium values, r_e= 1.2425 and 1.568 Å for C_2 and CS, respectively. Since C_2 is homonuclear, the even and odd rotational channels are not coupled, and are treated independently. The large exothermicity requires the inclusion of vibrational levels for the CS products, up to v=15, to adequately describe the product density of states. To this end, the adiabatic potentials are generated for each final v independently, using the rigid rotor approximation at r_eq with the same long-range potential but shifting the energy using the anharmonic vibrational constants ω_e= 0.15933 eV and ω_eχ_e= 0.801 meV from <cit.>. All product ro-vibrational levels are then considered to obtain the square of the S-matrix at each total angular momentum J (see Ref. <cit.> for more details). The calculations are done using a body-fixed frame, in which the z-axis is parallel to the Jacobi vector R, joining the diatomic center-of-mass to the atom, and the three atoms lie in the body-fixed xz-plane. In the present calculations, maximum rotational quantum numbers of j_max= 200 and 250 have been considered for C_2 and CS, respectively, and a maximum helicity quantum number of Ω_max= 15. The individual state-to-state reactive cross sections are obtained by performing the summation over all J in the partial wave expression up to J_max= 200. Finally, integrating over the translational energy according to a Boltzmann energy distribution, and summing over all accessible states, the thermal rate constants, K_α(T), are obtained, with α= 1^3A”, 2^3A” and 1^3A'. The 2^3A” and 1^3A' states present a submerged barrier which could reduce the reactivity, but here we consider that the three α electronic states have the same reactive rate constant. For this reason, when the spin-orbit splitting of S(^3P_J) is included, the final thermal rate constant averaged over the spin-orbit states, K^AS(T), is equal to K_α(T), shown in Fig. <ref>; its fitted parameters are listed in Table <ref>. The C_2+ S → CS + C rate available in the KIDA data base[https://kida.astrochem-tools.org/] is that of <cit.>, which corresponds to a value of K^C(T)= 2 × 10^-10 cm^3/s, independent of temperature, and was calculated using a capture method, i.e. assuming that the reaction is exothermic but without considering the deep well described in this work. Such an approach neglects the possibility that the C_2S complex formed under the statistical assumption can decay back to the C_2 + S reactants. This probability is small because it is proportional to the density of states in each channel, and this density is much lower in the C_2+S channel due to the exothermicity, the homonuclear symmetry and the larger rotational constant. § DISCUSSION In this section we evaluate the impact of the new reaction rates on our understanding of interstellar chemistry. This is not straightforward, since the formation and destruction routes of the different species depend on the local physical and chemical conditions as well as on the chemical time. Therefore, we need to consider different environments to be able to have a comprehensive view of the impact of the new reaction rates on astrochemical calculations, and in particular on our ability to reproduce the abundance of CS. We performed chemical calculations using Nautilus 1.1 <cit.>, a three-phase model, in which gas, grain surface and grain mantle phases, and their interactions, are considered.
We used the code upgraded as described by <cit.> with the chemical network of KIDA^2 that has been modified to account for the reaction rates estimated by <cit.>, <cit.>, and this paper (reaction rates in Table  <ref>). In Nautilus, desorption into the gas phase is only allowed for the surface species, considering both thermal and non-thermal mechanisms. In the regions where the temperature of grain particles is below the sublimation temperature, non-thermal desorption processes become important to calculate the number of molecules in gas phase. The latter include desorption induced by cosmic rays <cit.>, direct (UV field) and indirect (secondary UV field induced by the cosmic-ray flux) photo-desorption, and reactive chemical desorption <cit.>. In the following calculations, we use the prescription proposed by <cit.> for ice coated grains to calculate the reactive chemical desorption. The physical and chemical conditions associated with these three simulations are detailed below: * TMC 1: This case represents the physical and chemical conditions prevailing in molecular cloud complexes where low-mass stars are formed. The physical properties of these regions are typically described with moderate number densities of atomic hydrogen nuclei n_ H=3× 10^4 cm^-3, cold gas and dust temperatures T= 10 K, a moderate visual extinction A_V=20 mag, a cosmic-ray H_2 ionization rate of ζ_ H_2=10^-16 s^-1 <cit.>, and an intensity of the far-ultraviolet field (FUV) equal to χ=5 in Draine units. We assume that sulphur is depleted by a factor of 20 relative to cosmic abundance as derived by <cit.> and <cit.> in Taurus and Perseus. * Hot core: This case represents the physical and chemical conditions in the warm interior of young protostars where the gas and dust temperature is >100 K and the icy grains mantles are sublimated. Typical physical conditions in these regions are: n_ H = 3 × 10^6 cm^-3, T_k= 200 K, A_V= 20 mag, ζ_ H_2= 1.3 × 10^-17 s^-1, χ=1 in Draine units. Sulphur depletion in hot cores is not well established. We adopted [S/H]=8×10^-7 since this value is commonly used to model massive hot cores <cit.>. * Photon-dominated regions (PDRs) are those environments where the far-ultraviolet photons emitted by hot stars are determining the physical and chemical conditions of gas and dust. PDRs can be found on the surfaces of protoplanetary disks and molecular clouds, globules, planetary nebulae, and starburst galaxies. As representative of the physical and chemical conditions in the PDRs associated with massive star forming regions, we select: n_ H = 5 × 10^5 cm^-3, T_k= 100 K, A_V= 4 mag, ζ_H_2= 10^-16 s^-1, χ=10^4 in Draine units. The amount of sulphur in gas phase in PDRs is still an open question. Based on observations of sulphur recombination lines in the Orion Bar, <cit.> obtained that the abundance of sulphur should be close to the solar value in this prototypical PDR. We are aware that sulphur recombination lines are arising from PDR layers close to the S^+/S transition, at A_v∼ 4 mag <cit.>) and sulphur depletion might be higher towards more shielded regions from where the emission of most sulphur-bearing molecules comes. In spite of this, we adopt [S/H]=1.5×10^-5 for our calculations. Table <ref> shows the fractional abundances of CS, SO, and SO_2 predicted using the "old" and "new" chemical network and the physical conditions above described. 
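To make explicit how the new rate coefficients enter such calculations, the following is a minimal gas-phase sketch (not Nautilus, which also treats grain surfaces and mantles): two formation channels for CS and one generic destruction channel, integrated at a TMC1-like density. The destruction rate, initial abundances and the representative rate coefficients are illustrative placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal gas-phase toy network (NOT Nautilus).  Abundances are relative to n_H
# and the initial values are illustrative placeholders.
n_H   = 3e4        # cm^-3, TMC1-like density
k_CHS = 5.5e-11    # cm^3 s^-1, CH + S -> CS + H (representative value from this work)
k_C2S = 2.0e-10    # cm^3 s^-1, C2 + S -> CS + C (representative low-T value)
k_d   = 1e-9       # cm^3 s^-1, placeholder generic CS destruction rate

def rhs(t, y):
    x_CH, x_C2, x_S, x_CS, x_X = y           # x_X: generic destroyer abundance
    f1 = k_CHS * x_CH * x_S * n_H            # formation via CH + S
    f2 = k_C2S * x_C2 * x_S * n_H            # formation via C2 + S
    d  = k_d * x_CS * x_X * n_H              # generic destruction
    return [-f1, -f2, -f1 - f2, f1 + f2 - d, 0.0]

y0 = [1e-8, 1e-8, 7.5e-7, 0.0, 1e-8]         # [S/H] ~ 1.5e-5 depleted by a factor 20
t_span = (0.0, 1e6 * 3.15e7)                 # 1 Myr in seconds
sol = solve_ivp(rhs, t_span, y0, method="LSODA", rtol=1e-8, atol=1e-20)
print("x(CS) after 1 Myr:", sol.y[3, -1])
```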
In regions where the ionization fraction is large, CS is essentially produced from the electronic dissociative recombination of HCS^+, where HCS^+ is formed by reactions of S^+ and CH <cit.>. <cit.> showed that the production of HCS^+ with S^+ + CH → CS^+ + H, and CS^+ + H_2 → HCS^+ + H, is the most efficient HCS^+ formation route at the cloud surface. In more shielded regions where the sulphur is mainly in neutral atomic form, CS is also produced by neutral-neutral reactions, with significant contributions of the reactions studied in this paper. For dark cloud conditions, the reaction C_2 + S forms CS more efficiently than CH + S. At low temperatures, the calculated C_2 + S rate is slightly higher than previous value (see Fig. <ref>). However, its possible effect is canceled by the lower value of the new CH + S reaction rate (see Fig. <ref>). As a result, the impact of our new rates on the CS abundance for the TMC1 case is negligible. Not surprisingly, the major impact of the new reactions rates calculated on the CS abundance is observed for the hot core case with variations of ∼ 20% due to the significantly higher C_2 + S reaction rate at temperatures >100 K (see Fig. <ref>). The S + O_2 → SO + O and SO + OH → SO_2 + H reactions rates published by <cit.> and <cit.> produce the maximum variations in the case TMC1, with variations of the SO and SO_2 abundances of a factor of >2, which demonstrates the need of performing this type of calculations. TMC 1 (CP) is the astrophysical object for which the highest number of sulphur-bearing species have been detected so far, with more than ten complex sulphur-bearing molecules detected for the first time in the last 3 years (, Table <ref>). The large number of atoms in these new species (>5) shows that a rich and complex organo-sulphur chemistry is going on in this dark cloud<cit.>. Although these large molecules carry a small percentage of the sulphur budget, their detection is useful to test the predictive power of our chemical network. Table <ref> shows a compilation of the observed abundances towards TMC 1 (CP). In order to perform the most uniform and reliable comparison, the abundances have been re-calculated assuming N_ H=3.6×10^22 cm^-2 <cit.>. The abundances of HCS^+ and H^13CN were already estimated by <cit.>. Here, they have been re-calculated using the most recent collisional coefficients reported by <cit.> and <cit.>. Following the same methodology explained in <cit.> and <cit.>, we assumed X(HCN)/X(H^13CN)=60 to estimate the HCN abundance. We performed chemical calculations using Nautilus 1.1 and the chemical network modified to account for the reaction rates presented in this paper, <cit.>, and <cit.>, and the physical conditions derived by <cit.>.These physical conditions are the same as in the TMC1 case in Table <ref>. Only 17 of the total number of sulphur-bearing species detected towards TMC 1 (CP) are included in our chemical network. Fig. <ref> shows the chemical predictions for all the sulphur-bearing species that have been observed in this proto-typical source and are included in our chemical network. In addition, we show the CO, HCO^+, and HCN abundances because these molecules are considered good tracers of the gas ionization degree and C/O ratio <cit.>. We find a reasonable agreement (within a factor of 10) between model and observations for CO, HCO^+, HCN, CS, HCS^+, H_2S, SO, C_2S, C_3S, C_4S, H_2CS, HC_2S^+, HC_3S^+, H_2CCS, NS^+, HNCS, and HSCN for times between 0.1 Myr and 1 Myr. 
However, as already commented by <cit.> and <cit.>, the chemical time at which we find the best solutions depends on the considered species. The most important restrictions respect to the chemical time comes for CO and HCO^+ whose abundances rapidly decrease for times later than 0.4 Myr. On the opposite side, we find that the abundances of HCS^+, SO, and C_2S are better reproduced for times > 1 Myr. Only OCS, C_4S, and NS cannot be fitted with chemical times between 0.1 Myr and 1 Myr. We would like to recall that the chemical time is not the same as the dynamical time, since various physical phenomena such as turbulent motions that carry molecules to the cloud surfaces or shocks can reset the chemical age of the gas. sulphur-bearing species are very sensitive to the chemical time with several species whose abundances vary in several orders of magnitude from 0.1 Myr to 1 Myr. Therefore the chemical time is a critical parameter to fit them (see Fig.<ref>). In our 0D chemical calculations, the physical conditions remain fixed, which is far from the real case of a collapsing and fragmenting cloud. In forthcoming papers, we will explore the influence that the cloud dynamical evolution would have on the sulphur chemistry. § SUMMARY AND CONCLUSIONS The rate constants for the formation of CS(X^1Σ^+), CH(^2Π) + S(^3P) and C_2(X^1Σ_g^+)+ S(^3P) have been obtained in this work, and are tabulated in Table <ref>. These two reactions involve open shell reactants, and therefore present several degenerate, or nearly degenerate, electronic states. The role of each initial electronic state in the formation of CS has been analyzed in detail. For CH(^2Π) + S(^3P) it is found that only the 1^2A' can contribute to the CS(X^1Σ^+) formation through an exothermic barrierless mechanism, i.e. 2 states of the doublet among the 36 degenerate electronic states correlating to the CH(^2Π) + S(^3P) asymptote. When the spin-orbit splitting is taken into account, at the low temperatures of 10 K the electronic partition function becomes 2/9, tending to 2/36 at high temperature. Surprisingly, the rate constant obtained with a capture model <cit.> is only a factor of two higher than the present result at 10 K, and this difference increases very little with increasing temperature up to 500K. For the C_2(X^1Σ_g^+)+ S(^3P) reaction the three triply degenerate states connect to the CS(X^1Σ^+) products. It is found that the three states present a deep insertion well, with depths between 5.5 and 4 eV. The ground electronic state proceeds with no barrier, while the two excited states have a barrier in the products channel, which becomes submerged for bent configuration. The presence of the deep insertion well justifies the use of an adiabatic statistical method to calculate the reactive rate constant, which in turn is very similar to that obtained with a capture model <cit.> at 10 K. The difference increases with temperature, and the present results become 2.5 times larger than the constant value obtained 2 ×10^-10 cm^3/s at 500 K. The present results corroborate those obtained with classical capture models at low temperatures, differing by a factor of 2.5 at most. It should be noted, however, that this is due to different reasons for the two reactions studied here. For open shell reactants, as those treated here, for which experiments are difficult, it is required to do a detailed analysis of the reactivity of all initial degenerate electronic states of the reactants before generalizing these findings. 
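The rate constants tabulated in this work are intended for use in networks such as KIDA, which store rates in the Arrhenius–Kooij form k(T) = α (T/300 K)^β exp(-γ/T). A minimal evaluation sketch; the α, β, γ values below are placeholders, not the fitted parameters of this work:

```python
import numpy as np

def kooij(T, alpha, beta, gamma):
    """Arrhenius-Kooij form used by the KIDA database, k(T) in cm^3 s^-1."""
    return alpha * (T / 300.0)**beta * np.exp(-gamma / T)

# placeholder parameters chosen only to mimic a nearly temperature-independent
# rate of ~5.5e-11 cm^3/s, as found here for CH + S above 100 K
for T in (10.0, 100.0, 300.0):
    print(T, kooij(T, 5.5e-11, 0.0, 0.0))
```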
The new rates have been implemented in a chemical network to compare with the observations of sulphur-bearing species towards TMC 1(CP). Model predictions are in reasonable agreement with the observation for most the sulphur-bearing species, except for OCS and NS, which cannot be fitted with our model. However, it is not possible to fit all of them with an unique chemical time, which suggests that dynamical effects are important for sulphur chemistry. The research leading to these results has received funding from MICIN (Spain) under grant PID2021-122549NB-C21 and from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 894321. N.B is grateful for support from the Polish National Agency for Academic Exchange (NAWA) Grant and also acknowledges TUBITAK's 2219-Program by scholarship no. 1059B192200348. P.S.Z. is grateful to National Science Centre of Poland for funding the project No 2019/34/E/ST4/00407. AF and PRM are grateful to Spanish MICIN for funding under grant PID2019-106235GB-I00. JRG thanks the Spanish MCINN for funding support under grant PID2019-106110GB-I00. JEP was supported by the Max-Planck Society. DNA acknowledges funding support from Fundación Ramón Areces through its international postdoc grant program. RLG would like to thank the "Physique Chimie du Milieu Interstellaire" (PCMI) programs of CNRS/INSU for their financial supports. 90 natexlab#1#1 [Adande et al.(2010)Adande, Halfen, Ziurys, Quan, & Herbst]Adande2010 Adande, G. R., Halfen, D. T., Ziurys, L. M., Quan, D., & Herbst, E. 2010, , 725, 561 [Aguado & Paniagua(1992)]Aguado-Paniagua:92 Aguado, A. & Paniagua, M. 1992, J. Chem. Phys., 96, 1265 [Aguado et al.(1998)Aguado, Tablero, & Paniagua]Aguado-etal:98 Aguado, A., Tablero, C., & Paniagua, M. 1998, Comput. Phys. Commun., 108, 259 [Agúndez & Wakelam(2013)]Agundez2013 Agúndez, M. & Wakelam, V. 2013, Chemical Reviews, 113, 8710 [Alexander(1998)]Alexander1998 Alexander, M. H. 1998, The Journal of chemical physics, 108, 4467 [Atahan et al.(2006)Atahan, Kłos, Zuchowski, & Alexander]Atahan2006 Atahan, S., Kłos, J., Zuchowski, P. S., & Alexander, M. H. 2006, Physical Chemistry Chemical Physics, 8, 4420 [Boogert et al.(1997)Boogert, Schutte, Helmich, Tielens, & Wooden]Boogert1997 Boogert, A. C. A., Schutte, W. A., Helmich, F. P., Tielens, A. G. G. M., & Wooden, D. H. 1997, , 317, 929 [Bulut et al.(2021)Bulut, Roncero, Aguado, Loison, Navarro-Almaida, Wakelam, Fuente, end R Le Gal, Caselli, Gerin, Hickson, Spezzano, Riviére-Marichalar, Alonso-Albi, Bachiller, Jimenez-Serra, Kramer, Tercero, Rodriguez-Baras, García-Burillo, Goicoechea, no Morales, Esplugues, Cazaux, Commercon, Laas, Kirk, Lattanzi, Martín-Domńech, noz Caro, Pineda, Ward-Thompson, Marcelino, Malinen, Friesen, Giuliano, Agxiúndez, & Hacar]Bulut-etal:21 Bulut, N., Roncero, O., Aguado, A., et al. 2021, Astron. AstroPhys., 646, A5 [Cazaux et al.(2022)Cazaux, Carrascosa, Muñoz Caro, Caselli, Fuente, Navarro-Almaida, & Riviére-Marichalar]Cazaux2022 Cazaux, S., Carrascosa, H., Muñoz Caro, G. M., et al. 2022, , 657, A100 [Cernicharo et al.(2021a)Cernicharo, Cabezas, Endo, Agúndez, Tercero, Pardo, Marcelino, & de Vicente]Cernicharo2021b Cernicharo, J., Cabezas, C., Endo, Y., et al. 2021a, , 650, L14 [Cernicharo et al.(2021b)Cernicharo, Cabezas, Endo, Marcelino, Agúndez, Tercero, Gallego, & de Vicente]Cernicharo2021a Cernicharo, J., Cabezas, C., Endo, Y., et al. 
2021b, , 646, L3 [Cernicharo et al.(2018)Cernicharo, Lefloch, Agúndez, Bailleux, Margulès, Roueff, Bachiller, Marcelino, Tercero, Vastel, & Caux]Cernicharo2018 Cernicharo, J., Lefloch, B., Agúndez, M., et al. 2018, , 853, L22 [Davidson(1975)]Davidson:75 Davidson, E. R. 1975, J. Comp. Phys., 17, 87 [Denis-Alpizar et al.(2022)Denis-Alpizar, Quintas-Sánchez, & Dawes]Denis2022 Denis-Alpizar, O., Quintas-Sánchez, E., & Dawes, R. 2022, , 512, 5546 [Dubernet & Hutson(1994)]Dubernet-Hutson:94 Dubernet, M. L. & Hutson, J. 1994, J. Chem. Phys., 101, 1939 [Dunning(1989)]DUN89:1007 Dunning, T. H. 1989, J. Chem. Phys., 90, 1007 [Dunning & Jr.(1989)]Dunning:89 Dunning, T. H. & Jr. 1989, J. Chem. Phys., 90, 1007 [Esplugues et al.(2022)Esplugues, Fuente, Navarro-Almaida, Rodríguez-Baras, Majumdar, Caselli, Wakelam, Roueff, Bachiller, Spezzano, Rivière-Marichalar, Martín-Doménech, & Muñoz Caro]Esplugues2022 Esplugues, G., Fuente, A., Navarro-Almaida, D., et al. 2022, , 662, A52 [Ferrante et al.(2008)Ferrante, Moore, Spiliotis, & Hudson]Ferrante2008 Ferrante, R. F., Moore, M. H., Spiliotis, M. M., & Hudson, R. L. 2008, , 684, 1210 [Flower & Launay(1977)]Flower1977 Flower, D. R. & Launay, J. M. 1977, Journal of Physics B: Atomic and Molecular Physics, 10, 3673 [Fuente et al.(2016)Fuente, Cernicharo, Roueff, Gerin, Pety, Marcelino, Bachiller, Lefloch, Roncero, & Aguado]Fuente2016 Fuente, A., Cernicharo, J., Roueff, E., et al. 2016, , 593, A94 [Fuente et al.(2019)Fuente, Navarro, Caselli, Gerin, Kramer, Roueff, Alonso-Albi, Bachiller, Cazaux, Commercon, Friesen, García-Burillo, Giuliano, Goicoechea, Gratier, Hacar, Jiménez-Serra, Kirk, Lattanzi, Loison, Malinen, Marcelino, Martín-Doménech, Muñoz-Caro, Pineda, Tafalla, Tercero, Ward-Thompson, Treviño-Morales, Riviére-Marichalar, Roncero, Vidal, & Ballester]Fuente2019 Fuente, A., Navarro, D. G., Caselli, P., et al. 2019, , 624, A105 [Fuente et al.(2023)Fuente, Rivière-Marichalar, Beitia-Antero, Caselli, Wakelam, Esplugues, Rodríguez-Baras, Navarro-Almaida, Gerin, Kramer, Bachiller, Goicoechea, Jiménez-Serra, Loison, Ivlev, Martín-Doménech, Spezzano, Roncero, Muñoz-Caro, Cazaux, & Marcelino]Fuente2023 Fuente, A., Rivière-Marichalar, P., Beitia-Antero, L., et al. 2023, , 670, A114 [Fuentetaja et al.(2022)Fuentetaja, Agúndez, Cabezas, Tercero, Marcelino, Pardo, de Vicente, & Cernicharo]Fuentetaja2022 Fuentetaja, R., Agúndez, M., Cabezas, C., et al. 2022, , 667, L4 [Garrod et al.(2007)Garrod, Wakelam, & Herbst]Garrod2007 Garrod, R. T., Wakelam, V., & Herbst, E. 2007, , 467, 1103 [Georgievskii & Klippenstein(2005)]Georgievskii-Klippenstein:05 Georgievskii, Y. & Klippenstein, S. J. 2005, J.Chem. Phys., 122, 194103 [Gerner et al.(2014)Gerner, Beuther, Semenov, Linz, Vasyunina, Bihr, Shirley, & Henning]Gerner2014 Gerner, T., Beuther, H., Semenov, D., et al. 2014, , 563, A97 [Goicoechea et al.(2021)Goicoechea, Aguado, Cuadrado, Roncero, Pety, Bron, Fuente, Riquelme, Chapillon, Herrera, & Duran]Goicoechea-etal:21 Goicoechea, J. R., Aguado, A., Cuadrado, S., et al. 2021, AA, 647, A10 [Goicoechea & Cuadrado(2021)]Goicoechea2021 Goicoechea, J. R. & Cuadrado, S. 2021, , 647, L7 [Goicoechea et al.(2006)Goicoechea, Pety, Gerin, Teyssier, Roueff, Hily-Blant, & Baek]Goicoechea:2006 Goicoechea, J. R., Pety, J., Gerin, M., et al. 2006, , 456, 565 [Gómez-Carrasco et al.(2022)Gómez-Carrasco, Félix-González, Aguado, & Roncero]Gomez-Carrasco-etal:22 Gómez-Carrasco, S., Félix-González, D., Aguado, A., & Roncero, O. 2022, J. Chem. 
Phys., 157, 084301 [Gratier et al.(2016)Gratier, Majumdar, Ohishi, Roueff, Loison, Hickson, & Wakelam]Gratier2016 Gratier, P., Majumdar, L., Ohishi, M., et al. 2016, , 225, 25 [Grozdanov & Solov'ev(1982)]Grozdanov-Solovev:82 Grozdanov, T. P. & Solov'ev, E. A. 1982, J. Phys. B, 15, 1195 [Gu et al.(2006)Gu, Guo, Zhang, Mebel, & Kaiser]GU006:245 Gu, X., Guo, Y., Zhang, F., Mebel, A. M., & Kaiser, R. I. 2006, Faraday Discuss., 245 [Hapka et al.(2012)Hapka, Żuchowski, Szczȩśniak, & Chałasiński]Hapka2012 Hapka, M., Żuchowski, P. S., Szczȩśniak, M. M., & Chałasiński, G. 2012, J. Chem. Phys., 137, 164104 [Hasegawa & Herbst(1993)]Hasegawa1993 Hasegawa, T. I. & Herbst, E. 1993, , 261, 83 [Herzberg(1950)]Herzberg-diatomics Herzberg, G. 1950, Molecular spectra and molecular structure. I. Spectra of diatomic molecules (van Nostrand Reinhold Co. (New York)) [Hily-Blant et al.(2022)Hily-Blant, des Forêts, Faure, & Lique]Hily-Blant-etal:22 Hily-Blant, P., des Forêts, G. P., Faure, A., & Lique, F. 2022, , 658, A168 [Huber & Herzberg(1979)]Herzberg-etal:79 Huber, K. P. & Herzberg, G. 1979, Molecular Spectra and Molecular Structure. Vol IV. Constants of Diatomic Molecules (Van Nostrand, Toronto) [Jiménez-Escobar & Muñoz Caro(2011)]Jimenez2011 Jiménez-Escobar, A. & Muñoz Caro, G. M. 2011, , 536, A91 [Jiménez-Escobar et al.(2014)Jiménez-Escobar, Muñoz Caro, & Chen]Jimenez2014 Jiménez-Escobar, A., Muñoz Caro, G. M., & Chen, Y. J. 2014, , 443, 343 [Karplus et al.(1965)Karplus, Porter, & Sharma]Karplus-etal:65 Karplus, M., Porter, R. N., & Sharma, R. D. 1965, J. Chem. Phys., 43, 3259 [Kendall et al.(1992)Kendall, Dunning, & Harrison]KEN92:6796 Kendall, R. A., Dunning, T. H., & Harrison, R. J. 1992, J. Chem. Phys., 96, 6796 [Klos et al.(2004)Klos, Szczesniak, & Chalasinski *]Klos2004 Klos, J., Szczesniak, M. M., & Chalasinski *, G. 2004, International Reviews in Physical Chemistry, 23, 541 [Konings et al.(2021)Konings, Desrousseaux, Lique, & Loreau]Konings-etal:21 Konings, M., Desrousseaux, B., Lique, F., & Loreau, J. 2021, J. Chem. Phys., 155, 104302 [Laas & Caselli(2019)]Laas-Caselli:19 Laas, J. C. & Caselli, P. 2019, , 624, A108 [Lucas & Liszt(2002)]LucasLiszt2002 Lucas, R. & Liszt, H. S. 2002, , 384, 1054 [McClure et al.(2023)McClure, Rocha, Pontoppidan, Crouzet, Chu, Dartois, Lamberts, Noble, Pendleton, Perotti, Qasim, Rachid, Smith, Sun, Beck, Boogert, Brown, Caselli, Charnley, Cuppen, Dickinson, Drozdovskaya, Egami, Erkal, Fraser, Garrod, Harsono, Ioppolo, Jiménez-Serra, Jin, Jørgensen, Kristensen, Lis, McCoustra, McGuire, Melnick, Ã-berg, Palumbo, Shimonishi, Sturm, van Dishoeck, & Linnartz]McClure2023 McClure, M. K., Rocha, W. R. M., Pontoppidan, K. M., et al. 2023, Nature Astronomy, 7, 431 [Minissale et al.(2016)Minissale, Dulieu, Cazaux, & Hocuk]Minissale2016 Minissale, M., Dulieu, F., Cazaux, S., & Hocuk, S. 2016, , 585, A24 [Nagy & Lendvay(2017)]Nagy-Lendvay:17 Nagy, T. & Lendvay, G. 2017, J. Phys. Chem. Lett., 8, 4621 [Navarro-Almaida et al.(2023)Navarro-Almaida, Bop, Lique, Esplugues, Rodríguez-Baras, Kramer, Romero, Fuente, Caselli, Rivière-Marichalar, Kirk, Chacón-Tanarro, Roueff, Mroczkowski, Bhandarkar, Devlin, Dicker, Lowe, Mason, Sarazin, & Sievers]Navarro2023 Navarro-Almaida, D., Bop, C. T., Lique, F., et al. 
2023, , 670, A110 [Navarro-Almaida et al.(2020)Navarro-Almaida, Le Gal, Fuente, Rivière-Marichalar, Wakelam, Cazaux, Caselli, Laas, Alonso-Albi, Loison, Gerin, Kramer, Roueff, Bachiller, Commerçon, Friesen, García-Burillo, Goicoechea, Giuliano, Jiménez-Serra, Kirk, Lattanzi, Malinen, Marcelino, Martín-Domènech, Muñoz Caro, Pineda, Tercero, Treviño-Morales, Roncero, Hacar, Tafalla, & Ward-Thompson]Navarro2020 Navarro-Almaida, D., Le Gal, R., Fuente, A., et al. 2020, , 637, A39 [Ocaña et al.(2017)Ocaña, Jiménez, Ballesteros, Canosa, Antiñolo, Albadalejo, Agúndez, Cernicharo, Zanchet, del Mazo, Roncero, & Aguado]Ocana-etal:17 Ocaña, A. J., Jiménez, E., Ballesteros, B., et al. 2017, AstroPhys. J., 850, 28 [Palumbo et al.(1997)Palumbo, Geballe, & Tielens]Palumbo1997 Palumbo, M. E., Geballe, T. R., & Tielens, A. G. G. M. 1997, , 479, 839 [Palumbo et al.(1995)Palumbo, Tielens, & Tokunaga]Palumbo1995 Palumbo, M. E., Tielens, A. G. G. M., & Tokunaga, A. T. 1995, , 449, 674 [Páramo et al.(2008)Páramo, Canosa, Le Picard, & Sims]PAR008:9591 Páramo, A., Canosa, A., Le Picard, S. D., & Sims, I. R. 2008, J. Phys. Chem. A, 112, 9591 [Qu & Bowman(2016)]Qu-Bowman:16 Qu, C. & Bowman, J. M. 2016, J. Phys. Chem. A, 120, 4988 [Quack & Troe(1974)]Quack-Troe:74 Quack, M. & Troe, J. 1974, Ber. Bunsenges. Phys. Chem, 78, 240 [Reddy et al.(2003)Reddy, Nazeer Ahammed, Rama Gopal, & Baba Basha]RED003:419 Reddy, R. R., Nazeer Ahammed, Y., Rama Gopal, K., & Baba Basha, D. 2003, Astrophys. Space Sci., 286, 419 [Riaplov et al.(2003)Riaplov, Wyss, Maier, Panten, Chambaud, Rosmus, & Fabian]RIA003:15 Riaplov, E., Wyss, M., Maier, J. P., et al. 2003, J. Mol. Spectrosc., 222, 15 [Rivière-Marichalar et al.(2019)Rivière-Marichalar, Fuente, Goicoechea, Pety, Le Gal, Gratier, Guzmán, Roueff, Loison, Wakelam, & Gerin]Riviere2019 Rivière-Marichalar, P., Fuente, A., Goicoechea, J. R., et al. 2019, , 628, A16 [Rodríguez-Baras et al.(2021)Rodríguez-Baras, Fuente, Riviére-Marichalar, Navarro-Almaida, Caselli, Gerin, Kramer, Roueff, Wakelam, Esplugues, García-Burillo, Le Gal, Spezzano, Alonso-Albi, Bachiller, Cazaux, Commercon, Goicoechea, Loison, Treviño-Morales, Roncero, Jiménez-Serra, Laas, Hacar, Kirk, Lattanzi, Martín-Doménech, Muñoz-Caro, Pineda, Tercero, Ward-Thompson, Tafalla, Marcelino, Malinen, Friesen, & Giuliano]Rodriguez-Baras2021 Rodríguez-Baras, M., Fuente, A., Riviére-Marichalar, P., et al. 2021, , 648, A120 [Roncero et al.(2018)Roncero, Zanchet, & Aguado]Roncero-etal:18 Roncero, O., Zanchet, A., & Aguado, A. 2018, Phys. Chem. Chem. Phys., 20, 25951 [Ruaud et al.(2016)Ruaud, Wakelam, & Hersant]Ruaud2016 Ruaud, M., Wakelam, V., & Hersant, F. 2016, , 459, 3756 [Saito et al.(1987)Saito, Kawaguchi, Yamamoto, Ohishi, Suzuki, & Kaifu]SAI87:L115 Saito, S., Kawaguchi, K., Yamamoto, S., et al. 1987, ApJL, 317, L115 [Sanz-Sanz et al.(2015)Sanz-Sanz, Aguado, Roncero, & Naumkin]Sanz-Sanz-etal:15 Sanz-Sanz, C., Aguado, A., Roncero, O., & Naumkin, F. 2015, J. Chem. Phys., 143, 234303 [Senekowitsch et al.(1990)Senekowitsch, Carter, Rosmus, & Werner]Senekowitsch-etal:90 Senekowitsch, J., Carter, S., Rosmus, P., & Werner, H.-J. 1990, Chem. Phys., 147, 281 [Shingledecker et al.(2020)Shingledecker, Lambers, Laas, Vasyunin, Herbst, Kästner, & Caselli]Shingledecker2020 Shingledecker, C. N., Lambers, T., Laas, J. C., et al. 2020, AstroPhys. J., 888, 52 [Simon et al.(1997)Simon, Stutzki, Sternberg, & Winnewisser]Simon1997 Simon, R., Stutzki, J., Sternberg, A., & Winnewisser, G. 
1997, , 327, L9 [Song et al.(2016)Song, Zhang, Gao, & Meng]Song-etal:16 Song, Y.-Z., Zhang, L.-L., Gao, S.-B., & Meng, Q.-T. 2016, Scientific Rep., 6, 37734 [Spezzano et al.(2022)Spezzano, Fuente, Caselli, Vasyunin, Navarro-Almaida, Rodríguez-Baras, Punanova, Vastel, & Wakelam]Spezzano2022 Spezzano, S., Fuente, A., Caselli, P., et al. 2022, , 657, A10 [Sternberg & Dalgarno(1995)]SternbergDalgarno1995 Sternberg, A. & Dalgarno, A. 1995, , 99, 565 [Stoecklin et al.(1988)Stoecklin, Halvick, & Rayez]Stoecklin-etal:88 Stoecklin, T., Halvick, P., & Rayez, J. C. 1988, J. Mol. Struct. (Theochem), 163, 267 [Stoecklin et al.(1990a)Stoecklin, Rayez, & Duguay]Stoecklin-etal:90a Stoecklin, T., Rayez, J. C., & Duguay, B. 1990a, Chem. Phys., 148, 381 [Stoecklin et al.(1990b)Stoecklin, Rayez, & Duguay]Stoecklin-etal:90b Stoecklin, T., Rayez, J. C., & Duguay, B. 1990b, Chem. Phys., 148, 399 [Tarroni et al.(2007)Tarroni, Carter, & Handy]TAR007:1129 Tarroni, R., Carter, S., & Handy, N. C. 2007, Mol. Phys., 105, 1129 [Vastel et al.(2018)Vastel, Quénard, Le Gal, Wakelam, Andrianasolo, Caselli, Vidal, Ceccarelli, Lefloch, & Bachiller]Vastel2018 Vastel, C., Quénard, D., Le Gal, R., et al. 2018, , 478, 5514 [Vidal et al.(2017)Vidal, Loison, Jaziri, Ruaud, Gratier, & Wakelan]Vidal-etal:17 Vidal, T. H. G., Loison, J.-C., Jaziri, A. Y., et al. 2017, Mon. Not. R. Astron. Soc., 469, 435 [Visser et al.(2019)Visser, Beck, Bornhauser, Knopp, van Bokhoven, Radi, Gourlaouen, & Marquardt]VIS019:1645 Visser, B., Beck, M., Bornhauser, P., et al. 2019, Mol. Phys., 117, 1645 [Voronin(2004)]Voronin:02 Voronin, A. I. 2004, Chem. Phys., 297, 49 [Wakelam et al.(2021)Wakelam, Dartois, Chabot, Spezzano, Navarro-Almaida, Loison, & Fuente]Wakelam2021 Wakelam, V., Dartois, E., Chabot, M., et al. 2021, , 652, A63 [Werner & Knowles(1988a)]Werner-Knowles:88 Werner, H. J. & Knowles, P. J. 1988a, J. Chem. Phys., 89, 5803 [Werner & Knowles(1988b)]Werner-Knowles:88b Werner, H. J. & Knowles, P. J. 1988b, Chem. Phys. Lett., 145, 514 [Werner et al.(2012)Werner, Knowles, Knizia, Manby, & Schütz]MOLPRO-WIREs Werner, H.-J., Knowles, P. J., Knizia, G., Manby, F. R., & Schütz, M. 2012, WIREs Comput Mol Sci, 2, 242 [Woon & Herbst(2009)]Woon-Herbst:09 Woon, D. E. & Herbst, E. 2009, ApJ. Sup. Series, 185, 273 [Zanchet et al.(2018)Zanchet, del Mazo, Aguado, Roncero, Jiménez, Canosa, Agúndez, & Cernicharo]Zanchet-etal:18 Zanchet, A., del Mazo, P., Aguado, A., et al. 2018, PCCP, 20, 5415 [Zanchet et al.(2016)Zanchet, Roncero, & Bulut]Zanchet-etal:16 Zanchet, A., Roncero, O., & Bulut, N. 2016, Phys. Chem. Chem. Phys., 18, 11391 [Zanchet et al.(2009)Zanchet, Roncero, González-Lezana, Rodríguez-López, Aguado, Sanz-Sanz, & Gómez-Carrasco]Zanchet-etal:09b Zanchet, A., Roncero, O., González-Lezana, T., et al. 2009, J. Phys. Chem. A, 113, 14488 [Zeimen et al.(2003)Zeimen, Klos, Groenenboom, & van der Avoird]Zeimen-etal:03 Zeimen, W. B., Klos, J., Groenenboom, G. C., & van der Avoird, A. 2003, J. Chem. Phys., 118, 7340 [Zhang et al.(2018)Zhang, Song, Gao, & Meng]Zhang-etal:18b Zhang, L. L., Song, Y. Z., Gao, S. B., & Meng, Q. T. 2018, J. Phys. Chem. A, 122, 4390
http://arxiv.org/abs/2307.00597v1
20230702154038
Lingering Times at Resonance: The Case of Sb-based Tunneling Devices
[ "Edgar David Guarin Castro", "Andreas Pfenning", "Fabian Hartmann", "Andrea Naranjo", "Georg Knebl", "Marcio Daldin Teodoro", "Gilmar Eugenio Marques", "Sven Höfling", "Gerald Bastard", "Victor Lopez-Richard" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci", "quant-ph", "81" ]
Departamento de Física, Universidade Federal de São Carlos, 13565-905 São Carlos, SP, Brazil [email protected] Technische Physik, Physikalisches Institut and Würzburg‐Dresden Cluster of Excellence ct.qmat, Am Hubland, D-97074 Würzburg, Germany Technische Physik, Physikalisches Institut and Würzburg‐Dresden Cluster of Excellence ct.qmat, Am Hubland, D-97074 Würzburg, Germany Departamento de Física, Universidade Federal de São Carlos, 13565-905 São Carlos, SP, Brazil Technische Physik, Physikalisches Institut and Würzburg‐Dresden Cluster of Excellence ct.qmat, Am Hubland, D-97074 Würzburg, Germany Departamento de Física, Universidade Federal de São Carlos, 13565-905 São Carlos, SP, Brazil Departamento de Física, Universidade Federal de São Carlos, 13565-905 São Carlos, SP, Brazil Technische Physik, Physikalisches Institut and Würzburg‐Dresden Cluster of Excellence ct.qmat, Am Hubland, D-97074 Würzburg, Germany Département de Physique, Ecole Normale Supérieure de Paris (ENS/PSL), Université PSL (Paris Sciences and Letters), F75005, Paris, France Departamento de Física, Universidade Federal de São Carlos, 13565-905 São Carlos, SP, Brazil Concurrent natural time scales related to relaxation, recombination, trapping, and drifting processes rule the semiconductor heterostructures' response to external drives when charge carrier fluxes are induced. This paper highlights the role of stoichiometry not only for the quantitative tuning of the electron-hole dynamics but also for significant qualitative contrasts of time-resolved optical responses during the operation of resonant tunneling devices. Therefore, similar device architectures and different compositions have been compared to elucidate the correlation among structural parameters, radiative recombination processes, and electron-hole pair and minority carrier relaxation mechanisms. When these ingredients intermix with the electronic structure in Sb-based tunneling devices, it is proven possible to assess various time scales according to the intensity of the current flux, contrary to what has been observed in As-based tunneling devices with similar design and transport characteristics. These time scales are strongly affected not only by the filling process in the Γ and L states in Sb-based double-barrier quantum wells but also by the small separation between these states, compared to similar heterostructures based on As. Lingering times at resonance: The case of Sb-based tunneling devices V. Lopez-Richard August 1, 2023 ==================================================================== § INTRODUCTION Quantum tunneling heterostructures are envisaged for the development of optoelectronic devices operating at high speeds and high frequencies <cit.>, exploiting the intertwining between electrical currents and optical emissions produced during their operation. This correlation can be controlled by tuning the charge carrier dynamics determined by both the architecture and the materials used during the fabrication of the device. In this regard, resonant tunneling diodes (RTDs) offer a simple structure to explore and investigate charge carrier dynamics by combining quantum transport <cit.>, electronic structure tuning by material parameters <cit.>, thermalization and recombination processes <cit.>, as well as excitation and relaxation mechanisms <cit.>. 
Understanding how these features intermix during the resonant tunneling of majority carriers through a double barrier structure (DBS) and how they affect the temporal evolution of the carrier dynamics in the quasi-bond states are the main objectives of this work. For this purpose, we have decided to show the contrasting temporal evolution of the carrier dynamics in arsenic (As-) and antimony (Sb-) based DBSs and provide a unified description for seemingly divergent pictures. The analysis is performed by correlating the transport characteristics with the temporal evolution of the double-barrier quantum well (QW) optical response in both systems. This approach allows unveiling the dependence of the carrier dynamics on the flux of the majority carriers through the DBS, by means of the characterization of distinctive time scales observed at different current (voltage) conditions. The results disclose an apparent counterintuitive relationship between transport and time-resolved optical measurements. While high conduction-band offsets in Sb-based DBSs induce longer escape times of majority carriers as compared with As-based DBSs, similar current densities in both systems may give the appearance of comparable time scales. However, they are not and we can demonstrate that the emergence of contrasting time scales in these systems results from the competition among various relaxation and recombination mechanisms inside the DBS, whose weights depend not only on the modulation of the charge carrier population with the current condition but also on particular structural parameters related to composition. In particular, filling-of-states processes, a small separation between Γ and L bands inside the double-barrier QW, and non-resonant currents in Sb-based DBSs allow solving the seeming discrepancy between transport and time-resolved measurements, as well as explaining the stronger dependence of the carriers dynamics temporal evolution on the electrical current. Our observations are supported by a model showing that, in Sb-based systems, the amount of current through the resonant channel enables the occurrence of a fast non-radiative relaxation process during out-of-resonance conditions and a slow recombination and minority-carriers relaxation dynamic at resonance. These findings provide clues to comprehend the temporal evolution of the carrier dynamics in quasi-two-dimensional quantized states, discerning the role of the minority carriers in the dynamics along with the influence of limiting factors related to the flux of the electrical current. § SAMPLES AND EXPERIMENTAL METHODS Two n-type RTDs were employed for this research, labeled in what follows as RTD-As and RTD-Sb. Both were prepared via molecular beam epitaxy with pseudomorphically grown ternary emitter prewells. The As-based RTD, used as a reference sample, was grown on a Si n-doped GaAs substrate, including an In_0.15Ga_0.85As emitter prewell and a double-barrier QW with thicknesses of 5 and 4 nm, respectively. The QW is sandwiched by two 3.5-nm thick Al_0.6Ga_0.4As barriers, and the DBS is surrounded by two 20-nm thick undoped GaAs spacer layers, as displayed in Fig. <ref> (a) by the simulated band profile of the conduction band (CB) at the Γ (solid black line) and L (dashed green line) minimums, and the valence band (VB) maximum (solid red line) <cit.>. CB barriers at the Γ minimum with a height of around 0.5 eV are expected in this kind of DBSs. 
Moreover, the separation between the Γ and L minima inside the double barrier QW is Δ E_Γ-L≈ 0.33 eV, which prevents electrons from occupying L states. A 300 nm thick high-bandgap Al_0.2Ga_0.8As optical window was deposited on top of the heterostructure to avoid optical loss by absorption and guarantee optical access for infrared wavelengths. The details of the RTD-As structure are outlined in Ref. <cit.> where it was labeled as `S-InGaAs'. In turn, RTD-Sb is an Sb-based RTD grown on a Te n-doped GaSb(100) substrate. Its DBS is composed of an emitter prewell and a double-barrier QW of GaAs_0.15Sb_0.85, with thicknesses of 5 and 7 nm, respectively. The QW is surrounded by two 4.5-nm thick AlAs_0.08Sb_0.92 barriers and the DBS is enclosed by two 20-nm thick undoped GaSb spacer layers, as represented in Fig. <ref> (b) <cit.>. Here, CB barriers are higher than those in RTD-As, with heights of around 1.20 eV at the Γ minimum. In addition, the Γ-L energy separation inside the double-barrier QW is Δ E_Γ-L≈ 0.10 eV, which is lower as compared with RTD-As. At the top of the structure a 220 nm thick high-bandgap Al_0.30Ga_0.70As_0.03Sb_0.97 optical window was deposited, also to reduce optical losses and to favor infrared optical access. The heterostructure layout of RTD-Sb is fully described in Ref. <cit.> where it was labeled as `RTD 3'. Both samples were cooled down to a nominal temperature of T=4 K in an ultra-low vibration cryostat (Attocube AttoDRY1000), associated with a homemade confocal microscope and a SourceMeter (Keithley 2400), to study their transport characteristics and the temporal evolution of the optical response emitted from their QWs at different applied voltages. Two excitation lasers (PicoQuant LDH Series) were employed for the optical excitation: one laser with emission energy ħω=1.70 eV and an optical power density of 2.29 kW/cm^2 to excite RTD-As, and another laser with energy ħω=1.15 eV and an optical power density of 12.5 kW/cm^2, used to excite RTD-Sb. Both lasers were operated in continuous-wave and pulsed modes to characterize the emission spectra and the temporal evolution via photoluminescence (PL) and time-resolved PL spectroscopies, respectively. During PL measurements, the optical responses of RTD-As and RTD-Sb observed at different bias voltages were dispersed by a 75 and 50 cm spectrometer (Andor Shamrock), respectively. Then, the signals were detected by a high-speed Si charge-coupled device (Andor iDus 420) and a high-resolution InGaAs diode array (Andor DU491A) for RTD-As and RTD-Sb, respectively. During time-resolved PL measurements, both lasers operated at a repetition rate of 80 MHz, with a pulse duration of around 100 ps, while the transient responses were detected by an infrared photomultiplier tube (PicoQuant Si PMT Hybrid and Hamamatsu InGaAs/InP H10330B-75, for RTD-As and RTD-Sb measurements, respectively) coupled to a time-correlated single-photon counting electronics (PicoQuant PicoHarp300). § TRANSPORT AND TIME-RESOLVED CHARACTERISTICS The current-density, J(V), characteristics observed under illumination for RTD-As and RTD-Sb are depicted in Figs. <ref> (c) and (d), showing resonances for majority carriers transport at 3.35 and 2.10 V, with current densities peaks of 0.14 and 0.21 kA cm^-2, and peak-to-valley current ratios of 2.9 and 3.2, respectively. 
Despite CB barriers in RTD-As being lower and thinner than the barriers in RTD-Sb, the current densities and peak-to-valley current ratios are comparable in both systems, which in principle implies similar escape times for electrons inside the double-barrier QWs. In order to verify this assertion, time-resolved PL measurements were carried out in both systems. The time-resolved spectra for RTD-As and RTD-Sb at different applied voltages are shown in Figs. <ref> (a) and (b), respectively. The decay curves before (top panels), at (medium panels), and after resonance (bottom panels) are presented for the voltages indicated by vertical arrows in the J(V) characteristics in Figs. <ref> (c) and (d). For RTD-As, the curves are presented at V=1.00 V (black), V=3.35 V (red), and V=4.80 V (dark red). For RTD-Sb the responses are depicted at V=1.50 V (black), V=2.10 V (red), and V=3.30 V (dark red). In both cases, the corresponding voltages after resonance were chosen to produce the same current condition as on-resonance conditions. All transient responses were detected at 1.55 and 0.93 eV for RTD-As and RTD-Sb, respectively, which correspond to the energies of maximum QW PL emission produced by the radiative recombination between the first confined levels of the CB Γ minimum and the VB maximum in the double-barrier QWs. The PL intensities of the transient responses have been normalized and the Instrument Response Function (IRF) for each setup is also presented for reference as gray background curves. The IRF was measured by the analysis of the back reflected laser from the sample surface. In the case of RTD-Sb, the IRF shows an after-pulse at around 4.3 ns, unlike the IRF for RTD-As, as presented in Fig. <ref> (b). This after-pulse peak is produced at a high counting regime by the elastic backscattering of electrons at the first dynode of the photomultiplier tube <cit.>, which is induced by the high amplification gain and the high voltage (∼ 800 V) in between the photocathode of the photomultiplier tube. Yet, the contribution of this peak to the transient response is negligible since it represents only 0.01% of the counts in the main peak and is out of the range used for the fitting procedures. The transient curves for RTD-As in Fig. <ref> (a) reveal a fast exponential decay followed by a slow one, for both before and on-resonance conditions. At higher voltages, only the slow decay prevails. In order to extract effective lifetimes from these decays, a standard reconvolution procedure <cit.> was performed to fit the RTD-As decay curves. This procedure allows avoiding the influence of the IRF on the fast decay. The results of the fitting procedures are shown as blue curves. In contrast, the transient curves of RTD-Sb displayed in Fig. <ref> (b) show a shoulder-like emission at the peak of maximum intensity observed after 2 ns, which hampers the implementation of reconvolution models for the extraction of the effective lifetimes. However, during resonance conditions at V=2.10 V (medium panel), the slow intensity decay produces a plateau that extends up to t≈ 5 ns. This decay can be fitted by using a mono-exponential function of the form exp(-t / τ_eff), which gives an effective lifetime of τ_eff=6 ns. This value is similar to the limit of radiative recombination reported for GaAsSb/GaAs QWs <cit.>. The same fitting procedure was performed for the decay spectra between 5 and 6 ns (framed within vertical dashed lines), as indicated by straight blue lines in Fig. <ref> (b). 
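The restricted-window tail fit described above can be reproduced with a generic routine of the following kind; the array names and default window limits are illustrative (the 5-6 ns window follows the text), and this is a sketch rather than the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, A, tau_eff):
    """Mono-exponential decay, I(t) = A * exp(-t / tau_eff)."""
    return A * np.exp(-t / tau_eff)

# t_ns, counts: hypothetical arrays with the time axis [ns] and the PL transient
def fit_tail(t_ns, counts, t_min=5.0, t_max=6.0):
    """Fit exp(-t/tau_eff) within a restricted window, e.g. 5-6 ns as in the text."""
    sel = (t_ns >= t_min) & (t_ns <= t_max)
    popt, _ = curve_fit(mono_exp, t_ns[sel], counts[sel],
                        p0=(counts[sel][0], 1.0))
    return popt[1]   # effective lifetime in ns
```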
In this way, the influence of either the IRF, the plateau, or highly noisy regions at longer times was avoided. The effective lifetimes obtained by means of the fitting procedures just described are displayed in Figs. <ref> (a) and (b) as a function of the applied voltage for RTD-As and RTD-Sb, respectively. The corresponding J(V) characteristics have also been plotted for reference (blue dashed lines). Note that the effective lifetimes in RTD-As have a negligible voltage dependence, presenting values of ∼0.3 and ∼3.0 ns for the fast and slow (not presented here) decays, respectively. The slow decay can be attributed to an accumulation of photogenerated carriers at the prewell and their subsequent non-resonant tunneling <cit.>. Thus, the fast decay can be associated with the carrier dynamics inside the double-barrier QW, which can be affected by radiative and non-radiative recombination processes. In contrast, RTD-Sb shows a remarkable non-monotonic dependence of the effective lifetime on the applied voltage. For out-of-resonance conditions, τ_eff shows a value of ∼0.9 ns, but at resonance it increases with the current, peaking at ∼3.0 ns. This leads to a factor of 3 when comparing the out-of-resonance effective lifetimes of both RTDs, and a full order of magnitude at resonance. § CHARGE CARRIER DYNAMICS IN A DBS The difference between the effective lifetimes in both samples can be initially explained by the nature of the radiative processes in each type of system. In bulk GaSb samples <cit.>, radiative lifetimes are longer than in bulk GaAs samples <cit.>, depending on the temperature and donor concentrations. As reported in Ref. <cit.>, the radiative lifetime τ_0 depends on the energy of the optical emission ħω_0 according to the expression τ_0 ∝ (|⟨ψ_e | ψ_h⟩|^2 E_p r ħω_0)^-1, where |⟨ψ_e | ψ_h⟩|^2 accounts for the overlap between the electron and hole wave functions, E_p is the Kane energy, and r is the refractive index of the material <cit.>. As a consequence, higher emission energies imply shorter radiative lifetimes. Then, by taking the ratio between the double-barrier QW emission energies for both RTDs, one gets a factor of 1.55/0.93 ≈ 1.7, which is almost half the factor obtained when comparing their effective lifetimes. This result suggests that recombination processes alone are not enough to account for the discrepancy between the effective lifetimes in both samples. §.§ Temporal evolution of the carrier population and escape times To understand the dependence of the effective lifetime on the current condition in both samples, a three-level rate equation model has been considered to study the carrier dynamics inside the DBSs. The model contemplates carrier interactions via radiative recombination as well as relaxation or escape processes. In this way, the time evolution of the carrier populations in the double-barrier QWs, after their injection by resonant or non-resonant channels, can be described as dn_E/dt = S_e - n_E/τ_e - (n_E/τ_T)(1 - n/N_e), dn/dt = (n_E/τ_T)(1 - n/N_e) - n p/τ_0 - n/τ_e, dp/dt = S_h - n p/τ_0 - p/τ_h, where n_E is the electron density at the emitter side while n and p are the electron and hole densities in the ground quasi-bound state of the double-barrier QWs. S_i and τ_i are, respectively, the current source and lifetime for electrons and holes (i=e,h). N_e is the electronic density of states, and τ_T∝𝒯^-1 is the electron tunneling time through the emitter barrier, inversely proportional to the transmission probability 𝒯. 
In turn, τ_0 represents the optical recombination time and the term (1- n / N_e) refers to the saturation process in the double-barrier QWs caused by the finite density of states. This saturation can induce a plateau in the transient response at the beginning of the optical decay that has also been detected in Ref. <cit.>. Figure <ref> (a) depicts the processes considered in the three-level model after excitation with a laser pulse of energy ħω, and when the DBS is under an applied bias voltage which produces a voltage drop V_DBS. The model also assumes a constant source of electrons S_e responsible for the injection of majority carriers from the emitter side into the double-barrier QW. The injection of electrons also depends on the electron tunneling time τ_T. At the resonance voltage, the injection rate increases due to the accumulation of electrons in the prewell which acts as a reservoir for electrons. On the other hand, holes are efficiently transported through the DBS at the electron resonant condition, which hampers their accumulation at the right-hand side of the DBS in Fig. <ref> (a) but increases the probability of optical recombination inside the double-barrier QW. The absence of hole accumulation at the resonant electron condition is a consequence of the low barrier height in the valence band <cit.>. Consequently, accumulation of holes can only be detected at low voltages, as measured by the presence of a voltage shift in the J(V) characteristic under illumination for the same RTD-Sb <cit.>, as well as for a p-type Sb-based RTD <cit.>. Inside the double-barrier QW, carriers can radiatively recombine with a time scale τ_0 or decay via escape or non-radiative recombination processes with different time scales τ_e and τ_h for electrons and holes, respectively. The latter, which for simplicity will henceforth be referred to as the escape times, can be estimated by considering a ping-pong model inside the QW. According to this model, a carrier oscillates classically inside the well with a period τ_osc(E) <cit.>, with E being the mechanical energy. The semiclassical approximation of the escape frequency is then given by <cit.> 1/τ_i = 𝒯_i(E) 1/τ_osc,i(E), with i=e,h and τ_osc,i(E) as the period of the classical oscillations driven by an electrical force, eF, where F is the local electric field. Then, the solution of the classical equation of motion gives as a result <cit.> τ_osc,i(E) = 2 √(2m^*_i E_i/(eF)^2), where m^*_i and E_i are, respectively, the carriers' effective mass and confinement energy in the double-barrier QW. If a triangular well is considered due to the applied voltage, then E_i is given by <cit.> E_i = ζ_1 eF L_QW[ħ^2/2m^*_i eF L^3_QW]^1/3, where ζ_1=-2.33 corresponds to the first zero of the Airy Function (Ai(ζ_1)=0), and L_QW is the thickness of the double-barrier QW. In turn, the local electric field at the DBS can be calculated as F=|V_DBS|/l_DBS, with V_DBS as a function of the applied voltage V, and defined by  <cit.> |V_DBS(V)| = -V_0 (1-√(1+2|V|/V_0)), with V_0=el^2_DBS N_D^+/ε, l_DBS as an effective DBS length, N_D^+ as a constant 3D donor density, and ε as the permittivity. For the calculations, effective permitivitties of ε = 13 ε_0 and 15 ε_0, effective DBS length of l_DBS=31 and 41 nm, and nominal donor densities of N^+_D = 2× 10^17 cm^-3 and 5× 10^17 cm^-3, were employed for RTD-As and RTD-Sb, respectively. Thus, electric fields of around 260 and 310 kV cm^-1 can be expected for RTD-AS and RTD-Sb, respectively, under resonance conditions. 
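For orientation, the expressions above can be evaluated with a few lines of Python. The sketch below computes the voltage drop over the DBS, the resulting local field, the triangular-well confinement energy and the classical oscillation period, using the nominal RTD-Sb numbers quoted in the text; it is an illustrative implementation under these assumptions, and the transmission factor entering the escape rate is introduced in the expression given next.

```python
import numpy as np

e    = 1.602176634e-19      # elementary charge [C]
hbar = 1.054571817e-34      # reduced Planck constant [J s]
m0   = 9.1093837015e-31     # free electron mass [kg]
eps0 = 8.8541878128e-12     # vacuum permittivity [F/m]
zeta1 = 2.33811             # magnitude of the first zero of the Airy function

def v_dbs_drop(V, l_dbs, N_D, eps_r):
    """Voltage drop over the DBS, |V_DBS| = -V0 (1 - sqrt(1 + 2|V|/V0))."""
    V0 = e * l_dbs**2 * N_D / (eps_r * eps0)
    return -V0 * (1.0 - np.sqrt(1.0 + 2.0 * abs(V) / V0))

def triangular_level(F, m_eff, L_qw):
    """First level of the triangular well, E = zeta1 e F L_qw [hbar^2/(2 m e F L_qw^3)]^(1/3)."""
    return zeta1 * e * F * L_qw * (hbar**2 / (2.0 * m_eff * e * F * L_qw**3))**(1.0 / 3.0)

def tau_osc(F, m_eff, E):
    """Classical oscillation period, tau_osc = 2 sqrt(2 m E) / (e F)."""
    return 2.0 * np.sqrt(2.0 * m_eff * E) / (e * F)

# nominal RTD-Sb numbers quoted in the text
l_dbs = 41e-9; N_D = 5e23; eps_r = 15.0; L_qw = 7e-9; m_e = 0.041 * m0
F = v_dbs_drop(2.10, l_dbs, N_D, eps_r) / l_dbs     # ~3e7 V/m at resonance
E = triangular_level(F, m_e, L_qw)
print(F * 1e-5, "kV/cm;", tau_osc(F, m_e, E) * 1e15, "fs")
```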
By considering a thick enough rectangular barrier, the transmission in Eq. <ref> for different effective masses inside and outside the barrier was taken as 𝒯_i(E_i) = 16 (Υ_i + 1/Υ_i)^-2exp(-2 k_b,i L_b), which is an adaptation of the expression for equal effective masses that can be found in Ref. Manasreh2005. Here, L_b is the barrier thickness, k_b,i = √(2 m^*_b,i (U_b,i - E_i)) / ħ, Υ_i = k_i m^*_b,i / k_b,i m^*_i, and k_i = √(2 m^*_i E_i) / ħ. Here, m^*_b,i and m^*_i are the carrier effective masses in the barrier and QW, respectively, and U_b,i is the barrier height. In our calculations, escape times were determined for electrons and holes in both samples by replacing Eqs. <ref>-<ref> into Eq. <ref> and taking the nominal values of the barrier and double-barrier QW thicknesses. For electrons m^*_e=0.057m_0 and 0.041m_0, m^*_b,e=0.100m_0 and 0.123m_0 <cit.>, and U_b,e=0.55 and 1.23 eV for RTD-As and RTD-Sb, respectively. In turn, for holes m^*_h=0.495m_0 and 0.405m_0, m^*_b,h=0.600m_0 and 0.919m_0 <cit.>, and U_b,h=0.40 and 0.47 eV for RTD-As and RTD-Sb, respectively. Barrier heights in both samples were determined by means of the calculated band profiles shown in Figs. <ref> (a) and (b). Figures <ref> (b) and (c) display the calculated escape times from the double-barrier QW for electrons and holes, respectively, as a function of the local electric field in the DBS. Black and blue lines represent the escape times for RTD-As and RTD-Sb, respectively. Results suggest that the tunneling probability is reduced and consequently, the escape time increases in RTD-Sb due to its higher CB offsets as compared with RTD-As. The increase in the escape time for RTD-Sb is also a consequence of thicker barriers, as supported by calculations using Eq. <ref> when varying the barrier thickness (not shown here). The reduction of the tunneling rate for thicker barriers has been also observed in different RTDs, as reported in Refs. Tsuchiya1987,VanHoof1992_TRPL. A lower tunneling rate, as expected for RTD-Sb, can lead to a reduction in the resonance current, contrary to the observed current-density characteristics shown in Figs. <ref> (c) and (d). However, these current densities are the result of the addition of coherent and incoherent contributions <cit.>. The former current is associated with resonant tunneling processes, while the latter corresponds to a sequential tunneling of carriers that lost their phase coherence and energy due to scattering processes. Consequently, incoherent currents in RTD-Sb can be higher than in RTD-As, thus making current densities comparable in both systems. Escape times exhibited in Figs. <ref> (b) and (c) also show that electrons can escape from the double-barrier QW faster than holes, which can present slow relaxation dynamics in both systems. In RTD-Sb, electron escape times can be also of the order of nanoseconds, similar to the order of the optical recombination time obtained from the plateau of the transient curves. These results give an indication of the order of magnitude expected for carriers' escape times which is necessary to solve the set of Eqs. <ref>-<ref> and then, find the correlation between effective lifetimes and voltage conditions, as presented in the following sections. §.§ Transmission dependence of carriers population Based on the above considerations, the analysis of the carrier dynamics starts, for sake of simplicity, by considering optical recombination times longer than any other time-scale, τ_0 →∞. 
This allows neglecting the second term on the right side of Eqs. <ref> and <ref>. In addition, since electrons are majority carriers inside the double-barrier QW, then changes produced in the number of holes injected by a constant source S_h are also negligible and we can set S_h/S_e→ 0. Consequently, the solutions for the equation system depend on the initial number of electrons and holes produced by the laser pulse and considered as n_E(0)=n^0_E+Δ n_E, n(0)=n_0, and p(0)=S_hτ_h + Δ p_0 for electrons at the emitter side, and electrons and holes inside the double-barrier QW, respectively. Here Δ n_E and Δ p_0 are variations in the initial carrier populations produced by photocreation processes. Then, by assuming the stationary condition Δ n_E=Δ p_0=0, one obtains n^0_E = S_e/1/τ_e + 1/τ_T(1-n_0/N_e), and n_0 = (-b+√(()b^2 + 4a))/2a where a=-1/(S_eN_eτ_e) and b=1/N_e + τ_T/(S_eτ^2_e) + 1/(S_eτ_e). Based on these initial conditions, Eqs. <ref>-<ref> were solved numerically for n and p, by assuming τ_h/τ_e=10^2 and S_h/S_e=0.1. The optical recombination intensity was then calculated as the product between carrier populations, np, and plotted as a function of time as shown in Figs. <ref> (a) and (b) for τ_0 →∞ and τ_0 ∼τ_e, respectively. Blue lines indicate transient responses at different transmission probabilities. When τ_0 →∞, the calculated response is similar to the transient curves observed for RTD-Sb. Under this condition, three time scales can be obtained at three different temporal ranges as indicated by shadow regions in Fig. <ref> (a): a slow decay at short ranges (light gray region), a fast decay at intermediate ranges (gray region), and another slow decay at long ranges (dark gray region). The former decay is enhanced when 𝒯→ 1 (black line) or Δ n_E→∞. This decay is a consequence of the filling process in the quasi-bond states of the double-barrier QW. Taking into account that the recombination time extracted from the plateau displayed in Fig. <ref> (b) at the resonance condition for RTD-Sb is 6 ns, then it is expected that the escape time for electrons in this DBS must be in the sub-nanosecond scale. However, this estimation differs from the value of ∼60 ns presented in Fig. <ref> (b). This discrepancy can be ascribed to the contribution of faster escape channels from the Γ ground state. According to the band profile simulation of RTD-Sb presented in Fig. <ref> (b) <cit.>, the separation between Γ (blue dotted line) and L (pink dotted line) ground states at the double-barrier QW is just Δ E_Γ-L∼ 25 meV. Moreover, in GaSb-based materials, the density of states and the electron effective mass are lower in the Γ states than in the L states <cit.>. As a consequence, a fraction of hot electrons can be scattered from Γ to L states at the QW, with Γ→ L scattering times of the order of 10^2 fs <cit.>. This short lifetime can reduce τ_e with respect to τ_0. Figure <ref> (a) also shows that, at intermediate temporal ranges (gray region), electron-hole pair relaxation processes prevail, inducing a fast decay characterized by (τ_e-h)^-1≡(τ_e)^-1 + (τ_h)^-1. Yet, at longer temporal ranges (dark gray region), the slow decay is dominated by the longer minority carrier lifetime τ_h. It is worth noting that the tail obtained in these calculations at longer temporal ranges is a trace of residual minority holes that the approximation of τ_0 →∞ precludes from being depleted. 
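A transient of this type can be reproduced by integrating the three-level rate equations directly. The following sketch uses scipy with dimensionless parameters chosen in the spirit of the ratios quoted above (τ_h/τ_e = 10^2, S_h/S_e = 0.1, τ_0 →∞); the remaining values and the pulse-perturbed initial conditions are illustrative assumptions, not the exact ones behind the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless parameters (time in units of tau_e); the values are illustrative.
tau_e, tau_h = 1.0, 100.0          # tau_h / tau_e = 1e2, as in the text
tau_0 = 1e6                        # tau_0 -> infinity limit (radiative term negligible)
S_e, S_h = 1.0, 0.1                # S_h / S_e = 0.1
N_e = 1.0                          # density of states (normalization)

def rates(t, y, tau_T):
    nE, n, p = y
    fill = 1.0 - n / N_e                           # saturation factor
    dnE = S_e - nE / tau_e - nE / tau_T * fill
    dn  = nE / tau_T * fill - n * p / tau_0 - n / tau_e
    dp  = S_h - n * p / tau_0 - p / tau_h
    return [dnE, dn, dp]

def transient(transmission, dn_E0=1.0, dp0=1.0, t_max=20.0):
    """PL transient I(t) = n(t) p(t) after a pulse, for a given transmission."""
    tau_T = tau_e / transmission                   # tunneling time ~ 1/T
    # steady state perturbed by the laser pulse (illustrative initial conditions)
    y0 = [S_e * tau_e + dn_E0, 0.1, S_h * tau_h + dp0]
    t = np.linspace(0.0, t_max, 400)
    sol = solve_ivp(rates, (0.0, t_max), y0, args=(tau_T,), t_eval=t, method="LSODA")
    nE, n, p = sol.y
    return t, n * p

t, I_res = transient(transmission=1.0)    # on resonance
t, I_off = transient(transmission=0.05)   # out of resonance
```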
Consequently, the boundary of the time window considered in these simulations under such an approximation and particularly at resonance (highest transmission) must be considered with caution since during the experiments, residual holes can be mostly exhausted because of the recombination with resonance-transmitted electrons, making it difficult to observe this tail in the transient response. Simulations also indicate that when the intensity of the calculated transient response diminishes due to low transmission probabilities, as represented by light blue lines, the fast electron-hole pair relaxation dynamics prevail even at shorter temporal ranges (light gray region). Conversely, high intensities provoked by high transmission probabilities as indicated by the black line, not only reinforce the filling process inside the double-barrier QW as depicted by the plateau in the light gray temporal region but also generate a larger amount of remnant carriers at long temporal ranges, which raises the tail of the transient response as observed in the dark gray region. The presence of these carriers can increase the electroluminescence background observed in the experiments (not shown here). In contrast, if τ_0 ∼τ_e as presented in Fig. <ref> (b), only two time-scales can be reproduced, similarly to the transient response of RTD-As presented in Fig. <ref> (a). These time scales correspond to (i) a fast decay (light gray and gray regions) ruled by both electron-hole pair relaxation mechanisms and fast optical recombinations, which hampers filling processes inside the double-barrier QW, and (ii) a slow decay (dark gray regions) mostly influenced by hole lifetimes. In this case, thinner and lower barriers in addition to higher optical emission energies at the double-barrier QW reduce escaping times and radiative lifetimes. Consequently, the effective lifetime remains constant with the applied voltage, showing no modulation with the transmission. §.§ Effective lifetimes correlation with current conditions Since estimations of the experimental effective lifetimes are unavoidably performed by fitting procedures within finite temporal ranges, differences between these ranges in a transient response are crucial for the actual determination and characterization of various lifetimes. Under these conditions, the contribution of longer time scales that appears when the PL intensity is already very low, as occurs at the tails of the transient responses, can be neglected. However, it would be impossible to prevent the dependence of effective lifetimes on the transmission and consequently, on the current source for majority carriers S_e. This apparent drawback is actually an advantage of this procedure since three complementary dynamic conditions can be assessed under low and large currents. If the intensity of the transient response is considered as an exponential decrease, the effective lifetime can be defined as a function of time by τ_eff(t) = -( d/dtlnℐ)^-1, with ℐ=np being the intensity of the emitted light. Thus, the evolution of τ_eff with transmission and time can be extracted from the calculated transient responses, as shown in the color-map of Fig. <ref> (c). Here, τ_0 / τ_e = 10 was assumed and line profiles of τ_eff for three different times, t=2τ_e (dashed line), t=7τ_e (dotted line), and t=13τ_e (dot-dashed line) allow to correlate results with the decay curves in panels (a) and (b). 
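The instantaneous effective lifetime defined above, τ_eff(t) = -(d lnℐ/dt)^-1, can be extracted from a sampled transient by numerical differentiation; the helper below is a generic sketch with hypothetical array names.

```python
import numpy as np

def effective_lifetime(t, intensity):
    """tau_eff(t) = -(d/dt ln I)^-1, evaluated with a finite-difference gradient."""
    log_I = np.log(np.clip(intensity, 1e-30, None))   # guard against zero counts
    dlogI_dt = np.gradient(log_I, t)
    return -1.0 / dlogI_dt

# example: tau_eff along the calculated transients from the previous sketch
# tau_eff_res = effective_lifetime(t, I_res)
```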
Data before t=τ_e (shaded region) could not be extracted due to the presence of the peak of maximum intensity in the transient responses. The results in Fig. <ref> (c) show the contrast between short, intermediate, and long temporal ranges, as well as the necessary conditions for accessing different time scales. In this way, slow dynamics characterized by high values of τ_eff (red) can be observed when the transmission increases at short (t<5τ_e) and long (t>10τ_e) temporal ranges, also represented by light and dark gray regions respectively in Figs. <ref> (a) and (b). However, at intermediate temporal ranges (5≤ t/τ_e≤10), τ_eff has no dependence on the transmission and reaches minimum values (purple) due to fast electron-hole pair relaxation mechanisms. The dependence of the effective lifetime on transmission conditions enables the investigation of its correlation with the applied voltage. This correlation has been also emulated in Figs. <ref> (a)-(c) by solving the rate equation system for τ_0/τ_e=1, τ_0/τ_e=10, and τ_0/τ_e→∞, respectively. For the calculations, the transmission was assumed as a Lorentzian function of the voltage, as depicted by gray curves in the background, and the effective lifetimes were extracted from the calculated decay curves, at t=2τ_e. The influence of varying the ratio between holes and electrons sources was also simulated for S_h/S_e→ 0 (black lines), S_h/S_e=0.5 (red lines), and S_h/S_e = 1 (blue lines). Dotted lines correspond to the calculation of the effective lifetime just for holes τ^h_eff, for n set as constant in Eq. <ref>. Calculations were carried out for the same S_h/S_e ratios as before, as indicated by the colors of the dotted lines. The dependence of τ^h_eff with voltage allows assessing the holes' contribution to the carriers dynamics, independently on the role of electrons. This segmentation of the carriers contributions is necessary to evaluate their weight within the total effective lifetime. Calculations show that, in all cases, the effective lifetime increases with S_h/S_e, once the minority relaxation dynamics gain relevance. In addition, a large current condition at resonance leads to two opposite effective lifetime responses under applied voltages. The first response is achieved when τ_0/τ_e=1 as depicted by Fig. <ref> (a). Here, effective lifetimes exhibit a dip at the resonance condition, similar to the dip presented by τ^h_eff. Thus, holes relaxation dynamics govern the voltage dependence of τ_eff, slowing down the effective lifetime to values closer to τ_e, while out-of-resonance conditions reveal longer effective lifetimes, higher than τ_e. In double-barrier QWs, where radiative lifetimes are comparable with the electron escape times, the filling-of-states process is inefficient, inducing carrier dynamics ruled by electron-hole relaxation processes during out-of-resonance conditions, and by the escape of electrons at resonance conditions. The second type of response is originated when τ_0/τ_e>1. Large current conditions induce a peak in the effective lifetimes in this case, which become closer to τ_0 for τ_0/τ_e=10, and to τ_h for τ_0/τ_e→∞, as indicated by Figs. <ref> (b) and (c), respectively. Low electric currents during out-of-resonance conditions reduce the effective lifetime close to τ_e since in this situation the dynamics are governed by the escape of majority electrons. 
However, high electric currents at resonance enhance electron-hole pair relaxation dynamics due to the filling-of-states or intervalley-scattering processes. The influence of varying the electrons escape time on the voltage dependence of the effective lifetimes has been also explored. Figure <ref> (d) shows the results for τ_e→ 0 (red line), τ_e=1 (blue line) and τ_e→∞ (yellow line). For the simulations, S_h/S_e=0.5, τ_h∼ 10^2, and τ_0∼ 10 have been considered. Simulations show that slow (fast) electrons escape processes, which means τ_e→∞ (τ_e→ 0) make optical recombination times (electrons lifetimes) prevail and thus, effective lifetimes present no dependence with the applied voltage. On the contrary, intermediate values of τ_e lead to a competition between optical recombination and electrons escaping processes which depend on the applied voltage, as depicted by the blue line. The former situations may correspond to RTD-As, where electrons relaxation dynamics are expected to be faster than in RTD-Sb, due to its lower barriers, corroborated by the observed short effective lifetimes. § CONCLUSIONS We demonstrate how carrier dynamics in double-barrier QWs are ruled by three different time scales corresponding to radiative recombination processes, fast electron-hole pair relaxations, including intervalley scattering inside the double-barrier QW, and slow minority carrier relaxation mechanisms. Optical recombination time scales dominate in Sb-based systems under high coherent currents (resonance condition), as long as the electrons' escape time is shorter than optical recombination times. This is possible thanks to high band offsets and the small Γ-L energy separation present in Sb-based DBSs. These factors induce a longer electrons' escape time than in As-based DBSs, despite the similarities in the current densities of both systems. Fast dynamics are enhanced for low coherent currents (out-of-resonance conditions) while slow minority carrier relaxation can prevail at resonance conditions, when radiative recombination processes are too slow (τ_0 →∞), thus increasing the observed effective lifetimes in Sb-based systems. The above-presented results account for the correlation between lifetimes and current-density-voltage characteristics, particularly at resonance conditions in Sb-based DBSs. Moreover, the lack of correlation between experimental effective lifetimes and the current density for out-of-resonance voltages indicates that the source of electrons, which affects the electron-hole pair dynamics, comes essentially from the electronic coherent channel. Consequently, tuning the current through the RTD allows assessing these time scales almost independently. The authors are grateful for financial support by BAYLAT and by Brazilian agencies: the Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) - grants No. 2013/18719-1, No. 2014/07375-2, No. 2014/19142-2, No. 2014/02112-3, No. 2015/13771-0 and No. 2018/01914-0, the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), the Coordenação de Aperfeiçonamento de Pessoal de Nível Superior -Brasil (CAPES)- Finance Code 001, and the MSCA-ITN-2020 QUANTIMONY from the European Union’s Horizon 2020 programme under Grant agreement ID: 956548.
http://arxiv.org/abs/2307.01876v1
20230704183720
Discovering Asymptotic Expansions Using Symbolic Regression
[ "Rasul Abdusalamov", "Julius Kaplunov", "Mikhail Itskov" ]
cs.SC
[ "cs.SC", "physics.comp-ph" ]
Rasul Abdusalamov (corresponding author, [email protected]), Julius Kaplunov, Mikhail Itskov. Department of Continuum Mechanics, RWTH Aachen University, Germany; School of Computer Science and Mathematics, Keele University, United Kingdom. Recently, symbolic regression (SR) has demonstrated its efficiency for discovering basic governing relations in physical systems. A major impact can potentially be achieved by coupling symbolic regression with asymptotic methodology. The main advantage of the asymptotic approach is a robust approximation to the sought-for solution, bringing a clear idea of the effect of the problem parameters. However, the analytic derivation of the asymptotic series is often highly nontrivial, especially when the exact solution is not available. In this paper, we adapt SR methodology to discover asymptotic series. As an illustration we consider three problems in mechanics, including a two-mass collision, the viscoelastic behavior of a Kelvin-Voigt solid, and the propagation of Rayleigh-Lamb waves. The training data is generated from the explicit exact solutions of these problems. The obtained SR results are compared to the benchmark asymptotic expansions of the above-mentioned exact solutions. Both convergent and divergent asymptotic series are considered. A good agreement between SR expansions and analytical results is observed. It is demonstrated that the proposed approach can be used to identify material parameters, e.g. Poisson's ratio, and has high prospects for utilizing experimental and numerical data. Asymptotic Symbolic Regression Kelvin-Voigt Model Rayleigh-Lamb Waves § INTRODUCTION Nowadays, with the abundance of available data, the field of machine learning (ML) is gaining a major role in scientific research. Recent ML applications range from reducing measurement errors in quantum computations <cit.> to the acceleration of fluid dynamics simulations <cit.>. At the same time, ML-based algorithms have certain disadvantages. In particular, ML suffers from a lack of interpretability, since the algorithms employed are "black box" models. It is often difficult to gain qualitative insights into such models and to fully interpret their behavior. In addition, ML training is usually expensive with respect to computational costs and other resources. Furthermore, biased data may result in inaccurate predictions. Finally, ML algorithms may overfit on a limited data set, leading to poor generalization. In recent years, symbolic regression (SR) has demonstrated substantial potential in addressing some of the above mentioned disadvantages of ML algorithms <cit.>. The key idea of SR, as described by Augusto et al. <cit.>, is to establish the structure of an appropriate mathematical model aimed at describing given data. This is achieved by specifying a pool of functions, operations and inputs forming a solution space. The main advantage of this approach is that no a priori assumptions need to be made about the sought-for structure of the model. Koza <cit.> introduced an evolutionary computational technique, known as genetic programming, for searching the solution space. A variety of libraries and frameworks have been developed since then, e.g. see <cit.> reporting on their performance. 
At the moment SR is implemented in many areas, including discovering the governing equations for an elastic Timoschenko beam <cit.>, reconstructing orbital anomalies <cit.>, accelerating the discovery of novel catalysts <cit.> as well as investigating dynamic systems <cit.> just to mention a few. The implementation of the SR technique may greatly benefit from preliminary physical analysis of the tackled problem, including a definition of problem parameters and scaling laws. This is why the asymptotic analysis has a substantial potential in this field, e.g. see <cit.>. Asymptotic analysis is a powerful method for simplifying complex relationships to estimate their limiting behavior. It is hardly possible to make here a proper account of the current state of the art in general area of asymptotic methods. Here, we restrict ourselves by mentioning several influential books on the subject, e.g. <cit.> and references therein. At the same time, asymptotic routines can also get a new powerful impulse from adapting SR. In particular, the calculation of higher order terms may be facilitated in asymptotic expansion even when the exact analytic solution is known but cumbersome. Moreover, SR may be instrumental when the solution is found by a numerical procedure, e.g. using FEM software. In addition, SR appears to be able to extract an asymptotic series from experimental data. Thus, the combination of these two rather different approaches is highly promising for making a substantial impact on the modern research methodology. In this paper we make an initial effort to apply SR to basics problems in mechanics. Each of them has an explicit exact solution and also allows asymptotic expansions in terms of small or large problem parameters. The exact solutions are used for generating artificial training data to discover SR approximations. However, due to the physical origin of the considered examples, the training data can be equally taken from experimental measurements. The benchmark asymptotic series help to evaluate the accuracy of the obtained SR results. The paper is organized as follows. <ref> is concerned with a general introduction into symbolic regression mentioning the prospect for asymptotic series. The simplest example of a two-mass collision problem is considered in <ref>. Despite its simplicity this problem demonstrates three different types of asymptotic behavior. All of them are given by convergent series. An example of a divergent asymptotic series is presented in <ref>, dealing with a viscoelastic Kelvin-Voigt model. Finally, bending wave propagation in an elastic layer is analyzed in <ref>. The previous asymptotic consideration for Rayleigh-Lamb waves, e.g. see <cit.> are adapted for establishing an SR series. In addition, the obtained SR results are applied for the evaluation of Poisson's ratio. A conclusion and outlook are given in <ref>. § THEORETICAL BACKGROUND In the traditional sense, regression is a statistical technique that identifies the relationship between a single dependent variable and one or more independent variables. Typically, an a priori model structure, such as a linear model, is used to determine the best fit for a given set of data. Predefined parameters of the model are optimised. In the case of symbolic regression, no assumptions are made about the model structure or type. SR finds the ideal structure and the relationship between the independent variables and the dependent variable <cit.>. 
The result is an algebraic expression that optimally describes the given data set. Typically, such expressions can be described in the form of graphical trees that place operations, constants and inputs in hierarchical relationships (see <ref>). To find an optimal formulation, most symbolic regression frameworks use genetic programming (GP), an evolutionary computation algorithm. This approach was originally introduced by Koza <cit.> and utilizes a hierarchical function definition to automatically and dynamically identify potential candidates. By generating a population of possible solutions and evolving them over a specified number of generations, a large search space can be explored in an efficient way. So far SR has been used for a variety of different applications such as material modeling <cit.> or the discovery of physical relationships <cit.>. GP algorithms can typically be split into four different phases: initiation, selection, evolution and termination. In the first phase an initial set of expressions is randomly created from a predefined set of possible mathematical operations, independent variables and functions. This initial set competes in tournaments during the selection phase. In this way, random subsets are formed and the fittest individual of each subset is determined. In the next phase, the fittest individuals evolve. There are several types of mutations available, e.g., crossover, subtree, point or hoist mutation. In the case of crossover, a new individual is formed from a preliminarily selected parent and a donor. To this end, a random subtree of the parent is replaced by a subtree of the donor (see <ref>). The subtree mutation is very similar to crossover; however, only a single parent is needed. In this case, a random subtree is replaced by a random new term, allowing forgotten operations, functions or inputs to be reintroduced (see for example <ref>). Point mutation is an evolution of a single vertex of a tree, see <ref>. A function, operator or input is replaced with another one. This mutation form also allows lost functions, operations or inputs to be reintroduced. The last mutation type, called hoist mutation, is visualized in <ref>. The goal is to reduce the length of a tree. A random subtree is selected and replaced with a subtree of itself. The selection and evolution process continues until the termination phase. A termination can happen in two ways: either a specified number of generations has been reached or a specified fitness criterion is fulfilled. Note that this procedure does not guarantee finding an optimal solution. Nevertheless, the overall fitness of the population improves over the number of generations. Additionally, due to the random character of this approach, the result is not deterministic. For this work, the Python package is used <cit.>. For most applications an additional constraint is to restrict the length of the generated expressions. Usually this is done by introducing a Lagrange multiplier as a penalty term into the calculated fitness. For discovering asymptotic expansions using symbolic regression, such a restriction is counterproductive for developing an asymptotic series. Here, the parsimony coefficient is responsible for keeping the length of the expression small and is set to a maximum value of 1e-6. For the implementation it is necessary to mention that symbolic regression is generally highly sensitive to hyperparameters. For all examples considered in the following, analytical exact solutions are used to generate artificial input data. 
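To make the GP workflow above concrete, the following minimal sketch shows a typical configuration of such a run in Python. The specific package used in the paper is only cited, not named, in this text, so the choice of the open-source gplearn library below, as well as all parameter values, the training range and the function set, are illustrative assumptions rather than the authors' actual setup.

```python
# Minimal SR sketch in the spirit described above, using the open-source
# gplearn library. NOTE: the paper cites its Python package without naming it
# in this text, so gplearn and every parameter value here are assumptions.
import numpy as np
from gplearn.genetic import SymbolicRegressor

# Training data: 20 points of the exact collision solution u1 = (d-1)/(d+1)
# in the heavy-mass regime (small eta = 1/d), cf. the collision example below.
eta = np.linspace(0.01, 0.2, 20).reshape(-1, 1)
u1 = (1.0 / eta - 1.0) / (1.0 / eta + 1.0)

sr = SymbolicRegressor(
    population_size=2000,
    generations=30,
    function_set=('add', 'sub', 'mul'),   # polynomial building blocks only
    parsimony_coefficient=1e-6,           # keep the length penalty tiny
    p_crossover=0.7, p_subtree_mutation=0.1,
    p_hoist_mutation=0.05, p_point_mutation=0.1,
    random_state=0, verbose=0,
)
sr.fit(eta, u1.ravel())
print(sr._program)   # best evolved expression, ideally ~ 1 - 2*eta + 2*eta**2 - ...
```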
Asymptotic analysis is a powerful mathematical technique, often used to simplify complex relations for estimating the limiting behavior of interest e.g. see <cit.> and references therein . This is usually done by identifying a large/small problem parameter and expanding the sought for solution in term of the series involving this parameter. In the simplest example of a given function f(ε) that depends on a small parameter ε the asymptotic expansion can be written as f(ε) = f_0 + f_1 ε + f_2 ε^2 + f_3 ε^3 + ... , where f_i for i=0,1,2, ... are the coefficients to be found. In the general case a series of this type can be divergent and its performance strongly depends on the value of the parameter. The evaluation of these coefficients, especially of the higher order ones is often a challenge. The goal of this paper is to adapt an SR approach for determining these coefficients as well as even the powers of the relevant parameter. Moreover, for more general expansions considered in the paper the SR approach is adapted for establishing the basics functions appearing in the asymptotic series. Below, we present few examples of physically inspired problems originated from mechanics to illustrate the peculiarities of the proposed methodology. The derived SR series are compared with benchmark asymptotic expansions approximating the exact solutions of the studied problems. § COLLISION PROBLEM In this section we will discuss an illustrative example of the collision of two bodies of mass m_1 and m_2 as shown in <ref>. For this example, mass m_1 has a prescribed initial velocity v_0, while mass m_2 is standing still. After the collision, the masses m_1 and m_2 have the velocities v_1 and v_2, respectively, which are unknown and have to be found. The balance of linear momentum and the balance of kinetic energy are given by m_1 v_0 = m_1 v_1 + m_2 v_2 and m_1 v_0 ^2/2 = m_1 v_1 ^2/2 + m_2 v_2 ^2/2 . The solution of these equations is of the form: u_1 = δ - 1/δ + 1 and u_2 = δ (1 - u_1) , where δ = m_1/m_2 and the dimensionless quantities u_1 = v_1 / v_0 and u_2 = v_2 / v_0. The first of these relations can be expanded into an asymptotic series for three limiting behaviors, including δ≫ 1, δ≈ 1 and δ≪ 1. The strong inequality δ≪ 1 is related to the collision of a mass m_1 with an almost rigid wall. The case δ≈ 1 corresponds to the impulse transfer through masses of almost the same weight. The strong inequality δ≫ 1 governs the collision of a large mass m_1 with a small mass m_2. The small parameters for each of these three scenarios are δ≪ 1, θ = (δ - 1)/2≪ 1 and η = 1/δ≪ 1. The associated converging asymptotic series become u_1(δ̅) = -1 + 2δ̅ - 2δ̅^2 + 2 δ̅^3 - 2 δ̅^4 + ... , u_1(θ) = θ - θ^2 + θ^3 - θ^4 + ... , and u_1(η) = 1 - 2η + 2η^2 - 2η^3 + 2η^4 + ... . Now assume that the coefficients as well the powers of the small parameters in the series above are unknown and try to determine both of them using SR. Therefore, data is generated from the exact solution in <ref> for the respective domains. To this end, we implement two strategies. The first strategy starts from the chosen small parameter only, i.e. δ, θ or η, specifying it as an input. Alternatively, the inputs can be given in the form of several powers of the small parameter, e.g. for series (<ref>) we can provide the input as {δ, δ^2, δ^3}. Both strategies were successfully implemented, see <ref>. The discussion below is mainly restricted to the first strategy due to similar outcomes for the two setups in question. 
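The three benchmark expansions quoted above can also be reproduced symbolically. The short sympy check below is purely illustrative and independent of the SR runs; it only confirms the coefficients of the convergent series.

```python
# Sanity check of the three benchmark expansions of u1 = (delta-1)/(delta+1).
import sympy as sp

delta, theta, eta = sp.symbols('delta theta eta', positive=True)
u1 = (delta - 1) / (delta + 1)

# delta << 1 (collision with an almost rigid wall)
print(sp.series(u1, delta, 0, 5))
# -> -1 + 2*delta - 2*delta**2 + 2*delta**3 - 2*delta**4 + O(delta**5)

# delta ~ 1, small parameter theta = (delta - 1)/2
print(sp.series(u1.subs(delta, 1 + 2*theta), theta, 0, 5))
# -> theta - theta**2 + theta**3 - theta**4 + O(theta**5)

# delta >> 1, small parameter eta = 1/delta
print(sp.series(u1.subs(delta, 1/eta), eta, 0, 5))
# -> 1 - 2*eta + 2*eta**2 - 2*eta**3 + 2*eta**4 + O(eta**5)
```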
Although more inputs are provided for the second strategy, the performance does not necessarily improve. In <ref> we demonstrate the results for only 20 data points specified as the training data for each of the limiting setups (<ref>)-(<ref>). The discovered asymptotic expansions with the best fitness are depicted. The convergence of the expansions to the exact solution is shown. The best fits for all three series are listed in <ref>. Due to the randomness of the initial population, the algorithm is run 5 times. In this case, the fitness with respect to the number of generations as well as the best fit of all 5 samples is determined and listed in <ref>. It's worth noting that all automatically constructed asymptotic expansions are close to the exact solution outside a relatively narrow domain, where the training data is given. This is clear from <ref>, depicting the exact solution, along with its symbolic regression approximation and the training data. Here, only 20 data points calculated by formula (<ref>) are taken. The convergence to the exact solution for δ≫ 1 and δ≈ 1 is illustrated. The sought-for coefficients in the determined asymptotic series given above are nearly identical to their exact values up to the 17th order, see <ref>; see also the benchmark coefficients in the expansions (<ref>)-(<ref>). § KELVIN-VOIGT VISCOELASTIC SOLID Next, consider a more evolved example arising from the viscoelastic Kelvin-Voigt model, e.g. see <cit.>. Let us start from the constitutive relation (<ref>) σ = E ε + D d ε/d t , where σ is the stress, ε is the strain, E is the stiffness of a spring (Young's modulus) and D the viscosity of a damper, while t denotes time. This formula can be rewritten in an integral form as ε(t) = ε(0) exp(- θ t) + exp(-θ t)/D∫_t_1 = 0^tσ(t_1) exp(θ t_1) d t_1 , with θ = E/D. Introduce a typical time scale T assuming that t ≫ T. The limiting large time behavior of the last formula becomes ε(τ) = ε(0) exp(- δτ) + T/Dexp(-δτ) ∫_τ_1 = 0^∞σ(τ_1) exp(δτ_1) d τ_1 , where δ = θ T and τ = t/T. Next assume that the studied Kelvin-Voigt solid is loaded by the stress depending on time τ as follows σ̃(τ) = σ_0exp(-(1 + δ) τ)/(1 + δτ) , where σ_0 is a prescribed amplitude. In this case, formula (<ref>) can be reduced to ε(τ) = (ε(0) + Tσ_0/D I(δ)) exp(- δτ) , with (see <cit.>) I(δ) = ∫_0^∞exp(-x)/(1 + δ x) d x = e^1/δ/δΓ(0, 1/δ) , where Γ denotes the upper incomplete Gamma function. The integral I(δ) can be expanded into two different asymptotic series at δ≪ 1 and δ≫ 1. However, in contrast to the previous example, the first of them appears to be divergent. The small parameters for each of these scenarios are δ≪ 1 and η = 1/δ≪ 1. They are given by, e.g. see <cit.> and references therein, I(δ) = 1 - δ + 2 δ^2 - 6 δ^3 + 24 δ^4 + ... , and I(η) = η (-γ - logη) + η^2 (1 - γ - logη) + η^3/4 (3 - 2 γ - 2 logη) + η^4/36 (11 - 6 γ - 6 logη) + ... . Therein, γ is the Euler–Mascheroni constant. In this section we adapt symbolic regression for the construction of the analogs of the series in (<ref>) and (<ref>), using the exact formula for the integral I(δ) for generating training data. The main focus below is on the effect of the divergent behavior of <ref>. The numerical results are displayed in <ref>. Note that for the SR expansions for the case η = 1/δ≪ 1 the inputs have been provided with {η, logη}. 
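The divergent character of the small-δ series can be illustrated numerically. The sketch below evaluates the exact integral through the identity Γ(0, z) = E1(z), available in scipy, and compares it with truncations of the factorial series; the chosen value δ = 0.2 and the truncation orders are illustrative only.

```python
# Illustration of the divergent small-delta expansion of I(delta).
import numpy as np
from scipy.special import exp1
from math import factorial

def I_exact(delta):
    """I(delta) = exp(1/delta)/delta * Gamma(0, 1/delta), with Gamma(0, z) = exp1(z)."""
    return np.exp(1.0 / delta) / delta * exp1(1.0 / delta)

def I_series(delta, n):
    """Truncated divergent expansion sum_{k=0..n} (-1)^k k! delta^k."""
    return sum((-1) ** k * factorial(k) * delta ** k for k in range(n + 1))

delta = 0.2
for n in range(9):
    err = abs(I_series(delta, n) - I_exact(delta))
    print(f"n = {n}:  |series - exact| = {err:.3e}")
# The error first decreases and then grows again: for a divergent asymptotic
# series there is an optimal truncation order (here around n = 4 for delta = 0.2),
# in line with the discussion that follows.
```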
As expected, in contrast to convergent series, there is a natural threshold for the number of terms in divergent series which can be reproduced with a required accuracy over the chosen domain of the small parameter, see <ref>. The SR higher order terms strongly deviate from their counterparts in benchmarked asymptotic divergent expansions, see also Equations <ref> and <ref>. This is quite obvious, as symbolic regression aims at achieving the best possible accuracy of the provided data, which is not always feasible when using divergent asymptotic series. The deviation of the benchmark and SR expansions from the exact solution is plotted using the relative root mean square error (RRMSE) in <ref> vs. the parameter δ and the highest order n of the retained term in the analyzed series for the case δ≪ 1. In <ref> and also in the next <ref>, the range of the small problem parameter is specified as 2e-4≤δ≤0.2 while n ≤ 8. It appears that there exists an optimal order n for the analytical divergent series related to the most accurate approximation at the given value of δ. In particular, n=4 in <ref> corresponds to δ = 0.2. In this case, however, the accuracy of the SR series is higher approaching a plateau as the order n increases. § ELASTIC BENDING WAVE As the final example, we consider Rayleigh-Lamb waves propagating along an elastic layer of thickness 2h with traction free faces (<ref>), see the original papers <cit.> . The equation of motion in cartesian coordinates x_1,x_2,x_3 is given by (here and below in this section see <cit.> for more details) E/2 (1+ ν)Δu + E/2(1 + ν)(1 - 2ν)∇(∇·u) - ρ∂^2 u/∂ t^2 = 0 , where E is Young's modulus, ν is Poisson's ratio, ρ the mass density and t time. We restrict ourselves to the plane-strain problem in the plane x_1-x_3. In this case, the displacement vector is given by u = (u_1, 0, u_3). Then the boundary conditions along the faces x_3 = ± h become ∂ u_3/∂ x_1 + ∂ u_1/∂ x_3 = ν/1-ν∂ u_1/∂ x_1 + ∂ u_3/∂ x_3= 0 . The associated dispersion relation for antisymmetric traveling waves with angular frequency ω and wavenumber k can be written as γ^4 sinhα/αcoshβ - β^2 K^2 coshαsinhβ/β = 0 , with γ^2 = K^2 - 1/2Ω^2 , α^2 = K^2 - κ^2 Ω^2 and β^2 = K^2 - Ω^2 , where the dimensionless circular frequency Ω = ω h/c_2, the dimensionless wavenumber K = kh, the shear wave speed c_2 = √(E/2(1+ν)ρ) and κ = √(1-2ν/2-2ν). This equation can be solved numerically, e.g. using a nonlinear solver in Python. Below the numerical results generated by Python, are used as training data. We also present the low-wave frequency of the fundamental antisymmetric, i.e. bending, mode see also <cit.>, given by K^4_n = 3/2 (1 - ν) Ω^2∑_j=0^n A_jΩ^j , where the first four coefficients A_j take the form A_0 = 1 , A_1 = χ17 -7ν/15(1-ν) , A_2 = 1179 -818ν +409ν^2/2100(1 - ν), A_3 = χ5951 - 2603ν +9953ν^2 - 4901ν^3/126000(1-ν)^2 , with χ = √(3(1-ν)/2). The best SR appproximation is depicted in <ref> along with the exact solution and the asymptotic series from formula (<ref>) at the orders n=0,1,2,3. The aforementioned SR approximation has the same coefficients at n=3. Two latter are computed at ν = 0.3455, whereas Poisson's ratio is not specified as an input for the SR approximation. Note that using the determined SR values of the coefficient A_1 or A_2 we may restore the unknown Poisson ratio by the following formulae ν(A_1) = 119-75 A_1^2 + 5 √(15)√(15 A_1^4 - 28 A_1^2)/49 or ν(A_2) = 409 - 1050 A_2 + √(70)√(15750 A_2^2 - 4499)/409 . 
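As a numerical illustration of the inversion formulas just quoted, the short sketch below computes A_1 and A_2 from a given Poisson ratio via the closed forms above and then recovers ν from A_2. The value ν = 0.3455 mirrors the one used in the text; the snippet is an illustration only and is not part of the SR pipeline.

```python
# Round-trip check of the bending-wave expansion coefficients and nu recovery.
import numpy as np

def coeffs(nu):
    chi = np.sqrt(3.0 * (1.0 - nu) / 2.0)
    A0 = 1.0
    A1 = chi * (17.0 - 7.0 * nu) / (15.0 * (1.0 - nu))
    A2 = (1179.0 - 818.0 * nu + 409.0 * nu ** 2) / (2100.0 * (1.0 - nu))
    return A0, A1, A2

def nu_from_A2(A2):
    # closed-form inversion quoted in the text
    return (409.0 - 1050.0 * A2
            + np.sqrt(70.0) * np.sqrt(15750.0 * A2 ** 2 - 4499.0)) / 409.0

nu = 0.3455
A0, A1, A2 = coeffs(nu)
print(A1, A2)             # expansion coefficients for this Poisson ratio
print(nu_from_A2(A2))     # recovers ~0.3455
```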
This seems promising for the evaluation of Poisson's ratio from experimental data, e.g. see <cit.>. The convincing results for the considered example are presented in <ref>. § CONCLUDING REMARKS In this paper we developed a robust framework to obtain SR asymptotic expansions, illustrated by three problems in mechanics. The proposed methodology was first adapted for an initial simple setup of a two-mass collision problem, resulting in asymptotic expansions expressed through the convergent series corresponding to three limiting behaviors. The SR approximations matched the benchmark analytic expansions up to the 17th order for all mass ratios, despite the well-known sensitivity of SR to hyperparameters. The example of a Kelvin-Voigt solid is considered to illustrate the peculiarities of SR series for an originally divergent asymptotic expansion. It is observed that the accuracy of an SR expansion makes it superior to the associated optimal analytic series. The last example is concerned with a bending wave propagating along an elastic layer. The long-wave, low-frequency asymptotic behavior was tackled. A possibility of using the obtained SR results for the evaluation of the unknown Poisson ratio is indicated. An "asymptotic" way of thinking is adapted for SR implementation. Although all the training data in this paper are generated using explicit exact solutions, this approach can equivalently be applied to data captured from experiments. Another natural possibility is to make use of numerical data, e.g. FEM simulations. It is remarkable that the obtained SR expansions were discovered from very few data points in comparison with alternative ML techniques. Above we restricted ourselves to basic types of asymptotic behavior. A follow-up program may include multi-parametric asymptotic analysis, matched asymptotic expansions as well as Padé approximations and other more involved techniques. § ACKNOWLEDGEMENT Julius Kaplunov gratefully acknowledges the support of the Alexander von Humboldt Foundation which made possible his three-month visit to the Department of Continuum Mechanics at RWTH Aachen University in summer 2021. § APPENDIX § COLLISION PROBLEM § KELVIN-VOIGT VISCOELASTIC SOLID § ELASTIC BENDING WAVE
http://arxiv.org/abs/2307.03020v1
20230706143451
Particle current, noise, and counting statistics of quantum transport in the presence of a single-particle loss
[ "Shun Uchino" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas", "cond-mat.mes-hall" ]
http://arxiv.org/abs/2307.02389v1
20230705155753
A remark on the quantum complexity of the Kronecker coefficients
[ "Christian Ikenmeyer", "Sathyawageeswar Subramanian" ]
quant-ph
[ "quant-ph", "math.RT" ]
We prove that the computation of the Kronecker coefficients of the symmetric group is contained in the complexity class . This improves a recent result of Bravyi, Chowdhury, Gosset, Havlicek, and Zhu. We use only the quantum computing tools that are used in their paper and additional classical representation theoretic insights. We also prove the analogous result for the plethysm coefficients. Keywords: Kronecker coefficients, plethysm coefficients, QMA, #BQP, #P § INTRODUCTION We study the Kronecker coefficients of the symmetric group, i.e., the representation theoretic multiplicities k(,μ,ν) in the decomposition of the tensor product of Specht modules into irreducible representations: []⊗[μ] = ⊕_ν [ν]^⊕ k(,μ,ν), where ,μ,ν⊢ n are partitions of n. Recently, Bravyi, Chowdhury, Gosset, Havlicek, and Zhu <cit.> proved that the computation of the product k(,μ,ν)· d()· d(μ)· d(ν) is in the complexity class , where d() = [] is the number of standard tableaux of shape . The input partitions are given in unary, as for example also in <cit.> or <cit.>, and we also use this convention in this paper. We refine the technique from <cit.> and tighten their result as follows. The map :(,μ,ν)↦ k(,μ,ν) is in . We also obtain an analogous result for the plethysm coefficient a_(d,m), which is the representation theoretic multiplicity of the irreducible (V)-representation S_(V) in the nested symmetric power S^d(S^m(V)). The map :(d,m,)↦ a_(d,m) is in . The question whether Pleth∈ is a formalization of question 9 in Stanley's list of open positivity problems in algebraic combinatorics, <cit.>, see also <cit.>. The question whether Kron∈ is a formalization of question 10 in <cit.>. It has been resolved in several subcases, see <cit.>. The question has been explicitly mentioned again in <cit.>, and in the surveys <cit.>. For the plethysm problem, expressions have been found in some cases as well <cit.>. Both coefficients play an important role in geometric complexity theory, an approach towards computational complexity lower bounds via algebraic geometry and representation theory <cit.>. §.§ No rescaling after the computation Let {0,1}^* denote the set of finite length bit strings. Recall that is the class of functions f:{0,1}^*→ such that there exists a nondeterministic polynomial time Turing machine M such that for all inputs w∈{0,1}^* the number of accepting paths of M is exactly f(w). Let ScaledKron be the function (,μ,ν)↦ d()· d(μ)· d(ν)· k(,μ,ν). <cit.> prove ScaledKron∈, see the definition in <ref>, but they do not prove that Kron∈. Since d() can be computed in polynomial time via the hook length formula, it follows that if Kron∈, then also ScaledKron∈, but the converse is not clear, because crucially does not allow any division after the counting. In this section we list several examples of functions in that are multiples of simple functions (the function is always constant in these examples), but when divided by those, they are not expected to be in , or at least it is wide open; see also the discussion in <cit.> on this recent topic. We discuss these examples to explain our motivation to refine ScaledKron∈ to Kron∈. These examples are not necessary for understanding our results. We first start with a naive example. 
The standard example where a function can be divided by a constant and still remain in is #3Coloring: Given a graph with at least one edge, determine the number of ways to color its vertices with 3 colors, so that no two adjacent vertices have the same color. Since 3Coloring∈NP, clearly #3Coloring∈. Clearly ∀ G: #3Coloring(G) is divisible by 6, because the symmetric group _3 acts on colorings by permuting the colors (here we used that G has at least one edge). We define #3Coloring/6 via #3Coloring/6(G) = #3Coloring(G)/6. We have #3Coloring/6∈, because for each 3-coloring c it is easy to generate the set _3 c of all six 3-colorings, and decide if c is the lexicographically smallest element in _3 c or not. The Turing machine for #3Coloring/6 only accepts the 3-colorings c that are lexicographically smallest in _3 c. But for other problems it might not always be possible to easily generate the corresponding groupings from which one wants to select the lexicographically smallest element. For an edge e in a graph G let #HCE(G,e) denote the number of Hamiltonian cycles of G that use e. Clearly, #HCE∈. Smith's theorem <cit.> says that for any fixed edge e in a cubic graph G we have that #HCE(G,e) is even. However, finding a polynomial time self-inverse algorithm that maps one Hamiltonian cycle to its partner is a major open question. In particular, it is an open question whether or not #HCE/2∈. The Price-Thomason lollipop algorithm <cit.> for mapping a Hamiltonian cycle to its partner uses exponential time in the worst case <cit.>. The algorithm explores the so-called exchange graph X of a graph G: A vertex in X corresponds to a Hamiltonian path or cycle in G. <cit.> list more problems of this kind, and they write: Each proof consists of describing an “exchange graph” X, quite large compared to G. [...] Each of these theorems is not so easy to prove without seeing the exchange graph. In the same vein, in <cit.> the problem #COUNTALL-PPA_Leaf is defined as follows: Given two Boolean circuits C_1 and C_2, each with n inputs and n outputs, let N(v):={C_1(v),C_2(v)}. Define a graph G on the vertex set {0,1}^n by defining that v∈{0,1}^n and w∈{0,1}^n are adjacent iff v≠ w and v ∈ N(w) and w ∈ N(v). Clearly, each vertex in G is either isolated, or of degree 1, or of degree 2. On input (C_1,C_2), the function #COUNTALL-PPA_Leaf outputs the number of degree 1 vertices of G. This number is always even, but there exists an oracle A such that (#COUNTALL-PPA_Leaf)^A/2∉^A. Another example is a promise problem using Karamata's inequality, which is also an even function, and when divided by 2 there is an oracle separation from , see <cit.>. In conclusion, in general it is not always the case that membership in a counting class is preserved under division by even very simple functions. Therefore it is valuable to refine counting results of scaled functions, such as ScaledKron∈ to Kron∈. If our refinement to Kron∈ in Theorem <ref> would not have been easily possible, then it would have been justified to conjecture that Kron shares the fate of the problems listed in this subsection, which would have given a justification to conjecture Kron∉. §.§ Remarks Even without our refinement, one can remark the following. For any function f ∈, if the corresponding vanishing problem [f = 0] is -hard, then the polynomial hierarchy collapses to the second level, i.e., PH = Σ^_2, see Proposition 3.1.1 in <cit.>. 
Let us assume for a moment that this approach works for Kron, i.e., assume that one could prove that [Kron=0] is -hard, i.e., [Kron>0] is -hard. Since [Kron>0] ∈QMA by <cit.>, this would imply ⊆. See <cit.> for the relationship between and . Therefore, a proof of Kron∉ cannot just use the techniques in <cit.> without proving new results in quantum complexity theory. Even proving Kron∈ is a major open problem, and also its quantum verifier analogue of counting witnesses: classical polynomial sized witnesses for Kron that can be verified by a machine. Since these classes work with classical witnesses, from a combinatorial perspective they seem more useful than , which is about “counting quantum witnesses”. There exists a randomized classical oracle A with ^A ⫋^A, see <cit.>. The statement Kron∈ has been remarked without proof in <cit.>, but as far as we know <cit.> is the first publication about the subject. § PRELIMINARIES §.§ Representation theory We work over the complex numbers . Let _n denote the symmetric group on n symbols. A monotone nonincreasing finite sequence of nonnegative integers is called a partition. For a partition we write || := ∑_i∈_i. We also write ⊢ n to indicate that is a partition with ||=n. The length of a partition is defined as ℓ() := max{j |_j > 0}. Let ^T denote the transpose partition of , which is defined via ^T_i := |{j |_j≥ i}|. The Young diagram of a partition is defined as the set {(i,j)| 1 ≤ i ≤^T_j, 1 ≤ j ≤_i }, which is usually depicted as a top-left justified set of boxes with _i boxes in row i. For example, the Young diagram for (5,3) is smalltableaux5,3. A Young diagram with n boxes can be encoded for example as a sequence of n-1 bits, where the Young diagram is read row-wise from left to right and top to bottom, and a 1 represents a line break in the Young diagram, while a 0 represents the absence of a line break. The Young diagram of ^T is obtained from the Young diagram of by reflecting it about the main diagonal. A representation of _n is a finite dimensional complex vector space V with a group homomorphism ϱ:_n→(V). A representation is called irreducible if it has no nontrivial subrepresentations. The irreducible representations of _n are indexed by partitions ⊢ n, see e.g. <cit.>. For ⊢ n we denote by [] the irreducible representation of _n of type , called the Specht module. For example, [(n)] is the trivial representation, and [(1,1,…,1)] is the sign representation. For an _n-representation V let V^ denote its -isotypic component, i.e., the direct sum of all its irreducible representations of isomorphism type . Hence, with this notation, if V is an _n-representation, then the linear subspace of _n-invariants is denoted by V^(n). If we have two commuting actions of _n on V, then we write V^,∗ for the -isotypic component of the first action, and V^∗, for the -isotypic component of the second action, and V^,μ = V^,∗∩ V^∗,μ for the (,μ)-isotypic component for the action of the product group _n×_n. The group algebra [_n] is the n! dimensional vector space of formal linear combinations of permutations from _n. On [_n] we have two commuting actions, ·_ and ·_, as follows. Let π∈_n and let ∑_σ∈_nc_σ σ∈[_n]. We define * π·_ (∑_σ∈_nc_σ σ) := ∑_σ∈_nc_σ πσ, and * π·_ (∑_σ∈_nc_σ σ) := ∑_σ∈_nc_σ σπ^-1. This gives an action of _n×_n on [_n] and its decomposition into irreducibles is well known, e.g., <cit.> (note that Specht modules are self-dual, because the characters of the symmetric group are real-valued): [_n]≃⊕_⊢ n []⊗[]. 
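The decomposition of the group algebra just stated can be checked numerically for small n with elementary linear algebra. The sketch below builds the left regular representation of S_3 and the standard isotypic projectors from the character table; it is a classical sanity check added for illustration and is not part of the quantum argument.

```python
# Check of C[S_n] = sum_lambda [lambda] (x) [lambda] for n = 3:
# the rank of the isotypic projector P_lambda = (dim(lambda)/n!) sum_pi chi_lambda(pi) L(pi)
# in the left regular representation equals dim(lambda)^2, and the ranks sum to n! = 6.
import numpy as np
from itertools import permutations

perms = list(permutations(range(3)))            # the 6 elements of S_3
idx = {p: i for i, p in enumerate(perms)}

def compose(p, q):                              # (p o q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(3))

def L(p):                                       # left translation matrix on C[S_3]
    M = np.zeros((6, 6))
    for q in perms:
        M[idx[compose(p, q)], idx[q]] = 1.0
    return M

def n_fixed(p):                                 # fixed points determine the conjugacy class in S_3
    return sum(p[i] == i for i in range(3))

# character values by class, keyed by number of fixed points (3: id, 1: transposition, 0: 3-cycle)
chars = {
    '[3]   (trivial, dim 1)': {3: 1, 1: 1, 0: 1},
    '[1^3] (sign,    dim 1)': {3: 1, 1: -1, 0: 1},
    '[2,1] (std,     dim 2)': {3: 2, 1: 0, 0: -1},
}
for name, chi in chars.items():
    d = chi[3]                                  # dimension = character of the identity
    P = (d / 6.0) * sum(chi[n_fixed(p)] * L(p) for p in perms)
    rank = int(round(np.trace(P)))              # P is idempotent, so rank = trace
    print(name, 'isotypic rank =', rank)        # prints 1, 1, 4; the ranks sum to 6
```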
Hence, [_n]^,∗ = [_n]^∗, = [_n]^,. We embed _n↪_n×_n×_n via π↦(π,π,π). Given partitions , μ, ν, using this embedding, the Kronecker coefficient k(,μ,ν) is defined as the dimension of the _n-invariant space k(,μ,ν) := ([]⊗[μ]⊗[ν])^(n). For several equivalent definitions, see e.g. <cit.>. <cit.> use the character theoretic characterization k(,μ,ν) = 1/n!∑_π∈_nχ_(π)χ_μ(π)χ_ν(π), but in this paper we put more focus on invariant spaces instead. For a partition ⊢ n, the Young subgroup _⊆_n is defined as _ := __1×__2×⋯_ℓ() where the embedding is the usual one: Recall that _n is the group of bijections from {1,…,n} to itself, whereas each factor __i is the group of bijections from {_1+⋯+_i-1+1,…,_1+⋯+_i} to itself. For example, _(2,2) = {,(1 2),(3 4),(1 2)(3 4)} is isomorphic to the Klein four-group. A semistandard tableau of shape is defined as a labeling of the boxes of the Young diagram of with positive integers, such that the numbers strictly increase down each column, and are non-decreasing along each row. The content of a semistandard Young tableau t is the vector that records at position i the number of occurrences of label i in t. The Kostka number K_,μ is defined as the number of semistandard tableaux of shape and content μ. For example, K_(3,1),(2,1,1)=2 because there are two semistandard tableaux of shape (3,1) and content (2,1,1): smalltableaux112,3 and smalltableaux113,2. Clearly, K_,=1, because there exists a unique semistandard tableau of shape and content : The tableau with only entries i in row i. For a partition and an _n-representation V let V^_⊆ V denote the linear subspace of _-invariants. The following lemma is well known, but we give a proof based on Pieri's rule and invariants for the sake of completeness and because this is the main driving lemma of our Theorems <ref> and <ref>. For all ,μ⊢ n we have ([]^_μ) = K_,μ. In particular, ([]^_)=1. We will use the fact that the _μ-invariants can be iteratively obtained: []^_μ = ((…([]^(μ_ℓ(μ)))^(μ_ℓ(μ)-1))^⋯)^(μ_1). For μ⊆ we write μ⊏ if for all i we have ^T_i-μ^T_i ≤ 1. Pictorially, the set of boxes ∖μ contains at most 1 box in each column. Let i = μ_ℓ(μ), j=n-i, _j×_i↪_n. We use Pieri's rule, <cit.>, which states how the _i-invariant space []^_i of the irreducible _n-representation [] decomposes into irreducible _j-representations. []^_i = ⊕_μ⊢ j, μ⊏ [μ] We take a Young diagram of shape and mark the boxes in the difference ∖μ with ℓ(μ), and then repeat this process on each summand [μ] for i=μ_ℓ(μ)-1, and n decreased by i. We continue in this way, and the process finishes when the boxes are marked with 1. All boxes are then marked and the result is a semistandard tableau of shape and content μ by construction. It is clear that for each semistandard tableau there is exactly one way to obtain it as the output of this process. §.§ Quantum algorithms and #BQP We review some quantum computing preliminaries and notation, and refer to standard textbooks such as <cit.> for details. Let _̋n:=⊗^n^2 be the Hilbert space of states of n-qubits spanned by the computational basis {|x⟩| x∈{0,1}^n}, and denote N:=_̋n=2^n. Pure states of an n-qubit quantum system are projective vectors of unit ℓ_2-norm in $̋, where by projective we mean that for all vectors|ψ⟩∈$̋, we identify ^θ|ψ⟩ with |ψ⟩ for all θ∈[0,2π). Let ⊂ U(N) be a universal gateset and let _n∈ U(N) be the N× N identity matrix. That is, for every U∈ U(N) and ϵ>0 there exist k=k(ϵ,n)∈ and g_1,…,g_k∈ such that U_k:=g_k g_k-1… g_1 is ϵ-close to U in spectral norm. 
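For n = 3, both the character formula for the Kronecker coefficient and the lemma dim([λ]^{S_μ}) = K_{λ,μ} can be verified directly from the character table. The following sketch is an illustrative classical computation only, with the S_3 character values hard-coded.

```python
# Small classical check for n = 3 of the two facts just stated.
from itertools import permutations

perms = list(permutations(range(3)))
def n_fixed(p): return sum(p[i] == i for i in range(3))

# chi[partition][number of fixed points]; classes: id (3 fixed), transposition (1), 3-cycle (0)
chi = {
    (3,):      {3: 1, 1: 1,  0: 1},
    (2, 1):    {3: 2, 1: 0,  0: -1},
    (1, 1, 1): {3: 1, 1: -1, 0: 1},
}

def kron(l, m, n):
    """k(l,m,n) = (1/n!) sum_pi chi_l(pi) chi_m(pi) chi_n(pi)."""
    return sum(chi[l][n_fixed(p)] * chi[m][n_fixed(p)] * chi[n][n_fixed(p)]
               for p in perms) // 6

print(kron((2, 1), (2, 1), (2, 1)))   # 1
print(kron((2, 1), (2, 1), (3,)))     # 1
print(kron((3,), (3,), (1, 1, 1)))    # 0

# dim([lambda]^{S_mu}) for mu = (2,1): average of chi_lambda over S_(2,1) = {id, (1 2)}
S_mu = [p for p in perms if p[2] == 2]
for lam in chi:
    dim_inv = sum(chi[lam][n_fixed(p)] for p in S_mu) // len(S_mu)
    print(lam, dim_inv)   # (3,): 1   (2,1): 1   (1,1,1): 0 -- the Kostka numbers K_{lambda,(2,1)}
```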
We call any product C=g_s… g_1 with ∀ i:g_i∈ a quantum circuit of size s in the gateset . We will henceforth work with standard gatesets consisting of finitely many one and two qubit gates. Any decomposition =̋_̋1⊕_̋2⊕…⊕_̋k for k≤ N defines a measurement on $̋, but not all measurements can be performed efficiently. In this paper we will only be interested in Projection Valued Measures (PVM), defined by a collection of idempotent Hermitian endomorphismsM={Π_1,…,Π_k}so that for eachi∈1,…, kwe haveΠ_i=̋_̋i(i.e.Π_iprojects onto_̋i), andΠ_iΠ_j=δ_ij Π_j, i.e. the projections are pairwise orthogonal. All projectors that we will use in this paper are Hermitian,. Measuring a pure state|ψ⟩byMproduces as output a random variableXtaking values1,…,kwith probability(X=i)=⟨ψ|Π_i|ψ|$⟩, and a post-measurement state |ψ_i⟩:=Π_i|ψ⟩/(X=i). Note that since Π_i is a projector, ⟨ψ|Π_i|ψ|=⟩Π_i|ψ⟩^2. A computational basis measurement of _̋n is the PVM defined by {Π_x=x| x∈{0,1}^n}. A single qubit computational basis measurement is called accepting if the outcome is one, and rejecting otherwise. We now formally define the two main complexity classes that the results of this article are concerned with. QMA: A language L∈{0,1}^* is in the class if there is a polynomial time Turing machine that on input w, |w|=n, outputs a quantum circuit C_w acting on m+a qubits with m,a= n, such that for every n∈ and every w∈{0,1}^n we have that * C_w takes as input |ψ⟩∈_̋m and a many ancillary qubits in the initial state |0^a⟩∈_̋a and outputs the single bit outcome of measuring the first qubit in the computational basis {Π_0,Π_1}, and * (completeness) if w∈ L, then ∃|ψ_w⟩∈_̋m such that Π_1 C_w |ψ_w⟩|0^a⟩^2≥ 2/3, and * (soundness) if w∉ L, then ∀|ψ⟩∈_̋m, Π_1 C_w |ψ⟩|0^a⟩^2≤ 1/3, where for convenience we have used Π_1 to implicitly mean Π_1⊗_m+a-1. If in the second and third bullet point in the definition of we restrict the witness states |ψ_w⟩ and |ψ⟩ to be classical, i.e. basis vectors {|y⟩| y∈{0,1}^m} in the computational basis, then we get the class . #BQP: A function f:{0,1}^*→ is in if there is a polynomial time Turing machine that on input w, |w|=n, outputs a quantum circuit C_w acting on m+a qubits with m,a= n such that there exists a decomposition _̋m = 𝒜_w ⊕ℛ_w of _̋m into linear subspaces of so-called accepting and rejecting witnesses such that * Π_1 C_w|ψ⟩|0^a⟩^2 ≥ 2/3 for all |ψ⟩∈𝒜_w, and * Π_1 C_w|ψ⟩|0^a⟩^2 ≤ 1/3 for all |ψ⟩∈ℛ_w, and * f(w)=𝒜_w. See <cit.> for a more detailed discussion of this complexity class. The circuit C_w can be considered the quantum verifier in a protocol, and f(w) is the dimension of the subspace of witnesses accepted with probability at least 2/3. The acceptance and rejection conditions above are described by an operator E_w:=(_m⊗⟨0^a|)C_w^†(Π_1⊗_m+a-1)C_w(_m⊗|0^a⟩) on _̋m, which is a positive operator because _m⊗|0^a⟩ is an isometric embedding _̋m↪_̋m+a, C_w is a unitary on _̋m+a, and Π_1⊗_m+a-1 is a projector on _̋m+a. However this operator might not in general define a PVM on _̋m; but for our purpose we will construct it to be one, as in <cit.>. Consequently, when E_w is a projector, 𝒜_w= E_w, 𝒜_w= E_w. In this case, C_w accepts witness states in 𝒜_w with certainty, and rejects states in ℛ_w with certainty, corresponding to the class _1^1 with perfect soundness and completeness. It is known that there is a quantum oracle relative to which _1≠ for the case of perfect completeness <cit.>. 
Interestingly, it is known in contrast that _1=, under any gateset in which the Hadamard and reversible classical operations can be implemented exactly <cit.>. We say that a PVM {Π_i} on n-qubits can be efficiently implemented if we can construct a polysize unitary circuit U_i acting on n+a many qubits where a= n such that on an arbitrary input state |ψ⟩|0^a⟩∈⊗^n+a^2 the circuit sets a designated qubit to 1 with probability p_i:=⟨ψ|Π_i|ψ|$⟩. To be more precise, the circuit implements a unitary map|ψ⟩|0^a⟩|0⟩↦√(p_i)|ψ⟩|0^a⟩|1⟩ + ^θ√(1-p_i)|ψ⟩|0^a⟩|0⟩whereθ∈[0,2π)can be an arbitrary phase. Note that usingn^2bits one can encode a permutationπ∈_nas its permutation matrix, or more specifically, as itsn^2entries when reading row-wise from left to right. Hence, for our Kron problem,[_n]can be embedded into_̋mwithm = n^2such that any functionf ∈[_n]with||f||_2 = 1can be written in the computational basis as |f⟩ = ∑_σ∈_nf(σ)|σ⟩. Here,|σ⟩is shorthand for|enc(σ)⟩, withenc(σ)being them-bit string that encodesσ. Similarly, the input register^3_̋mof the quantum algorithm can be thought of as[_n]⊗[_n]⊗[_n]. The basis vectors in the Fourier basis[_n]≃[]⊗[]are indexed by triples(,i,j), where⊢nandiandjare standard tableaux of shape. The quantum Fourier transform maps a basis vector in the Fourier basis to a basis vector in the computational basis, with a suitable computational basis encoding for the triple(,i,j). We write⇉to denote a Hermitian idempotent linear map, i.e., a Hermitian projector. Note that if two Hermitian projectors commute, then their composition is again a Hermitian projector. <cit.> explain how the following two Hermitian projectors can be efficiently implemented: * Weak Fourier sampling Π^ : [_n] ⇉[_n]^, = [_n]^,∗ = [_n]^∗, ≃ []⊗[]. The set of pairwise orthogonal projectors M={Π^|⊢ n} is a PVM on _̋m and can be implemented using the Weak Fourier sampling circuit <cit.> that acts on _̋m and an ancillary register _̋n-1 into which the partition label can be copied. A measurement is performed on the ancilla and the circuit rejects unless the outcome is ⊢ n. If the procedure on input |ψ⟩ accepts, then the post-measurement state of the _̋m register is proportional to Π^|ψ⟩. * For any _k-representation (V,ϱ) we have the generalized phase estimation procedure P^_k : V ⇉ V^(k). Here we require that V has a basis B = {v_i} labeled by polynomially sized labels (v_i), and that ∀π∈_k : π v_i ∈ B, and there is a uniform classical circuit that on input (π,(v)), where π∈_k, outputs (π(v)). This will be obvious in all our cases: The product of transposition matrices can be computed using a small circuit, and the inverse of a transposition matrix (needed for the right action) is just its transpose. The projector P^_k can be implemented using the generalized phase estimation circuit <cit.> that acts on _̋m⊗_̋v where the first register is interpreted as an ancillary register, and the second one encodes V. At the end it implements Π^(n) on the ancilla: The procedure rejects unless the measurement outcome is the trivial representation (n). If the procedure on input |ψ⟩ accepts, then the post-measurement state of the _̋v register is proportional to P^_k|ψ⟩. Non-adaptive intermediate measurements in a circuit can be deferred to the end without affecting the probability distribution of the output. 
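The classical linear-algebra content of generalized phase estimation, namely projecting onto a subgroup-invariant subspace by group averaging, can be illustrated on a small example. The sketch below averages the right action of S_2 on ℂ[S_3]; it only illustrates the projector P^{S_k} as a matrix, not the quantum circuit that implements it.

```python
# Group-averaging projector: P = (1/|H|) sum_{pi in H} R(pi) is Hermitian,
# idempotent, and its rank counts the H-invariants. Here H = S_2 acts on C[S_3]
# from the right (sigma -> sigma * pi^{-1}), as in the definition of the right action.
import numpy as np
from itertools import permutations

perms = list(permutations(range(3)))
idx = {p: i for i, p in enumerate(perms)}
def compose(p, q): return tuple(p[q[x]] for x in range(3))
def inverse(p):
    q = [0, 0, 0]
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def R(p):                                  # right translation: sigma -> sigma * p^{-1}
    M = np.zeros((6, 6))
    for q in perms:
        M[idx[compose(q, inverse(p))], idx[q]] = 1.0
    return M

S2 = [(0, 1, 2), (1, 0, 2)]                # the subgroup S_2 = {id, (1 2)} inside S_3
P = sum(R(p) for p in S2) / len(S2)        # averaging projector P^{S_2}

print(np.allclose(P @ P, P), np.allclose(P, P.T))   # idempotent and Hermitian: True True
print(int(round(np.trace(P))))                      # rank = dim C[S_3]^{S_2} = 3!/2! = 3
```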
The multiqubit measurements required in the above two cases can instead be replaced by a postprocessing circuit that checks if the registers to be measured are in the required state, and sets a single ancilla to|1⟩if yes and|0⟩otherwise. We note that a subtlety arises in our use ofP^_k: The action will be an action of different subgroups_k⊆_non[_n], sometimes from the left, and sometimes from the right. Both actions are readily implementable using small circuits. Even though more general constructions are possible (see e.g. <cit.>), we only use these two constructions as building blocks to achieve our goal, but we make a refined use of generalized phase estimation. § PROOF OF THEOREM <REF>: THREE-STEP CONSTRUCTION To indicate more clearly which tensor position is which (in particular when reordering the positions), we write[_n]_1⊗[_n]_2⊗[_n]_3instead of just[_n]⊗[_n]⊗[_n]. We will construct an idempotent endomorphism of[_n]_1⊗[_n]_2⊗[_n]_3whose image has dimensionk(,μ,ν), and it will be a composition of commuting Hermitian projectors, each of which is either implementable via weak Fourier sampling or via generalized phase estimation. This then finishes the proof of Theorem <ref>. The overview of the maps is depicted in Figure <ref>. On[_n]_1⊗[_n]_2⊗[_n]_3we have an action of(_n)^6: on each of the three factors we have an action of(_n)^2, and the three actions of(_n)^2commute. §.§ First step: Projection to the isotypic component The following three Hermitian projectors commute and hence can be composed into one Hermitian projector: Π_1^⊗_2 ⊗_3 : [_n]_1⊗[_n]_2⊗[_n]_3 ⇉ [_n]_1^,⊗[_n]_2⊗[_n]_3, _1 ⊗Π_2^μ⊗_3 : [_n]_1⊗[_n]_2⊗[_n]_3 ⇉ [_n]_1⊗[_n]_2^μ,μ⊗[_n]_3, _1 ⊗_2 ⊗Π_3^μ : [_n]_1⊗[_n]_2⊗[_n]_3 ⇉ [_n]_1⊗[_n]_2⊗[_n]_3^ν,ν. The composition of these three idempotents is calledΠ_1^⊗Π_2^μ⊗Π_3^ν. It projects[_n]_1⊗[_n]_2⊗[_n]_3to the(,,μ,μ,ν,ν)-isotypic component([_n]_1⊗[_n]_2⊗[_n]_3)^,,μ,μ,ν,ν. Recall that ([_n]_1⊗[_n]_2⊗[_n]_3)^,,μ,μ,ν,ν = ([_n]_1⊗[_n]_2⊗[_n]_3)^,∗,μ,∗,ν,∗ = ([_n]_1⊗[_n]_2⊗[_n]_3)^∗,,∗,μ,∗,ν Note thatΠ_1^⊗Π_2^μ⊗Π_3^νcan be implemented by a quantum circuit using weak Fourier sampling three times <cit.>. §.§ Second step: Projection to the global invariants from the left We embed_n↪(_n)^6viaπ↦(π,,π,,π,). We defineP_^_nto be the projector to the_n-invariant space (calledQin <cit.>). Note thatP_^_nandΠ_1^⊗Π_2^μ⊗Π_3^νcommute, becauseΠ_1^⊗Π_2^μ⊗Π_3^νcan be defined in terms of the right actions, i.e., the unused three copies of_n:(Π_1^⊗Π_2^μ⊗Π_3^ν)(V) = (V)^∗,,∗,μ,∗,ν. We conclude thatP_^_n ∘Π_1^⊗Π_2^μ⊗Π_3^νis idempotent. §.§ Third step: Refinement from the right For an_n-representationVand a partition⊢nwe defineP^_:V→V^_to be the projection to the Young subgroup invariant space of the action of_⊆_n. Note that the elements from different factors__iof_commute. HenceP^_can be written as the composition of commuting projectors to invariant spaces: P^_ = P^__1∘ P^__2∘⋯ Each factor can be implemented using generalized phase estimation. On[_n]_1⊗[_n]_2⊗[_n]_3we have three of these projectors:P__1^_,P__2^_, andP__3^_, using copies2,4, and6of(_n)^6. Since they use different copies of_n, these three projectors commute, and we call their compositionP__1^_ ⊗P__2^_ ⊗P__3^_. ClearlyP__1^_ ⊗P__2^_ ⊗P__3^_commutes withP_^_n, because they use disjoint copies of_n. A crucial insight is that the projectorΠ_1^⊗Π_2^μ⊗Π_3^νcan also be expressed in terms of the left actions:(Π_1^⊗Π_2^μ⊗Π_3^ν)(V) = V^,∗,μ,∗,ν,∗. Hence,Π_1^⊗Π_2^μ⊗Π_3^νandP__1^_ ⊗P__2^_ ⊗P__3^_commute. 
We conclude thatP__1^_ ⊗P__2^_ ⊗P__3^_ ∘P_^_n ∘Π_1^⊗Π_2^μ⊗Π_3^νis a Hermitian projector. §.§ Kronecker dimension To prove Theorem <ref> it remains to prove that(P__1^_ ⊗P__2^_ ⊗P__3^_ ∘P_^_n ∘Π_1^⊗Π_2^μ⊗Π_3^ν) = k(,μ,ν). This proof is illustrated in Figure <ref>. It is instructive to think about the spaces in their Fourier basis, i.e.,[_n]_1⊗[_n]_2⊗[_n]_3 ≃⊕_⊢n, μ⊢n, ν⊢n ( []__1⊗[]__1⊗[μ]__2⊗[μ]__2 ⊗[ν]__3⊗[ν]__3 ) . The first projectorΠ_1^⊗Π_2^μ⊗Π_3^νmaps onto the(,,μ,μ,ν,ν)-isotypic component, which is[]__1⊗[]__1⊗[μ]__2⊗[μ]__2 ⊗[ν]__3⊗[ν]__3. The second projectorP_^_nmaps this space to the left invariant space(([]__1 ⊗[μ]__2 ⊗[ν]__3)^_n ⊗[]__1 ⊗[μ]__2 ⊗[ν]__3. The third projectorP__1^_ ⊗P__2^_μ ⊗P__3^_νmaps each right Specht module to a one dimensional space, according to Lemma <ref>:([]__1 ⊗[μ]__2 ⊗[ν]__3)^_n ⊗([]__1)^_⊗([μ]__2)^_μ ⊗([ν]__3)^_ν. By (<ref>), the dimension of this last vector space isk(,μ,ν). § PROOF OF THEOREM <REF>: AN ANALOGOUS CONSTRUCTION We embedφ:_d ↪_mdviaφ(σ)(ad-d+b) := σ(a)d-d+bfor all1≤a ≤d,1 ≤b ≤m. Consider the partition(m^d) := (m,m,…,m)ofmd, and consider the Young subgroup_(m^d) ≃_m ×_m ×⋯×_m ⊂_md. The wreath product_m≀_d ⊂_mdis the subgroup generated by_(m^d)and the image ofφ. For an_md-representationVthe invariant spaceV^_(m^d)carries the action of_dviaφ, and its invariant space(V^_(m^d))^_dis exactly the invariant spaceV^_d≀_m. For a(V)-representationW, the symmetric power^d Wis again a(V)-representation, isomorphic to the invariant space(^d W)^_d. The plethysm coefficienta_(d,m)is the multiplicity of the irreducible(V)-representation of typein^d(^m V). This can also be expressed in terms of the symmetric group:a_(d,m)is the dimension of the wreath product invariant space[]^_m≀_d. This can readily be seen from Schur-Weyl duality <cit.> ^dmV ≃ ⊕_ S_(V) ⊗ [] ⟹ ^d(^m V) ≃ ((^dmV)^_(m^d))^_d≃ (^dmV)^_m≀_d ≃ ⊕_ S_(V) ⊗ []^_m≀_d The proof of Theorem <ref> is analogous to the proof of Theorem <ref> with some minor adjustments. The commuting projectors are as follows: [_n] ≃ ⊕_⊢ n []_⊗[]_ Π^⇉ []_⊗[]_ P_^_m≀_d⇉ []_^_m≀_d⊗[ν]_ P_^_⇉ []_^_m≀_d⊗[ν]_^_ The projectors commute for the same reason as in the proof of Theorem <ref>, and (P_^_∘ P_^_m≀_d∘Π^) = []^_m≀_d = a_(d,m).P_^_m≀_dcan be implemented via two commuting projectors:P_^_m≀_d = P_^_(d)∘P_^_(m^d), where forP_^_(d)we use the action viaφ. This proves Theorem <ref>. Acknowledgements: We would like to thank Michael Walter for helpful comments on our draft.
http://arxiv.org/abs/2307.01480v1
20230704051639
Expanding Scanning Frequency Range of Josephson Parametric Amplifier Axion Haloscope Readout with Schottky Diode Bias Circuit
[ "Minsu Ko", "Sergey V. Uchaikin", "Boris I. Ivanov", "JinMyeong Kim", "Seonjeong Oh", "Violeta Gkika", "Yannis K. Semertzidis" ]
hep-ex
[ "hep-ex" ]
1,2]Minsu Ko 2]Sergey V. Uchaikin 2]Boris I. Ivanov 1,2]JinMyeong Kim 2]Seonjeong Oh 2]Violeta Gkika 1,2]Yannis K. Semertzidis [1]Korea Advanced Institute of Science and Technology, Daejeon, South Korea [2]Center for Axion and Precision Physics Research of Institute for Basic Science, Daejeon, South Korea Expanding Scanning Frequency Range of Josephson Parametric Amplifier Axion Haloscope Readout with Schottky Diode Bias Circuit [ August 1, 2023 ============================================================================================================================== The axion search experiments in the microwave frequency range require high sensitive detectors with intrinsic noise close to quantum noise limit. Josephson parametric amplifiers (JPAs) are the most valuable candidates for the role of the first stage amplifier in the measurement circuit of the microwave frequency range, as they are well-known in superconducting quantum circuits readout. To increase the frequency range, a challenging scientific task involves implementing an assembly with parallel connection of several single JPAs, which requires matching the complex RF circuit at microwaves and ensuring proper DC flux bias. In this publication, we present a new DC flux bias setup based on a Schottky diode circuit for a JPA assembly consisting of two JPAs. We provide a detailed characterization of the diodes at cryogenic temperatures lower than 4 . Specifically, we selected two RF Schottky diodes with desirable characteristics for the DC flux bias setup, and our results demonstrate that the Schottky diode circuit is a promising method for achieving proper DC flux bias in JPA assemblies. § INTRODUCTION Axion search experiments are fundamental physical research dedicated to resolve the strong charge parity (CP) problem and describe the nature of dark matter<cit.>. The haloscope experiments, which use high-quality 3D cavities at high magnetic fields<cit.>, are performed in the microwave frequency range<cit.> at the Center for Axion and Precision Physics Research (CAPP). The total system noise in the experiments is close to the quantum noise limit<cit.>. This achieves the sensitivity of the Dine-Fischler-Srednicki-Zhitnitsky axion-to-microwave photon coupling model<cit.>. The key element of such a measurement chain is a JPA. The JPAs used for axion search experiments at CAPP, at frequencies around 1 GHz, have a tunable range from 30  to 40 , with a gain of more than 20 dB<cit.>. The haloscope axion search experiments with a high magnetic field of 12  are performed using a dilution refrigerator inserted into the liquid He dewar. Such experiments are long-term, high-cost, and require scanning in a wide frequency range within a single cooldown of the dilution refrigerator (DR), with the experiment temperature less than 30 <cit.>. This defines the requirements for the microwave detection chain to scan in a wide frequency range. In our particular case, introducing more DC lines into the dilution refrigerator with a 1 K pot cooled by liquid helium flow increases the overall heat load and reduced the efficiency of the mixture condensing. A scheme with a connection of several JPAs with a single flux bias coil was designed, implemented, and applied in the axion search experiment performed at CAPP. Using this scheme, the scanning frequency range increased up to 120  <cit.>. 
In order to implement a multiple JPA connection scheme, to simplify frequency adjustment and avoid interference between different JPA, it is often necessary to separate the flux bias for each JPA using different twisted pairs of wires. However, adding more wires to an experimental fridge can be challenging due to limited space and a limited number of fridge connectors. To address this issue, we developed and implemented a new circuit design using two Schottky diodes. This circuit enabled us to apply two dc flux biases independently to two JPAs using the same twisted pair of wires, by separating the flux biases using currents of different directions to bias one or the other JPA. The circuit incorporated a diode rectifier that operated at the cold stage of the fridge. We tested the diodes at both 300  and 4  to obtain their current-voltage characteristics (I–V curves). § THEORY The I–V curve of diodes is governed by simplified Shockley diode equation<cit.>: I_D = I_S(e^V_D/nV_T-1), here, I_D and I_S are the diode current and reverse-bias saturation current, respectively. n is the ideality factor, V_D is the voltage across the diode, and V_T is the thermal voltage, which is given by q_eV_T=k_BT. Where, the q_e is the elementary charge, k_B is the Boltzmann constant. The equation describes the behavior in the forward-bias region and some part of the reverse-bias region. As shown in the Shockley equation, the IV behavior of a diode depends on temperature. As temperature increases, the threshold voltage decreases<cit.> since the high temperature helps to excite the carriers and the probability of overcoming the barrier increases. On the other hand, the operation of each particular type of Schottky diodes at cryogenic temperatures necessitates a specialized investigation. The JPAs used in our experiments are designed based on aluminum technology and are operated at temperatures below 50 , where the cooling power of a dilution fridge is insufficient to handle the power dissipation of the diodes. Therefore, the diode circuit should be mounted on the still, 1 , or 4  stage of the fridge. The properties of Schottky diodes at cryogenic temperatures can change dramatically, necessitating a preliminary experimental study. It has been demonstrated that the Schottky barrier allows the diode to respond quickly to current switching and operate correctly at low temperatures <cit.>. Previous researches have tested Schottky diodes at temperatures and characterized their properties including I–V behavior and hole diffusion length near 100  <cit.>. One of the results indicated a satisfactory level of concordance between the simulations based on theories and measurements, particularly at least in the vicinity of 100  <cit.>. More recently, the characteristics of Schottky diodes have been examined in extreme environments, including strong magnetic field <cit.> and cryogenic temperatures <cit.>. One of them revealed the expected behavior down to 77  with an ideality factor close to one. It is shown that the magneto-resistance of the diode slowly drops with increasing of the magnetic field up to 6  at both 4.2  and 77 . The objective of this experiment is to design a circuit utilizing RF Schottky diodes that can regulate the current direction in the DC flux bias circuit of JPAs operating at temperatures of 4  and below. § EXPERIMENTAL SETUP The SMS7630-040LF radio frequency (RF) Schottky diodes were experimentally studied at 300  and about 3 . 
The Keithley 2601B precision source measure unit and the Lakeshore 325 cryogenic temperature controller were used in order to measure I–V curves and LakeShore Cernox temperature sensor factory calibrated down to 1 . The measurements were performed in the dry closed cycle cryocooler based on a two-stage pulse tube and Cryomech compressor. The experimental setup is shown in Figure <ref>. In order to mount the diodes to the cryostat and perform the measurements, the samples were soldered to the PCB and placed at the 4  stage of the cryocooler. The additional thermal anchoring for the samples was provided by copper braided wire, shown as Cu thermal anchoring in Figure <ref>. In order to improve the connection between the diodes and the copper braided wire, we placed additional copper sheet. This copper sheet is shown as Cu film in Figure <ref>. The DC connection starts from the room temperature Fischer connector at 300 , which goes to the 24pin connector at 50  stage. From the 50  stage to the 4  stage, the cabling was done using twisted pair brass cables in CuNi shields, shown as cable1 to cable4 in Figure <ref>. The shields of the cables were thermally anchored at each temperature stage. § EXPERIMENTAL STUDY The measurement procedure involved sweeping the DC current over the diode and measuring the resulting voltage. The 4-point measurements were performed on two diodes at a stable cryogenic temperature of 3.05 . The diode measurements were conducted over a range of -200  to 200 , which corresponds to the bias current range of the JPAs used in CAPP axion search experiments. The measured I–V curves of the first Schottky diode at 300  and 3.05  are shown in Figure <ref>. It was observed that the Schottky diode exhibited higher forward voltage drop and breakdown voltage at cryogenic temperatures than at room temperature. The resistance of the diode at two different temperatures was obtained and is shown in Figure <ref>. Based on our measurements, the tested diodes are capable of serving as rectifiers throughout the entire temperature range between 3 and 300 . To quantitatively compare the rectification performance at room temperature and 3.05 , we estimated the parallel rectification factor ρ_p. The simplified schematic with two diodes connected in parallel and two JPAs is shown in Figure <ref>. The total current I_in is divided into I_1 and I_2 to provide an equivalent voltage to both diodes, where the resistance of each diode is denoted as R_for and R_rev, respectively. The dependence of R_for and R_rev on I_1 and I_2 is shown in Figure <ref>. The rectification factor of the diodes is represented by the parameter ρ_p, which would ideally equal 1, indicating full isolation with no current flowing over the reversed diode. ρ_p = I_1/I_in. The estimation of ρ_p is based on the assumption that the voltages across the two diodes are equivalent and that the circuit follows Kirchhoff's law, which is a valid assumption since both JPA flux bias coils are superconducting. The system of equation follows: I_1R_for(I_1) = I_2R_rev(I_2), I_in = I_1+I_2. To estimate of the ρ_p we need to obtain one of the currents I_1 or I_2. For the applied bias current I_in=200, the resistance values from Figure <ref> and solving the system of equations we obtained I_1 = 197.43 at 280  and I_1 = 198.54 at 3.05 . This yields ρ_p = 0.987 at 280  and ρ_p = 0.993 at 3.05 . 
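For illustration, the estimation procedure for ρ_p can be written as a few lines of Python. The measured, current-dependent resistances R_for(I) and R_rev(I) are only available as figure data and are not reproduced in this text, so the resistance functions in the sketch below are assumed placeholders; only the solution of the system of equations is demonstrated.

```python
# Sketch of the parallel rectification factor estimate described above.
# R_for and R_rev below are placeholder (assumed) resistance models, not the
# measured curves; the root finding for I1*R_for(I1) = I2*R_rev(I2),
# I_in = I1 + I2 is the part being illustrated.
from scipy.optimize import brentq

def R_for(i_uA):        # forward-biased diode resistance [Ohm] vs current [uA] (assumed)
    return 50.0 + 5.0 / max(i_uA, 1e-6)

def R_rev(i_uA):        # reverse-biased diode resistance [Ohm] (assumed constant)
    return 2.0e4

I_in = 200.0            # total bias current [uA]

def residual(i1):       # equal-voltage condition across the two diodes
    i2 = I_in - i1
    return i1 * R_for(i1) - i2 * R_rev(i2)

I1 = brentq(residual, 1e-6, I_in - 1e-6)
rho_p = I1 / I_in
print(f"I1 = {I1:.2f} uA, rho_p = {rho_p:.3f}")   # close to 1 for the assumed resistance ratio
```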
These rectification factors mean that 99.3% of the total current flows through the forward-biased diode and only 0.7% leaks through the reverse-biased one at 3.05 K, while the leakage is 1.3% of the total current at 280 K. The RF Schottky diode thus performs better at 3 K than at room temperature. We performed measurements on several diodes for our JPA circuit, and the I–V curves of two of these diodes measured at 3.05 K are shown in Figure <ref>. Both diodes exhibit desirable characteristics in forward bias, with no significant difference between their I–V curves. However, their breakdown voltages in reverse bias differ by about 1 V, as one can see in Figure <ref>. This slight difference in breakdown voltage does not affect the operation of our JPA flux bias circuit, since we use only the positive direction of the DC current and the reverse current leakage is around 1% at an applied bias of 200. Therefore, we proceeded with implementing the characterized diodes in the JPA flux bias circuit; a simplified schematic of the circuit applied to the JPA bias line is shown in Figure <ref>.
§ CONCLUSION
The RF Schottky diodes were thoroughly characterized at a temperature of 3.05 K using a closed-cycle cryocooler. The parallel rectification factor was defined and used to quantitatively compare the RF Schottky diodes at room and cryogenic temperatures, which showed that the diodes are capable of rectification down to 3 K. These characterized diodes are currently being used in the JPA flux bias circuit of the CAPP-MAX axion search experiment, which is based on a 12 T magnet and carried out at the Center for Axion and Precision Physics Research of the Institute for Basic Science. The RF diode circuit presented here allowed us to implement a multiple-JPA connection scheme and simplified frequency adjustment, which is necessary for axion search scanning over a wide frequency range. This scheme avoids interference between different JPAs by means of flux bias separation and reduces the number of DC lines introduced into the dilution refrigerator, thereby reducing the total heat load on the refrigerator. The implemented circuit enabled us to apply two DC flux biases to two JPAs independently using the same twisted-pair cabling; in particular, this was done by separating the flux biases using different directions of the DC current for one or the other JPA. As a further application, we plan to install the proposed diode circuit in an axion search readout based on 6 JPAs.
§ ACKNOWLEDGEMENT
This work is supported by the Institute for Basic Science, IBS-R017-D1.
http://arxiv.org/abs/2307.00970v1
20230703124130
Maximally entangled real states and SLOCC invariants: the 3-qutrit case
[ "Hamza Jaffali", "Frédéric Holweck", "Luke Oeding" ]
quant-ph
[ "quant-ph", "math.AG", "math.RT", "81P42, 81P45, 14N07, 15A69, 15A72" ]
Over-The-Air Federated Learning: Status Quo, Open Challenges, and Future Directions Lina Bariah, Hikmet Sari, and Mérouane Debbah =================================================================================== The absolute values of polynomial SLOCC invariants (which always vanish on separable states) can be seen as measures of entanglement. We study the case of real 3-qutrit systems and discover a new set of maximally entangled states (from the point of view of maximizing the hyperdeterminant). We also study the basic fundamental invariants and find real 3-qutrit states that maximize their absolute values. It is notable that the Aharonov state is a simultaneous maximizer for all 3 fundamental invariants. We also study the evaluation of these invariants on random real 3-qutrit systems and analyze their behavior using histograms and level-set plots. Finally, we show how to evaluate these invariants on any 3-qutrit state using basic matrix operations. § INTRODUCTION Quantum entanglement is a fundamental concept in quantum physics that is responsible for the correlation between two or more parties of a multipartite quantum system, independently of the distance separating those components. With no classical equivalent, it is a key resource in Quantum Computing and Quantum Communications and is believed to be one of the main reasons for their speed up and advantage. Being able to classify and quantify quantum entanglement is thus crucial to understand and develop more powerful protocols <cit.>. Qutrit systems, which involve three-level quantum systems, have garnered attention for their potential to enhance the functionality of quantum computing <cit.>. While entangling qutrits is more challenging than entangling qubits, qutrits offer a richer set of possibilities for entanglement and allow for more complex operations Jaffali2016,Goss2022. This makes them promising candidates for a range of quantum applications, including quantum error correction <cit.> and quantum simulation <cit.>. On the other hand, the study of multipartite maximally (or highly) entangled states is relevant in the context of Quantum Information Processing and Quantum Computing. Indeed, maximally entangled states are crucial in the implementation of quantum networks <cit.>, in the field of MBQC (measurement-based quantum computer) <cit.>, in quantum error correcting codes Gour2007,Raissi2018, and in quantum communication protocols Cleve1999,Helwig2013 to mention a few. For bipartite or multipartite systems (with two or three particles), the question of maximally entangled states has been studied from different angles Enriquez2016, aulbach2010maximally. The question becomes more difficult as the number of particles increases (already the four-qubit case is considerably more difficult than the three-qubit case). Depending on the measure we consider, we may obtain radically different states, and the notion of “maximally entangled” is not as straightforward as in the two or three-qubit case. In this study, we propose a new candidate from the perspective of algebraic measures of entanglement, more specifically, the absolute value of the hyperdeterminant of 3 × 3 × 3 tensors, representing 3-qutrit quantum systems. We also apply the same methods to the other fundamental SLOCC (Stochastic Local Operations with Classical Communication) invariants and find states that maximize them. The article is organized as follows. In <Ref>, we recall the main works concerning 3-qutrits maximally entangled states. 
In <Ref>, we briefly introduce the notion of the hyperdeterminant and the classification of 3-qutrit systems. In <Ref>, we present the new real 3-qutrit maximally entangled state candidate, and we prove that it maximizes the 3 × 3 × 3 hyperdeterminant. In <Ref> we also propose candidates that maximize the fundamental invariants, i.e., a set of invariant polynomials that can be used to express all algebraic invariants including the 3× 3× 3 hyperdeterminant. <Ref> discusses the distribution of random 3-qutrit quantum states with respect to the absolute value of the hyperdeterminants and fundamental invariants. In <Ref> we provide practical tools to evaluate the invariants described in this paper on any 3-qutrits quantum state. We conclude our work in <Ref> and provide directions for future works. §.§ Previous Works The question of maximally entangled for 3-qutrit systems has already been studied in the past. In 2016, Enriquez et al. proposed two random states |Ψ_1⟩ and |Ψ_2⟩, and one symmetric 3-qutrit state |D[3,(1,1,1)]⟩ for which the minimal decomposition entropy is the largest <cit.>: |Ψ_1⟩ = 0.193e^1.7i|000⟩ + 0.323e^−2.01i|001⟩ + 0.16e^−2.16i|002⟩ + 0.229e^−2.22i|010⟩ + 0.232e^−3.12i|011⟩ + 0.186e^−2.5i|012⟩ + 0.239e^−2.34i|020⟩ + 0.141e^−0.411i|021⟩ + 0.159e^−0.512i|022⟩ + 0.099e^1.54i|100⟩ + 0.144e^−2.43i|101⟩ + 0.148e^2.13i|102⟩ + 0.263e^−1.62i|110⟩ + 0.322e^0.475i|111⟩ + 0.216e^−1.95i|112⟩ + 0.068e^−1.39i|120⟩ + 0.030e^−2.89i|121⟩ + 0.171e^1.91i|122⟩ + 0.253e^−2.82i|200⟩ + 0.022e^−0.225i|201⟩ + 0.06e^−1.2i|202⟩ + 0.003e^2.64i|210⟩ + 0.133e^−1.52i|211⟩ + 0.202e^2.2i|212⟩ + 0.194e^1.08i|220⟩ + 0.207e^1.13i|221⟩ + 0.274e^−2.29i|222⟩ , |Ψ_2⟩ = 0.245e^0.074i|000⟩ + 0.024e^2.49i|001⟩ + 0.248e^1.66i|002⟩ + 0.069e^1.55i|010⟩ + 0.256e^0.114i|011⟩ + 0.118e^−2.88i|012⟩ + 0.313e^−1.24i|020⟩ + 0.076e^2.77i|021⟩ + 0.149e^0.208i|022⟩ + 0.208e^2.56i|100⟩ + 0.227e^−2.88i|101⟩ + 0.157e^2.27i|102⟩ + 0.072e^3.08i|110⟩ + 0.2e^−1.07i|111⟩ + 0.199e^−1.87i|112⟩ + 0.13e^−1.95i|120⟩ + 0.133e^1.5i|121⟩ + 0.218e^−1.68i|122⟩ + 0.244e^−1.84i|200⟩ + 0.191e^−3.05i|201⟩ + 0.049e^2.61i|202⟩ + 0.144e^1.22i|210⟩ + 0.226e^2.14i|211⟩ + 0.278e^−2.46i|212⟩ + 0.227e^0.773i|220⟩ + 0.186e^−2.11i|221⟩ + 0.218e^−1.52i|222⟩ , |D[3,(1,1,1)]⟩ = 1/√(6)( |012⟩ + |021⟩ + |102⟩ + |120⟩ + |201⟩ + |210⟩) . The same year, Hebenstreit et al. proposed a way to characterize maximally entangled sets for the generic 3-qutrit states (see <Ref>), and how one can reach them through LOCC (Local Operations and Classical Communication) <cit.>. Even if the SLOCC classification of three-qutrits is well mathematically understood (see <Ref>), deciding which type of qutrit entanglement is useful for quantum protocols is not straightforward. For instance, in <cit.> the violation of Bell-like inequalities is studied for different type of three qutrit entangled states including the generalization of the GHZ-state: |GHZ_333⟩=1/√(3)(|000⟩+|111⟩+|222⟩) , two generalizations of the |W⟩ state: [ |W⟩ = 1/√(3)(|100⟩+|010⟩+|001⟩),; |W_333⟩ = 1/√(6)(|100⟩+|200⟩+|010⟩+|020⟩+|001⟩+|002⟩) , ] as well as the Aharonov state or singlet state: |𝒜⟩=1/√(6)(|012⟩+|201⟩+|120⟩-|021⟩-|102⟩-|210⟩) . In this study, the |W⟩ state achieves higher violation than the |GHZ_333⟩ of Bell's inequality, unlike the three-qubit case. The maximum violation is obtained for a state that was found numerically to be: |ψ_3⟩=(3-2√(3))|000⟩+|011⟩+|101⟩+|110⟩/2√(6-3√(3)) . 
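For readers who want to experiment with these states numerically, they are conveniently represented as 3×3×3 coefficient arrays a_{ijk}. The short numpy sketch below (the `ket` helper is ours, not from the paper) builds a few of the states quoted above and checks their normalization.

```python
import numpy as np

def ket(i, j, k, amplitude=1.0):
    """3x3x3 coefficient array with a single basis amplitude a_{ijk}."""
    t = np.zeros((3, 3, 3), dtype=complex)
    t[i, j, k] = amplitude
    return t

# |GHZ_333> = (|000> + |111> + |222>) / sqrt(3)
ghz = (ket(0, 0, 0) + ket(1, 1, 1) + ket(2, 2, 2)) / np.sqrt(3)

# |W> = (|100> + |010> + |001>) / sqrt(3)
w = (ket(1, 0, 0) + ket(0, 1, 0) + ket(0, 0, 1)) / np.sqrt(3)

# Aharonov (singlet) state: antisymmetric combination of the six permutations of 0,1,2
aharonov = (ket(0, 1, 2) + ket(2, 0, 1) + ket(1, 2, 0)
            - ket(0, 2, 1) - ket(1, 0, 2) - ket(2, 1, 0)) / np.sqrt(6)

for name, state in [("GHZ_333", ghz), ("W", w), ("Aharonov", aharonov)]:
    norm = np.sum(np.abs(state) ** 2)
    print(f"{name}: sum |a_ijk|^2 = {norm:.6f}")   # all equal to 1
```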
The conclusions of <cit.> are similar under the presence of noise, they also considered in their paper the following Dicke states: [ D^2 _3=1/√(15)(|200⟩+|020⟩+|002⟩+2(|110⟩+|101⟩+|011⟩)),; D^3 _3=1/√(10)(|012⟩+|021⟩+|102⟩+|120⟩+|201⟩+|210⟩+2|111⟩) . ] Let us also note that the Aharonov state |𝒜⟩ was shown to be the resource to solve the quantum byzantine agreement problem <cit.> as well as the liar detection problem <cit.>. We will see that the Aharonov state also maximizes the fundamental invariants for 3 qutrits. The literature on entanglement of pure qutrit quantum states is not restricted to the three-partite case and higher system have been discussed goyeneche2014genuinely,Caves2000,Zha2020. § THE HYPERDETERMINANT The hyperdeterminant, a generalization of the determinant, is a homogeneous polynomial evaluated on tensors. Geometrically, the zero locus of the hyperdeterminant defines the equation of the dual variety X^*, i.e., the (closure) of the set of tangent hyperplanes of X, where X is the variety of separable states. The standard reference for hyperdeterminants is <cit.>, and one may also enjoy reading the introductions by Ottaviani Ottaviani_Hyperdeterminants, Ottaviani_5Lectures. Since the hyperdeterminant is invariant under SLOCC (Stochastic Local Operations and Classical Communication), it can be used to characterize entanglement in quantum systems <cit.>. For example, in the case of three-qubit systems, the hyperdeterminant can be used to differentiate between the two genuine entangled states |GHZ⟩ and |W⟩ <cit.>. In addition to giving qualitative information about the entanglement of a quantum state, the hyperdeterminant can also be seen as a quantitative measure of entanglement Jaffali2019, de2021mermin. The study of hyperdeterminants has led to the development of new techniques for characterizing entangled states and understanding their properties Miyake2003, Holweck2014, Holweck2014a, Jaffali2016, Jaffali2020, AlsinaThesis, csen2013hyperdeterminants. We recall that a general 3-qutrit state |ψ⟩∈_̋333 = ^3 ⊗^3 ⊗^3 can be expressed as: |ψ⟩ = ∑_i,j,k=0^2 a_ijk|i⟩⊗|j⟩⊗|k⟩= ∑_i,j,k=0^2 a_ijk|ijk⟩ , with ∑_i,j,k=0^2 |a_ijk|^2 = 1 . In the case of three-qutrit systems the SLOCC group is G_SLOCC = _3() ×_3() ×_3() . Let Δ_333 denote the 3× 3× 3-hyperdeterminant, which is, up to scale, the unique nonzero irreducible polynomial (of degree 36) in the variables a_ijk such that, Δ_333(|Φ⟩) = 0 ⇔ ∃ a,b,c ∈^3 such that {⟨(a ⊗ b ⊗ z) ||Φ⟩ = 0, ⟨(a ⊗ y ⊗ c) ||Φ⟩ = 0, ⟨(x ⊗ b ⊗ c) ||Φ⟩ = 0 } for all x, y, z ∈^3 . In coordinates, we say that Δ_333 is the unique up to scale nonzero polynomial (of degree 36) that vanishes on all tensors that are SLOCC equivalent to ∑_I∈{0,1,2}^3 x_I|I⟩ with x_000 = x_001 = x_002 = x_010 = x_020 = x_100 = x_200 = 0. In <Ref>, we recall the classification of 3-qutrits systems under the action of SLOCC. In <Ref>, we recall the expression of the hyperdeterminant for a generic 3-qutrit state. §.§ Classification of SLOCC Orbits §.§.§ Orbit Classification Over In 2000, Nurmiev presented a classification of 3 × 3 ×3 arrays with entries in up to the action of the group SLOCC <cit.>. According to his classification, the orbits of _̋333 under the action of G_SLOCC consist of 5 families (4 depending on parameters, 1 parameter free). The normal forms corresponding to each family are also described in <cit.>. Crucial to Nurmiev's classification is the embedding of _̋333 into a graded Lie algebra 𝔢_6 ≃_3^× 3⊕_̋333⊕_̋333^* . 
As such, any tensor |ψ⟩ in _̋333 can be written in a unique way, as |ψ⟩ = |ψ_S⟩+|ψ_N⟩ with |ψ_S⟩ semi-simple, |ψ_N⟩ nilpotent part, and [|ψ_S⟩, |ψ_N⟩] = 0. Note, a tensor of size 3×3×3 is said to be semi-simple if its orbit under SLOCC is closed and said to be nilpotent if the closure of its orbit under SLOCC contains the zero tensor. Moreover, just as the determinant of a matrix only depends on its eigenvalues, it is known that the value of any continuous invariant on |ψ⟩ = |ψ_S⟩+|ψ_N⟩∈_̋333 only depends on |ψ_S⟩ <cit.>. The semi-simple tensors in the case of 3-qutrits are also called generic 3-qutrits states, as it is mentioned in <cit.>. Nurmiev has shown that any complex semi-simple tensor is SLOCC-equivalent to a state of the form |ψ_S⟩ = a |v_1⟩ + b |v_2⟩ + c |v_3⟩, with (a,b,c) ∈^3 , and |a|^2 + |b|^2 + |c|^2 = 1 , and |v_1⟩ = 1/√(3)( |000⟩ + |111⟩ + |222⟩), |v_2⟩ = 1/√(3)(|012⟩ + |120⟩ + |201⟩), |v_3⟩ = 1/√(3)(|021⟩ + |102⟩ + |210⟩) . Back to Nurmiev's classification, each family can be defined as a linear combination of the three vectors |v_1⟩, |v_2⟩ and |v_3⟩, plus a nilpotent part. The complex coefficients a, b and c associated with these vectors must satisfy a set of conditions, listed as follows: * first family F_1 : abc≠ 0, (a^3+b^3+c^3)^3 - (3abc)^3 ≠ 0, * second family F_2 : b(a^3+b^3) ≠ 0, c=0, * third family F_3 : a ≠ 0, b=c=0, * fourth family F_4 : c=-b ≠ 0, a=0. The tensors of the first family correspond to semi-simple tensors and thus have no nilpotent part. We will be especially interested in this family since it is the only one that does not annihilate the hyperdeterminant. The fifth family of Nurmiev's classification, which does not depend on any parameter, is called the nilpotent cone, and only contains nilpotent orbits (no semi-simple part). The variety of nilpotent states coincides with the nilpotent cone, which is the variety where all invariants vanish Mumford1994,Bremner2013,Bremner2014. §.§.§ Real Semi-simple Elements In <cit.>, Di Trani et al., proved that the classification of semi-simple elements for real three-qutrit states is like Nurmiev's original proof. Each semi-simple real three-qutrit state belongs to one of the five families. Only the first family has a chance for the hyperdeterminant to not vanish and in <cit.> it was shown that all real orbits of the first family have a representative with a,b,c real. In other words, the semi-simple part of any real three-qutrit state that does not vanish the hyperdeterminant is SLOCC equivalent to one the form |ψ_S⟩=av_1+bv_2+cv_3 with a,b,c∈, and a^2+b^2+c^2=1 . According to <cit.> the real semisimple elements in the families 2 and 3 split into subcases (and up to permutation), of which we recall their normalized versions: * F_2': Set |w_1⟩ = √(2/21)(-|212⟩ + |200⟩ + 2|120⟩ - 2|111⟩ + 1/2|022⟩ - 1/2|001⟩) , |w_2⟩ = √(2/21)(-|222⟩- |201⟩+2|121⟩+ 2|110⟩-1/2|012⟩ - 1/2|000⟩) . Then real representatives of this type are of the form |ψ_ss,2⟩ = a_1|w_1⟩ + a_2|w_2⟩ , for a_1,a_2∈ with a_1^2+a_2^2= 1, and a_2(a_2^2 - 3 a_1^2) ≠ 0. * F_3': Set |w⟩ = 5√(41)/8(-2|001⟩ - 2|010⟩ - 2|100⟩+ 2|111⟩-1/8|222⟩) . Then the elements of this family have the form aw for a∈. To have a state (with norm 1) we require that a^2 = 1, so the real states have a=± 1. |ψ_ss,3⟩ = ±5√(41)/8(-2|001⟩ - 2|010⟩ - 2|100⟩+ 2|111⟩-1/8|222⟩) . §.§ Expressing the Hyperdeterminant It is known that Schläfli's method computes Δ_333, but this is one of the few cases for which such a straightforward method exists <cit.>. 
On the other hand, in this case the invariant ring for G_SLOCC is finitely generated by three fundamental invariants I_6, I_9, and I_12 (respectively of degrees 6, 9 and 12), whose expressions are found in Briand2004, Bremner2013, and we recall these in <Ref>. In 2014, Bremner, Hu, and Oeding <cit.> provided an expression of the hyperdeterminant Δ_333 as a polynomial in the fundamental invariants: Δ_333 = I_6^3 I_9^2 - I_6^2 I_12^2 + 36 I_6I_9^2 I_12 + 108 I_9^4 - 32 I_12^3 . The restriction of <Ref> to normalized states |ψ_S⟩ from <Ref> is [ Δ_333(|ψ_S⟩) = -4/3^18 a^3b^3c^3( a+b+c ) ^3×; ( a^2+2 ab-ac+b^2-bc+c^2) ^3( a^2-ab+2 ac+b^2-bc+c^2) ^3×; ( a^2-ab-ac+b^2+2 bc+c^2) ^3( a^2-ab-ac+b^2-bc+c^2)^3 . ] § NEW MAXIMALLY ENTANGLED 3-QUTRIT STATES In this section, we present a set of real 3-qutrit states that are maximally entangled from the perspective of the hyperdeterminant. We determine the maximum of the absolute value of the hyperdeterminant for 3-qutrit real states by approximate numerical methods, and then establish an exact and simplified expression of the real state maximizing it via exact symbolic methods. We first optimize the absolute value of the hyperdeterminant, using numerical methods (gradient-based, meta-heuristic (random walk, particle swarm optimization)), for both real and complex states. We obtained the same assumed maximum, of about 6.907059 × 10^-13. For example, here are two states |ψ_S_1⟩ and |ψ_S_2⟩, reaching the assumed maximal numerical value of the hyperdeterminant, expressed as: |ψ_S_1⟩ : (-0.4597089177, 0.6279551660, 0.6279649847) , |ψ_S_2⟩ : (0.4187234964+0.1897453668 i, -0.5719715278-0.2591902230 i, -0.5719688559-0.2592064016 i) . We studied the real critical points of the hyperdeterminant using the software and , and repeated the computation in <cit.> to provide an exact expression of the maximum as well as completely enumerate the maximizers: The global maximum of the absolute value of the hyperdeterminant |Δ_333|, when restricted to real states is √(3)/2^19× 3^14. The global max is reached at 12 semi-simple points a|v_1⟩ + b|v_2⟩ + c|v_3⟩ with the following values and their permutations: (a,b,c) = (rs,s,s) , with r = (1±√(3)) and s=±√(1/(r^2 + 2)) . Let f=Δ_333 be the 3×3×3 hyperdeterminant, (see <Ref>). Let a,b,c ∈ such that a^2 + b^2 + c^2 = 1. We aim to determine all critical points of the hyperdeterminant, i.e., points where the gradient of f is zero. We can rewrite c=±√(1-a^2-b^2), in order to simplify the study of f to the two variables a and b. We distinguish three different cases: Case 1: c=0 or c=1 The factors a^3, b^3 and c^3 appear in the expression of f, and therefore if c=0, f(a,b,c)=f(a,b,0)=0. If c=1, then a=b=0, and we also have f(a,b,c)=f(0,0,c)=0. In that case, |f| reaches the minimum. Case 2: c=√(1-a^2-b^2) If we substitute the expression of c in f, we retrieve: f(a,b) = -4/3^18 a^3b^3( -a^2-b^2+1 ) ^3/2( a+b+√(-a^2-b^2+1)) ^3 ( 2 ab-a√(-a^2-b^2+1)-b√(-a^2-b^2+1)+ 1 ) ^3 ( -ab+2 a√(-a^2-b^2+1)-b√(-a^2-b^2+1)+1 ) ^3 ( -ab-a√(-a^2-b^2+1)+2 b√(-a^2-b^2+1)+1 ) ^3 ( -ab-a√(-a^2-b^2+1)-b√(-a^2-b^2+1)+1 ) ^3 . If we differentiate f in both variables and we solve the system of equation {∂ f(a,b)/∂ a= 0, ∂ f(a,b)/∂ b= 0} over the real numbers. We use symbolic calculus for solving these equations, namely the function . We obtain 40 different solutions, on which we evaluate f(a,b). Some of the solutions are points in ^2, and some depend on b. For the points we evaluate the function and take the absolute value of the result. 
For the solutions depending on b, we substitute them in the expression of the hyperdeterminant to retrieve an expression f(b) only depending on one variable. We again differentiate this expression and search for critical points. We obtain new points, on which we also evaluate the hyperdeterminant. We regroup all values and look for the maximal one, in absolute value. In this case, the maximum value is exactly √(3)/2^193^14. Case 3: c=-√(1-a^2-b^2). This case is entirely analogous to Case 2, and the results turn out to be identical. Hence the global maximum of |Δ_333| is exactly equal to √(3)/2^193^14. Moreover, the maximum values occur at the points listed in the statement of the theorem. Second proof and enumeration of maximizers: We also ran the same procedure as outlined in the proof of <Ref> to algebraically compute the set of critical points. Set q = a^2 + b^2 + c^2 -1, the equation of the sphere over the real numbers. We found 60 critical points on 0-dimensional components defined by the system of equations {∇Δ_333, q }, 36 of which were real solutions. Among the real solutions, the maximum values of |Δ_333| are obtained at 3 ideals (symmetric up to permuting a,b,c), one of which is: ⟨ b-c, a^2-2 a c-2 c^2, a^2 + b^2 + c^2 -1⟩, and which has the following 4 solutions: (a,b,c) = (rs,s,s) , with r = (1±√(3)) and s=±√(1/(r^2 + 2)). The approximations of the coordinates of the solutions in this case are: [ (.459701, -.627963, -.627963) , (.888074, .325058, .325058) ,; (-.459701, .627963, .627963) , (-.888074, -.325058, -.325058) . ] Permuting a,b,c we obtain the 12 states that attain global max of |Δ_3,3,3|. Finally, we note that the other types of semi-simple elements (<ref>) and (<ref>) vanish identically on Δ_333, so cannot be global maxima. We conjecture that the maximum value of the hyperdeterminant for real three-qutrit states is in fact also the maximum value on complex values. Indeed, we ran several heuristic optimization methods over the complex for a generic semi-simple element |ψ_S⟩=av_1+bv_2+cv_2 with (a,b,c)∈^3 and |a|^2+|b|^2+|c|^2=1. All our methods returned the same maxima. Similarly, perturbing over the complex numbers the extremum found over the real does not provide any better solution. § MAXIMIZING THE FUNDAMENTAL INVARIANTS For this section, we will use the following expressions of the fundamental invariants (see <cit.>) evaluated on a general semi-simple normalized state |ψ_S⟩ as expressed in <Ref>: I_6(ψ_S) = 1/27( a^6-10a^3b^3-10a^3c^3+b^6-10b^3c^3+c^6 ) , I_9(ψ_S) = -√(3)/243(a-b) (a-c) (b-c) (a^2 + ab+ b^2) (a^2 + ac + c^2) (b^2 + bc + c^2) , I_12(ψ_S) = 1/729([ a^9b^3+a^3b^9+a^9c^3+b^9c^3+a^3c^9+b^3c^9; -4 (a^6b^6+a^6c^6+b^6c^6) +2 (a^6b^3c^3+a^3b^6c^3+a^3b^3c^6) ]) . We note that the basic invariant in degree 12 is only defined up to a multiple of I_6^2, but one might argue that this version of I_12 is natural because it does not contain the monomial a^12, for instance, and has small integer coefficients (up to a global rescaling). Let |v_1⟩, |v_2⟩ be the first two basic semi-simple states defined at (<ref>). Then, the global maximum of the absolute value of the fundamental invariants I_6, I_9 and I_12 restricted on generic 3-qutrits, with maximum values respectively 1/18 = .05555, √(6)/3888 = 0.0006300127939 and 1/7776 = 0.000128600823, is reached for the real 3-qutrit state |M_333,I⟩ = 1/√(2)( |v_1⟩ - |v_2⟩) . Moreover, all real semi-simple states that obtain the maximum are permutations of |M_333,I⟩. 
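Both of the maxima stated above are easy to confirm numerically from the closed-form restrictions of Δ_333 and of I_6, I_9, I_12 to the semi-simple family quoted earlier. The following sketch (numpy; the function names are ours) does exactly that, before the algebraic argument given next.

```python
import numpy as np

def hyperdet_ss(a, b, c):
    """Restriction of Delta_333 to a normalized semi-simple state a|v1> + b|v2> + c|v3>."""
    factors = np.array([
        a + b + c,
        a*a + 2*a*b - a*c + b*b - b*c + c*c,
        a*a - a*b + 2*a*c + b*b - b*c + c*c,
        a*a - a*b - a*c + b*b + 2*b*c + c*c,
        a*a - a*b - a*c + b*b - b*c + c*c,
    ])
    return -4.0 / 3**18 * (a*b*c)**3 * np.prod(factors**3)

def invariants_ss(a, b, c):
    """Fundamental invariants I6, I9, I12 evaluated on the same semi-simple family."""
    i6 = (a**6 - 10*a**3*b**3 - 10*a**3*c**3 + b**6 - 10*b**3*c**3 + c**6) / 27
    i9 = (-np.sqrt(3) / 243 * (a - b) * (a - c) * (b - c)
          * (a*a + a*b + b*b) * (a*a + a*c + c*c) * (b*b + b*c + c*c))
    i12 = (a**9*b**3 + a**3*b**9 + a**9*c**3 + b**9*c**3 + a**3*c**9 + b**3*c**9
           - 4*(a**6*b**6 + a**6*c**6 + b**6*c**6)
           + 2*(a**6*b**3*c**3 + a**3*b**6*c**3 + a**3*b**3*c**6)) / 729
    return i6, i9, i12

# Maximizer of |Delta_333| from the first theorem: (a, b, c) = (r*s, s, s), r = 1 + sqrt(3)
r = 1 + np.sqrt(3)
s = np.sqrt(1.0 / (r**2 + 2))
print(abs(hyperdet_ss(r*s, s, s)), np.sqrt(3) / (2**19 * 3**14))   # both ~6.9071e-13

# Maximizer of the fundamental invariants: |M_333,I> = (|v1> - |v2>)/sqrt(2);
# the text identifies the Aharonov state as attaining the same maximal values.
i6, i9, i12 = invariants_ss(1/np.sqrt(2), -1/np.sqrt(2), 0.0)
print(abs(i6) - 1/18, abs(i9) - np.sqrt(6)/3888, abs(i12) - 1/7776)   # all ~0
```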
Here is an algebraic method for finding real critical points of a polynomial f(a,b,c) on the sphere {a^2 + b^2 + c^2 =1}, which we carried out in Macaulay2 (<cit.>) but could be done in any symbolic algebra system. Since the variables a,b,c are not independent on the sphere, we could solve for c and treat a,b as independent variables as was done above. This involves square roots, which can cause problems with some symbolic methods. Instead, we implicitly differentiate and compute the gradient as follows: First implicitly differentiate the equation a^2 + b^2 + c^2 =1 with respect to a and b treating a,b as independent and c = c(a,b) in order to solve for the values ∂ c/∂ a = -a/c and ∂ c/∂ b = -b/c . Then compute ∇ f(a,b,c) = (∂ f/∂ a + ∂ f/∂ c∂ c/∂ a, ∂ f/∂ b + ∂ f/∂ c∂ b/∂ a) . Set q = a^2 + b^2 + c^2 -1. Now we seek to solve the ideal {q, ∇ f}. We do this both symbolically and numerically. Symbolically we perform an irreducible decomposition of the ideal (the command in ) and obtain a finite list of ideals J_i that are irreducible over R = [a,b,c]. It is typical when working on a computer over an exact field like the rationals that some solutions will not exist over this field, which is why we also compute the numerical decomposition. Still, with a component of our ideal that is not solvable over we may still be able to evaluate |f| at the points 𝒱(J_i) by substituting f into each ring R/J_i to attempt to obtain a value for f at the points in the zero-set 𝒱(J_i). The result will either be a number, or an ideal (typically using fewer variables) that we can attempt to solve. As a caution, we are only interested in the values of f, so we do not consider the values of f at complex critical points (a,b,c). Even more, the polynomial q = a^2+b^2+c^2-1 only vanishes for real points (a,b,c) of norm 1, the other non-real complex roots will not have norm 1 and will not be solutions to our system of equations. A numerical irreducible decomposition may be computed via the command using the Numerical Algebraic Geometry package <cit.> in . This will produce a list L numerical solutions to the ideal {q,∇ f}. Then we can evaluate f at every point of L. We carried out this procedure in Macaulay2 for each invariant f ∈{I_6, I_9, I_12}, and compared the values of |f| at critical points. The maximal values occur precisely at the critical points listed in the statement of the theorem. We include this computation in the ancillary files accompanying the arXiv version of this article. Some of the critical points coincided with the states |D[3,(1,1,1)]⟩ at (<ref>) and |GHZ_333⟩, but they attain value |I_6(|D[3,(1,1,1)]⟩)|=|I_6(|GHZ_333⟩)|= 1/27, which is smaller than the global maximum. The global max is obtained for the Aharonov state |𝒜⟩ (<ref>). We also compared the values of |f| to the critical values found on the other real semi-simple elements (see <Ref>) for each respective invariant. We found that none of them were greater than the maxima we found on the generic semi-simple elements. Here is the argument in more detail. The general semi-simple element |ψ_S⟩ degenerates to each of the other families of semi-simple elements over the complex numbers. However, over the real numbers two additional orbits must be considered corresponding to the states (<ref>) and (<ref>), <cit.>. Using the methods in <Ref> we can also compute these invariants on the other families of real semi-simple elements, which we denote respectively by |ψ_ss,2⟩ and |ψ_ss,3⟩. 
Set normalization factors ζ_6 = 2^5/3^3·7^3, ζ_9 = 2^5√(42)/3^7· 5^7, and ζ_12 = 2^7/21^6. We calculated: [ I_6(|ψ_ss,2⟩) = ζ_6 (3a_1^6+15a_1^2a_2^4+2a_2^6); ≡ ζ_6 (16 a_1^6-24 a_1^4+9 a_1^2+2) ⟨ a_1^2+a_2^2-1⟩ ,; I_9(|ψ_ss,2⟩) = ζ_9 ( -a_1^9+6 a_1^5a_2^4+8 a_1^3a_2^6+3 a_1a_2^8); ≡ ζ_9 ( -4 a_1^3+3 a_1) ⟨ a_1^2+a_2^2-1⟩ ,; I_12(|ψ_ss,2⟩) = ζ_12 (-3 a_1^12-3 a_1^8a_2^4-40 a_1^6a_2^6-57 a_1^4a_2^8-24 a_1^2a_2^10- a_2^12); ≡ ζ_12 (-32 a_1^6+48 a_1^4-18 a_1^2-1) ⟨ a_1^2+a_2^2-1⟩,; Δ_333(|ψ_ss,2⟩) = 0 . ] Note the second case for each invariant we have simplified the expression and replaced all instances a_2 via a_1^2+a_2^2=1, which does not involve any square roots since each invariant has even degree in a_2. Since the resulting reduced expressions only involve a_1 we can use basic 1-variable optimization from calculus. Here is a summary of the critical points and values: [ (0,1) (±1/2, ±√(3/4)), (±√(3/4), ±1/2) max|I_d|; I_6: 2·ζ_6 3·ζ_6 2·ζ_6 3·ζ_6 = .0103660511823777; I_9: ± 1 ·ζ_9 ζ_9 =.00000121376835394049; I_12: -1·ζ_12 -3·ζ_12 -1·ζ_12 3ζ_12 = .00000447729237981977; ] Note that all the absolute values of these invariants take their maximum value at the same points, (a_1,a_2) = (±1/2, ±√(3/4)), however, these maxima are all smaller than the values for the other type of semi-simple elements. In the case of |ψ_ss,3⟩ only I_6 is non-zero, and since a^2 = 1 we have |I_6(|ψ_ss,3⟩)| = |-2^18/5^6· 41^3a^6| = 2^18/5^6· 41^3 = .000243426763976147 , which is smaller than the maximum value of 1/18. As mentioned already the space of degree 12 invariants is spanned by I_6^2 and I_12, so it doesn't make sense geometrically to choose this particular I_12 over any other degree 12 invariant, i.e. we can add in any multiple of I_6^2 and replace I_12 with I_12 + λ I_6. Then we could re-interpret the maximum value computation as follows. Maximizing a single invariant f is finding the unit vector x such that |f(x)| is greatest. Another interpretation of this, which generalizes, is to find the unit vector x that is furthest from the variety 𝒱(f) = {y | f(y) = 0}. So, we replace the question of maximizing I_12 by the question of finding the point(s) of maximal distance from the variety 𝒱(I_6, I_12), since this variety eliminates the ambiguity in the definition of I_12. However, working on semi-simple elements (defined by 3 real parameters a,b,c), and intersecting with the sphere a^2 + b^2 + c^2 = 1 we obtain a finite collection of pairs of antipodal points, and as such every point outside that set maximizes the sum of the distances to these points. § EVALUATING INVARIANTS ON RANDOM STATES In this section, we study the evaluation of the fundamental invariants, as well as the hyperdeterminant, on random real 3-qutrit states. We provide several visualizations of these evaluations and try to extract useful information concerning the distribution of the amount of entanglement (in the sense of the absolute value of invariants) in the space of real 3-qutrits. By the term “random” we mean we have used a random number generator with uniform distribution on the interval [-1,1]. Moreover, since we are only concerned with the semi-simple part (the nilpotent parts do not change the values of these invariants) we sample uniformly a, b and c in [-1,1], and then renormalize. For each invariant, we propose three types of plots. The first type is a histogram representing the distribution of values of the absolute value of the invariant. 
It provides information about which values of the invariant are most present in the space of real 3-qutrit states. The histograms of the hyperdeterminant Δ_333 and the invariants I_6, I_9 and I_12 (respectively <Ref>) present shapes that are different from the one random homogeneous polynomials of the same degree can provide. For this first plot, we use 500000 random real 3-qutrits as our input data. We note that the chances to get (very close to) a maximally entangled state based on the number of samples landing in the last bin in the histogram are as follows. For Δ_333, it is 0.3294 %. For I_6, it is 0.5802%. For I_9, it is 0.4122 %. For I_12, it is 0.2696 %. For maximizing all invariants at the same time, it is 0.3106 %. For the second type of plot, we follow the same approach as Alsina in his thesis <cit.>, by generating a significant number of random states, then ordering them in increasing order to obtain a graphical representation. In this second type of plot, we use 20000 random real 3-qutrits as our input data. See <Ref>. The last type of figure represents the evaluation of the invariant in a spherical plot, generated in . Because we restrict to semi-simple states with real coefficients, we can represent them in the unit sphere defined by a^2 + b^2 + c^2 = 1. Depending on the value of the invariant (we don't compute the absolute value here), the color of a given point indicates if we are close to a minimum/maximum or to zero. See <Ref>. §.§ Random States and Now we provide some observations and comments on the plots. In the histogram <Ref> and curve <Ref> one can see that most of the states have value of hyperdeterminant close to 0. In addition, it appears that the states close to the maximum value are the less probable (according to the histogram). Looking at the plot on the sphere <Ref> one can observe the symmetry of the hyperdeterminant, since the expression of the hyperdeterminant is invariant under permutations of a, b, c. We observe that the dominant color is the cyan, expressing the fact that most of the real random states have hyperdeterminant value close to 0, echoing what was noticed from the histogram. One can guess from the image that the midpoint between two red points is (most of the time) falling either into the cyan or the red color. Which might mean that the mean of two max states could annihilate the hyperdeterminant or give a another max (in the opposite sign). We saw this in the results from <Ref> that there are 12 points that realize the maximum, and their coordinates differ by permutations, and sign changes. It seems that the barycenter of three minimally entangled states (red, in the green/yellow zone) and maximally entangled states (red, in the purple blue zone) is the same: a root of the hyperdeterminant. §.§ Random States and Here are some observations regarding the plots in <Ref>. It appears that the values of the invariant have the same frequency (around 2700 in the histogram <Ref>), except around 0.02 and 0.04, where we can observe a kind of Gaussian with a high peak (above 14000). This feature is also seen from <Ref>, where the curve is a linear function until around 10000 then if almost constant between 10000 and 15000 and then become again a linear function from 15000 to 20000. We speculate that the high frequency of states having almost the same value of this invariant can tell us something about the size of some SLOCC orbits. In the sphere plot <Ref> we notice the symmetry in the plot due to the invariance under permuting a,b,c. 
We also notice some concurrences between roots of one invariant and peeks in the values of this invariant. §.§ Random States and The histogram <Ref> and curve <Ref> indicate that a large number of states have value close to zero, and there is another small peak in the histogram at around .000275, indicating another region where I_9 has more frequent values. In the sphere plot we also note the a,b,c permutation symmetry, and we notice the alternating nature (opposite signs on adjacent sides of the level curve I_9=0) due to the fact that the invariant is of odd degree. §.§ Random states and The histogram <Ref> and curve <Ref> indicate that there are several regions where the invariant I_12 has similar values. In the sphere plot we also note the a,b,c permutation symmetry. As in all the plots we notice that high values of the invariant are infrequent. §.§ Combination of Here we propose a way to combine the measurement of the fundamental invariants. We evaluate the quantity S_I, defined as the sum of absolute values of invariants divided by their respective maximum value S_I = |I_6|/m_I_6 + |I_9|/m_I_9 + |I_12|/m_I_12 , with m_I_6 = 1/18, m_I_9 = √(6)/3888 , m_I_12 = 1/7776 . For the Aharonov state |𝒜⟩, which maximizes all three fundamental invariants, we have: S_I(|𝒜⟩) = 1 + 1 + 1 = 3. The histogram <Ref> shows that large values for this invariant are infrequent, with the most frequent value being approximately 0.7. § EVALUATING INVARIANTS ON ANY STATE Until now in this article we have focused on evaluating invariants on the semi-simple part of a state. If one can compute the Jordan decomposition of a tensor, then the invariants have simple forms, and this is very useful for studying the properties of these invariants. On the other hand, if one is given a state and one does not know, or does not want to compute, its Jordan decomposition there is another way to evaluate the invariants at the given state. Here we explain how to use computations outlined in <cit.> to evaluate the basic invariants via simple matrix operations (power-traces and determinants), and then use the formula in <cit.> to evaluate the hyperdeterminant. Tensors in ^3 ⊗^3 ⊗^3 can be viewed in the simple Lie algebra 𝔢_6 which has grading 𝔢_6= (_3^⊕ 3) ⊕(^3 ⊗^3 ⊗^3) ⊕(^3 ⊗^3 ⊗^3)^* . As such, we can construct the standard matrix representatives of the adjoint operators _Z(Y) = [Z,Y] using the bracket product from the Lie algebra. We have implemented these computations, which we described in more generality in <cit.> in Macaulay2 <cit.> but recognizing that some readers may not be familiar with that system, we provide these matrices here and in a text file in the ancillary files to the arXiv version of this article. 
The matrix K= _Z(·) = [ 0 0 K_0,2; K_1,0 0 0; 0 K_2,1 0 ] has blocks: K_0,2= !([ -1/3z_2,2,2 1/3z_2,2,1 -1/3z_2,2,0 1/3z_2,1,2 -1/3z_2,1,1 1/3z_2,1,0 -1/3z_2,0,2 1/3z_2,0,1 -1/3z_2,0,0 1/3z_1,2,2 -1/3z_1,2,1 1/3z_1,2,0 -1/3z_1,1,2 1/3z_1,1,1 -1/3z_1,1,0 1/3z_1,0,2 -1/3z_1,0,1 1/3z_1,0,0 2/3z_0,2,2 -2/3z_0,2,1 2/3z_0,2,0 -2/3z_0,1,2 2/3z_0,1,1 -2/3z_0,1,0 2/3z_0,0,2 -2/3z_0,0,1 2/3z_0,0,0; -2/3z_2,2,2 2/3z_2,2,1 -2/3z_2,2,0 2/3z_2,1,2 -2/3z_2,1,1 2/3z_2,1,0 -2/3z_2,0,2 2/3z_2,0,1 -2/3z_2,0,0 -1/3z_1,2,2 1/3z_1,2,1 -1/3z_1,2,0 1/3z_1,1,2 -1/3z_1,1,1 1/3z_1,1,0 -1/3z_1,0,2 1/3z_1,0,1 -1/3z_1,0,0 1/3z_0,2,2 -1/3z_0,2,1 1/3z_0,2,0 -1/3z_0,1,2 1/3z_0,1,1 -1/3z_0,1,0 1/3z_0,0,2 -1/3z_0,0,1 1/3z_0,0,0; -1/3z_2,2,2 1/3z_2,2,1 -1/3z_2,2,0 1/3z_2,1,2 -1/3z_2,1,1 1/3z_2,1,0 2/3z_2,0,2 -2/3z_2,0,1 2/3z_2,0,0 1/3z_1,2,2 -1/3z_1,2,1 1/3z_1,2,0 -1/3z_1,1,2 1/3z_1,1,1 -1/3z_1,1,0 -2/3z_1,0,2 2/3z_1,0,1 -2/3z_1,0,0 -1/3z_0,2,2 1/3z_0,2,1 -1/3z_0,2,0 1/3z_0,1,2 -1/3z_0,1,1 1/3z_0,1,0 2/3z_0,0,2 -2/3z_0,0,1 2/3z_0,0,0; -2/3z_2,2,2 2/3z_2,2,1 -2/3z_2,2,0 -1/3z_2,1,2 1/3z_2,1,1 -1/3z_2,1,0 1/3z_2,0,2 -1/3z_2,0,1 1/3z_2,0,0 2/3z_1,2,2 -2/3z_1,2,1 2/3z_1,2,0 1/3z_1,1,2 -1/3z_1,1,1 1/3z_1,1,0 -1/3z_1,0,2 1/3z_1,0,1 -1/3z_1,0,0 -2/3z_0,2,2 2/3z_0,2,1 -2/3z_0,2,0 -1/3z_0,1,2 1/3z_0,1,1 -1/3z_0,1,0 1/3z_0,0,2 -1/3z_0,0,1 1/3z_0,0,0; -1/3z_2,2,2 1/3z_2,2,1 2/3z_2,2,0 1/3z_2,1,2 -1/3z_2,1,1 -2/3z_2,1,0 -1/3z_2,0,2 1/3z_2,0,1 2/3z_2,0,0 1/3z_1,2,2 -1/3z_1,2,1 -2/3z_1,2,0 -1/3z_1,1,2 1/3z_1,1,1 2/3z_1,1,0 1/3z_1,0,2 -1/3z_1,0,1 -2/3z_1,0,0 -1/3z_0,2,2 1/3z_0,2,1 2/3z_0,2,0 1/3z_0,1,2 -1/3z_0,1,1 -2/3z_0,1,0 -1/3z_0,0,2 1/3z_0,0,1 2/3z_0,0,0; -2/3z_2,2,2 -1/3z_2,2,1 1/3z_2,2,0 2/3z_2,1,2 1/3z_2,1,1 -1/3z_2,1,0 -2/3z_2,0,2 -1/3z_2,0,1 1/3z_2,0,0 2/3z_1,2,2 1/3z_1,2,1 -1/3z_1,2,0 -2/3z_1,1,2 -1/3z_1,1,1 1/3z_1,1,0 2/3z_1,0,2 1/3z_1,0,1 -1/3z_1,0,0 -2/3z_0,2,2 -1/3z_0,2,1 1/3z_0,2,0 2/3z_0,1,2 1/3z_0,1,1 -1/3z_0,1,0 -2/3z_0,0,2 -1/3z_0,0,1 1/3z_0,0,0; 0 0 0 0 0 0 0 0 0 -z_0,2,2 z_0,2,1 -z_0,2,0 z_0,1,2 -z_0,1,1 z_0,1,0 -z_0,0,2 z_0,0,1 -z_0,0,0 0 0 0 0 0 0 0 0 0; z_0,2,2 -z_0,2,1 z_0,2,0 -z_0,1,2 z_0,1,1 -z_0,1,0 z_0,0,2 -z_0,0,1 z_0,0,0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; z_1,2,2 -z_1,2,1 z_1,2,0 -z_1,1,2 z_1,1,1 -z_1,1,0 z_1,0,2 -z_1,0,1 z_1,0,0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 -z_2,0,2 z_2,0,1 -z_2,0,0 0 0 0 0 0 0 z_1,0,2 -z_1,0,1 z_1,0,0 0 0 0 0 0 0 -z_0,0,2 z_0,0,1 -z_0,0,0 0 0 0; z_2,0,2 -z_2,0,1 z_2,0,0 0 0 0 0 0 0 -z_1,0,2 z_1,0,1 -z_1,0,0 0 0 0 0 0 0 z_0,0,2 -z_0,0,1 z_0,0,0 0 0 0 0 0 0; z_2,1,2 -z_2,1,1 z_2,1,0 0 0 0 0 0 0 -z_1,1,2 z_1,1,1 -z_1,1,0 0 0 0 0 0 0 z_0,1,2 -z_0,1,1 z_0,1,0 0 0 0 0 0 0; 0 -z_2,2,0 0 0 z_2,1,0 0 0 -z_2,0,0 0 0 z_1,2,0 0 0 -z_1,1,0 0 0 z_1,0,0 0 0 -z_0,2,0 0 0 z_0,1,0 0 0 -z_0,0,0 0; z_2,2,0 0 0 -z_2,1,0 0 0 z_2,0,0 0 0 -z_1,2,0 0 0 z_1,1,0 0 0 -z_1,0,0 0 0 z_0,2,0 0 0 -z_0,1,0 0 0 z_0,0,0 0 0; z_2,2,1 0 0 -z_2,1,1 0 0 z_2,0,1 0 0 -z_1,2,1 0 0 z_1,1,1 0 0 -z_1,0,1 0 0 z_0,2,1 0 0 -z_0,1,1 0 0 z_0,0,1 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 z_1,2,2 -z_1,2,1 z_1,2,0 -z_1,1,2 z_1,1,1 -z_1,1,0 z_1,0,2 -z_1,0,1 z_1,0,0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 z_2,2,2 -z_2,2,1 z_2,2,0 -z_2,1,2 z_2,1,1 -z_2,1,0 z_2,0,2 -z_2,0,1 z_2,0,0; 0 0 0 0 0 0 0 0 0 -z_2,2,2 z_2,2,1 -z_2,2,0 z_2,1,2 -z_2,1,1 z_2,1,0 -z_2,0,2 z_2,0,1 -z_2,0,0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 z_2,1,2 -z_2,1,1 z_2,1,0 0 0 0 0 0 0 -z_1,1,2 z_1,1,1 -z_1,1,0 0 0 0 0 0 0 z_0,1,2 -z_0,1,1 z_0,1,0; 0 0 0 0 0 0 z_2,2,2 -z_2,2,1 z_2,2,0 0 0 0 0 0 0 -z_1,2,2 z_1,2,1 -z_1,2,0 0 0 0 0 0 0 z_0,2,2 -z_0,2,1 z_0,2,0; 0 0 0 -z_2,2,2 
z_2,2,1 -z_2,2,0 0 0 0 0 0 0 z_1,2,2 -z_1,2,1 z_1,2,0 0 0 0 0 0 0 -z_0,2,2 z_0,2,1 -z_0,2,0 0 0 0; 0 0 z_2,2,1 0 0 -z_2,1,1 0 0 z_2,0,1 0 0 -z_1,2,1 0 0 z_1,1,1 0 0 -z_1,0,1 0 0 z_0,2,1 0 0 -z_0,1,1 0 0 z_0,0,1; 0 0 z_2,2,2 0 0 -z_2,1,2 0 0 z_2,0,2 0 0 -z_1,2,2 0 0 z_1,1,2 0 0 -z_1,0,2 0 0 z_0,2,2 0 0 -z_0,1,2 0 0 z_0,0,2; 0 -z_2,2,2 0 0 z_2,1,2 0 0 -z_2,0,2 0 0 z_1,2,2 0 0 -z_1,1,2 0 0 z_1,0,2 0 0 -z_0,2,2 0 0 z_0,1,2 0 0 -z_0,0,2 0 ])   K_1,0= ! ([ -z_0,0,0 0 -z_0,0,0 0 -z_0,0,0 0 -z_1,0,0 -z_2,0,0 0 -z_0,1,0 -z_0,2,0 0 -z_0,0,1 -z_0,0,2 0 0 0 0 0 0 0 0 0 0; -z_0,0,1 0 -z_0,0,1 0 z_0,0,1 -z_0,0,1 -z_1,0,1 -z_2,0,1 0 -z_0,1,1 -z_0,2,1 0 0 0 -z_0,0,2 0 0 0 0 0 0 -z_0,0,0 0 0; -z_0,0,2 0 -z_0,0,2 0 0 z_0,0,2 -z_1,0,2 -z_2,0,2 0 -z_0,1,2 -z_0,2,2 0 0 0 0 0 0 0 0 0 0 0 -z_0,0,0 -z_0,0,1; -z_0,1,0 0 z_0,1,0 -z_0,1,0 -z_0,1,0 0 -z_1,1,0 -z_2,1,0 0 0 0 -z_0,2,0 -z_0,1,1 -z_0,1,2 0 0 0 0 -z_0,0,0 0 0 0 0 0; -z_0,1,1 0 z_0,1,1 -z_0,1,1 z_0,1,1 -z_0,1,1 -z_1,1,1 -z_2,1,1 0 0 0 -z_0,2,1 0 0 -z_0,1,2 0 0 0 -z_0,0,1 0 0 -z_0,1,0 0 0; -z_0,1,2 0 z_0,1,2 -z_0,1,2 0 z_0,1,2 -z_1,1,2 -z_2,1,2 0 0 0 -z_0,2,2 0 0 0 0 0 0 -z_0,0,2 0 0 0 -z_0,1,0 -z_0,1,1; -z_0,2,0 0 0 z_0,2,0 -z_0,2,0 0 -z_1,2,0 -z_2,2,0 0 0 0 0 -z_0,2,1 -z_0,2,2 0 0 0 0 0 -z_0,0,0 -z_0,1,0 0 0 0; -z_0,2,1 0 0 z_0,2,1 z_0,2,1 -z_0,2,1 -z_1,2,1 -z_2,2,1 0 0 0 0 0 0 -z_0,2,2 0 0 0 0 -z_0,0,1 -z_0,1,1 -z_0,2,0 0 0; -z_0,2,2 0 0 z_0,2,2 0 z_0,2,2 -z_1,2,2 -z_2,2,2 0 0 0 0 0 0 0 0 0 0 0 -z_0,0,2 -z_0,1,2 0 -z_0,2,0 -z_0,2,1; z_1,0,0 -z_1,0,0 -z_1,0,0 0 -z_1,0,0 0 0 0 -z_2,0,0 -z_1,1,0 -z_1,2,0 0 -z_1,0,1 -z_1,0,2 0 -z_0,0,0 0 0 0 0 0 0 0 0; z_1,0,1 -z_1,0,1 -z_1,0,1 0 z_1,0,1 -z_1,0,1 0 0 -z_2,0,1 -z_1,1,1 -z_1,2,1 0 0 0 -z_1,0,2 -z_0,0,1 0 0 0 0 0 -z_1,0,0 0 0; z_1,0,2 -z_1,0,2 -z_1,0,2 0 0 z_1,0,2 0 0 -z_2,0,2 -z_1,1,2 -z_1,2,2 0 0 0 0 -z_0,0,2 0 0 0 0 0 0 -z_1,0,0 -z_1,0,1; z_1,1,0 -z_1,1,0 z_1,1,0 -z_1,1,0 -z_1,1,0 0 0 0 -z_2,1,0 0 0 -z_1,2,0 -z_1,1,1 -z_1,1,2 0 -z_0,1,0 0 0 -z_1,0,0 0 0 0 0 0; z_1,1,1 -z_1,1,1 z_1,1,1 -z_1,1,1 z_1,1,1 -z_1,1,1 0 0 -z_2,1,1 0 0 -z_1,2,1 0 0 -z_1,1,2 -z_0,1,1 0 0 -z_1,0,1 0 0 -z_1,1,0 0 0; z_1,1,2 -z_1,1,2 z_1,1,2 -z_1,1,2 0 z_1,1,2 0 0 -z_2,1,2 0 0 -z_1,2,2 0 0 0 -z_0,1,2 0 0 -z_1,0,2 0 0 0 -z_1,1,0 -z_1,1,1; z_1,2,0 -z_1,2,0 0 z_1,2,0 -z_1,2,0 0 0 0 -z_2,2,0 0 0 0 -z_1,2,1 -z_1,2,2 0 -z_0,2,0 0 0 0 -z_1,0,0 -z_1,1,0 0 0 0; z_1,2,1 -z_1,2,1 0 z_1,2,1 z_1,2,1 -z_1,2,1 0 0 -z_2,2,1 0 0 0 0 0 -z_1,2,2 -z_0,2,1 0 0 0 -z_1,0,1 -z_1,1,1 -z_1,2,0 0 0; z_1,2,2 -z_1,2,2 0 z_1,2,2 0 z_1,2,2 0 0 -z_2,2,2 0 0 0 0 0 0 -z_0,2,2 0 0 0 -z_1,0,2 -z_1,1,2 0 -z_1,2,0 -z_1,2,1; 0 z_2,0,0 -z_2,0,0 0 -z_2,0,0 0 0 0 0 -z_2,1,0 -z_2,2,0 0 -z_2,0,1 -z_2,0,2 0 0 -z_0,0,0 -z_1,0,0 0 0 0 0 0 0; 0 z_2,0,1 -z_2,0,1 0 z_2,0,1 -z_2,0,1 0 0 0 -z_2,1,1 -z_2,2,1 0 0 0 -z_2,0,2 0 -z_0,0,1 -z_1,0,1 0 0 0 -z_2,0,0 0 0; 0 z_2,0,2 -z_2,0,2 0 0 z_2,0,2 0 0 0 -z_2,1,2 -z_2,2,2 0 0 0 0 0 -z_0,0,2 -z_1,0,2 0 0 0 0 -z_2,0,0 -z_2,0,1; 0 z_2,1,0 z_2,1,0 -z_2,1,0 -z_2,1,0 0 0 0 0 0 0 -z_2,2,0 -z_2,1,1 -z_2,1,2 0 0 -z_0,1,0 -z_1,1,0 -z_2,0,0 0 0 0 0 0; 0 z_2,1,1 z_2,1,1 -z_2,1,1 z_2,1,1 -z_2,1,1 0 0 0 0 0 -z_2,2,1 0 0 -z_2,1,2 0 -z_0,1,1 -z_1,1,1 -z_2,0,1 0 0 -z_2,1,0 0 0; 0 z_2,1,2 z_2,1,2 -z_2,1,2 0 z_2,1,2 0 0 0 0 0 -z_2,2,2 0 0 0 0 -z_0,1,2 -z_1,1,2 -z_2,0,2 0 0 0 -z_2,1,0 -z_2,1,1; 0 z_2,2,0 0 z_2,2,0 -z_2,2,0 0 0 0 0 0 0 0 -z_2,2,1 -z_2,2,2 0 0 -z_0,2,0 -z_1,2,0 0 -z_2,0,0 -z_2,1,0 0 0 0; 0 z_2,2,1 0 z_2,2,1 z_2,2,1 -z_2,2,1 0 0 0 0 0 0 0 0 -z_2,2,2 0 -z_0,2,1 -z_1,2,1 0 -z_2,0,1 -z_2,1,1 -z_2,2,0 0 0; 0 z_2,2,2 0 z_2,2,2 0 z_2,2,2 0 0 0 0 0 0 0 0 0 0 
-z_0,2,2 -z_1,2,2 0 -z_2,0,2 -z_2,1,2 0 -z_2,2,0 -z_2,2,1 ])   K_2,1= !([ z_1,1,1 -z_1,1,0 0 -z_1,0,1 z_1,0,0 0 0 0 0 -z_0,1,1 z_0,1,0 0 z_0,0,1 -z_0,0,0 0 0 0 0 0 0 0 0 0 0 0 0 0; z_1,1,2 0 -z_1,1,0 -z_1,0,2 0 z_1,0,0 0 0 0 -z_0,1,2 0 z_0,1,0 z_0,0,2 0 -z_0,0,0 0 0 0 0 0 0 0 0 0 0 0 0; 0 z_1,1,2 -z_1,1,1 0 -z_1,0,2 z_1,0,1 0 0 0 0 -z_0,1,2 z_0,1,1 0 z_0,0,2 -z_0,0,1 0 0 0 0 0 0 0 0 0 0 0 0; z_1,2,1 -z_1,2,0 0 0 0 0 -z_1,0,1 z_1,0,0 0 -z_0,2,1 z_0,2,0 0 0 0 0 z_0,0,1 -z_0,0,0 0 0 0 0 0 0 0 0 0 0; z_1,2,2 0 -z_1,2,0 0 0 0 -z_1,0,2 0 z_1,0,0 -z_0,2,2 0 z_0,2,0 0 0 0 z_0,0,2 0 -z_0,0,0 0 0 0 0 0 0 0 0 0; 0 z_1,2,2 -z_1,2,1 0 0 0 0 -z_1,0,2 z_1,0,1 0 -z_0,2,2 z_0,2,1 0 0 0 0 z_0,0,2 -z_0,0,1 0 0 0 0 0 0 0 0 0; 0 0 0 z_1,2,1 -z_1,2,0 0 -z_1,1,1 z_1,1,0 0 0 0 0 -z_0,2,1 z_0,2,0 0 z_0,1,1 -z_0,1,0 0 0 0 0 0 0 0 0 0 0; 0 0 0 z_1,2,2 0 -z_1,2,0 -z_1,1,2 0 z_1,1,0 0 0 0 -z_0,2,2 0 z_0,2,0 z_0,1,2 0 -z_0,1,0 0 0 0 0 0 0 0 0 0; 0 0 0 0 z_1,2,2 -z_1,2,1 0 -z_1,1,2 z_1,1,1 0 0 0 0 -z_0,2,2 z_0,2,1 0 z_0,1,2 -z_0,1,1 0 0 0 0 0 0 0 0 0; z_2,1,1 -z_2,1,0 0 -z_2,0,1 z_2,0,0 0 0 0 0 0 0 0 0 0 0 0 0 0 -z_0,1,1 z_0,1,0 0 z_0,0,1 -z_0,0,0 0 0 0 0; z_2,1,2 0 -z_2,1,0 -z_2,0,2 0 z_2,0,0 0 0 0 0 0 0 0 0 0 0 0 0 -z_0,1,2 0 z_0,1,0 z_0,0,2 0 -z_0,0,0 0 0 0; 0 z_2,1,2 -z_2,1,1 0 -z_2,0,2 z_2,0,1 0 0 0 0 0 0 0 0 0 0 0 0 0 -z_0,1,2 z_0,1,1 0 z_0,0,2 -z_0,0,1 0 0 0; z_2,2,1 -z_2,2,0 0 0 0 0 -z_2,0,1 z_2,0,0 0 0 0 0 0 0 0 0 0 0 -z_0,2,1 z_0,2,0 0 0 0 0 z_0,0,1 -z_0,0,0 0; z_2,2,2 0 -z_2,2,0 0 0 0 -z_2,0,2 0 z_2,0,0 0 0 0 0 0 0 0 0 0 -z_0,2,2 0 z_0,2,0 0 0 0 z_0,0,2 0 -z_0,0,0; 0 z_2,2,2 -z_2,2,1 0 0 0 0 -z_2,0,2 z_2,0,1 0 0 0 0 0 0 0 0 0 0 -z_0,2,2 z_0,2,1 0 0 0 0 z_0,0,2 -z_0,0,1; 0 0 0 z_2,2,1 -z_2,2,0 0 -z_2,1,1 z_2,1,0 0 0 0 0 0 0 0 0 0 0 0 0 0 -z_0,2,1 z_0,2,0 0 z_0,1,1 -z_0,1,0 0; 0 0 0 z_2,2,2 0 -z_2,2,0 -z_2,1,2 0 z_2,1,0 0 0 0 0 0 0 0 0 0 0 0 0 -z_0,2,2 0 z_0,2,0 z_0,1,2 0 -z_0,1,0; 0 0 0 0 z_2,2,2 -z_2,2,1 0 -z_2,1,2 z_2,1,1 0 0 0 0 0 0 0 0 0 0 0 0 0 -z_0,2,2 z_0,2,1 0 z_0,1,2 -z_0,1,1; 0 0 0 0 0 0 0 0 0 z_2,1,1 -z_2,1,0 0 -z_2,0,1 z_2,0,0 0 0 0 0 -z_1,1,1 z_1,1,0 0 z_1,0,1 -z_1,0,0 0 0 0 0; 0 0 0 0 0 0 0 0 0 z_2,1,2 0 -z_2,1,0 -z_2,0,2 0 z_2,0,0 0 0 0 -z_1,1,2 0 z_1,1,0 z_1,0,2 0 -z_1,0,0 0 0 0; 0 0 0 0 0 0 0 0 0 0 z_2,1,2 -z_2,1,1 0 -z_2,0,2 z_2,0,1 0 0 0 0 -z_1,1,2 z_1,1,1 0 z_1,0,2 -z_1,0,1 0 0 0; 0 0 0 0 0 0 0 0 0 z_2,2,1 -z_2,2,0 0 0 0 0 -z_2,0,1 z_2,0,0 0 -z_1,2,1 z_1,2,0 0 0 0 0 z_1,0,1 -z_1,0,0 0; 0 0 0 0 0 0 0 0 0 z_2,2,2 0 -z_2,2,0 0 0 0 -z_2,0,2 0 z_2,0,0 -z_1,2,2 0 z_1,2,0 0 0 0 z_1,0,2 0 -z_1,0,0; 0 0 0 0 0 0 0 0 0 0 z_2,2,2 -z_2,2,1 0 0 0 0 -z_2,0,2 z_2,0,1 0 -z_1,2,2 z_1,2,1 0 0 0 0 z_1,0,2 -z_1,0,1; 0 0 0 0 0 0 0 0 0 0 0 0 z_2,2,1 -z_2,2,0 0 -z_2,1,1 z_2,1,0 0 0 0 0 -z_1,2,1 z_1,2,0 0 z_1,1,1 -z_1,1,0 0; 0 0 0 0 0 0 0 0 0 0 0 0 z_2,2,2 0 -z_2,2,0 -z_2,1,2 0 z_2,1,0 0 0 0 -z_1,2,2 0 z_1,2,0 z_1,1,2 0 -z_1,1,0; 0 0 0 0 0 0 0 0 0 0 0 0 0 z_2,2,2 -z_2,2,1 0 -z_2,1,2 z_2,1,1 0 0 0 0 -z_1,2,2 z_1,2,1 0 z_1,1,2 -z_1,1,1 ]) Note that by construction the trace of powers of K are invariants, and we find that g_6:=(K^6) and g_12 :=(K^12) are non-zero, and algebraically independent. Unfortunately, in degree 9 we find that K^9=0, so we must obtain the degree 9 invariant another way. 
To do this we compute g_9 := (S_9), where S_9 is the Strassen matrix: S_9 = ([ 0 0 0 -z_2,0,0 -z_2,0,1 -z_2,0,2 z_1,0,0 z_1,0,1 z_1,0,2; 0 0 0 -z_2,1,0 -z_2,1,1 -z_2,1,2 z_1,1,0 z_1,1,1 z_1,1,2; 0 0 0 -z_2,2,0 -z_2,2,1 -z_2,2,2 z_1,2,0 z_1,2,1 z_1,2,2; z_2,0,0 z_2,0,1 z_2,0,2 0 0 0 -z_0,0,0 -z_0,0,1 -z_0,0,2; z_2,1,0 z_2,1,1 z_2,1,2 0 0 0 -z_0,1,0 -z_0,1,1 -z_0,1,2; z_2,2,0 z_2,2,1 z_2,2,2 0 0 0 -z_0,2,0 -z_0,2,1 -z_0,2,2; -z_1,0,0 -z_1,0,1 -z_1,0,2 z_0,0,0 z_0,0,1 z_0,0,2 0 0 0; -z_1,1,0 -z_1,1,1 -z_1,1,2 z_0,1,0 z_0,1,1 z_0,1,2 0 0 0; -z_1,2,0 -z_1,2,1 -z_1,2,2 z_0,2,0 z_0,2,1 z_0,2,2 0 0 0 ]) . To match the conventions as in <cit.> we apply the following changes: I_6 := g_6/-108, I_9 := -g_9, I_12 := 1/930·(g_12/108- 41·(g_6/-108)^2) . Then we can use the formula in <cit.>*Theorem 3.1 for the 3× 3× 3 hyperdeterminant: Δ_333= I_6^3I_9^2 - I_6^2I_12^2 + 36I_6I_9^2I_12 + 108I_9^4 - 32I_12^3 . Evaluating the invariants on the forms at <Ref> we find the following values. [ |Ψ_1⟩ |Ψ_2⟩ |D[3,(1,1,1)]⟩ |GHZ_333⟩ |𝒜⟩ |ψ_3⟩ |D_3^2⟩ |D_3^3⟩; |I_6| 0.007245 0.009383 1/27 1/27 1/18 0 0 1/125; |I_9| 7.954· 10^-5 5.712· 10^-5 0 0 √(6)/3888 0 0 0; |I_12| 5.245· 10^-6 4.332· 10^-6 1/23328 = 1/2^5· 3^6 0 1/7776 0 0 1/500000; |Δ_333| 6.243· 10^-16 7.889· 10^-17 0 0 0 0 0 0 ] Note that both |W⟩ and |W_333⟩ at (<ref>) states are nilpotent and hence annihilate all continuous invariants. We repeat what was noted in <Ref> that the Aharonov state |𝒜⟩ (<ref>) maximizes the 3 fundamental invariants, and the states |D[3,(1,1,1)]⟩ (<ref>) and |GHZ_333⟩ (<ref>) are critical points for |I_6|. One may check these values with the code we provide in the ancillary files of the arXiv version of this article. By numerically perturbing each we also checked that none of |Ψ_1⟩, |Ψ_2⟩, |GHZ_333⟩, |ψ_3⟩, |D_3^2⟩, |D_3^3⟩ are critical points of these invariants, however |D[3,(1,1,1)]⟩ is a critical point of Δ_333 and |𝒜⟩ is a critical point of the fundamental invariants. For example, starting from |Ψ_1⟩ we obtain the state |Ψ̃_1⟩ defined below, which has value |Δ_333(|Ψ̃_1⟩)| = 6.9069× 10^-13 that is a near max for Δ_333. |Ψ̃_1⟩= ![ ( 0.039366- 0.023753 i ) |000⟩ +(- 0.111693 - 0.197348 i )|001⟩ +(- 0.122949- 0.208861 i) |002⟩; +(- 0.095285- 0.163765 i) |010⟩ +(- 0.174350- 0.009913 i ) |011⟩ +(- 0.105943 - 0.112670 i)|012⟩; +(- 0.209619 - 0.105469 i) |020⟩ + (0.199823- 0.155145 i ) |021⟩ +( 0.101870 - 0.036988 i )|022⟩; +(- 0.077520 + 0.002270 i )|100⟩ +(- 0.074101 - 0.083099 i ) |101⟩ +(- 0.008464 + 0.125058 i)|102⟩; +(- 0.024819 - 0.278482 i ) |110⟩ +( 0.188483 + 0.174583 i)|111⟩ (- 0.136120 - 0.188541 i) |112⟩; +( 0.043834- 0.109649 i) |120⟩ +(- 0.095310 + 0.006777 i)|121⟩ +(- 0.205072 + 0.188972 i)|122⟩; +(- 0.327100 - 0.115437 i)|200⟩ +(- 0.130088 + 0.126213 i ) |201⟩ + (0.093746 - 0.158298 i) |202⟩; + (0.012939 - 0.056325 i) |210⟩ +(- 0.092719 + 0.003978 i )|211⟩ +(- 0.175832 + 0.043777 i)|212⟩; + (0.082688+ 0.108752 i) |220⟩ + (0.139528+ 0.196955 i) |221⟩ +(- 0.112392- 0.109688 i ) |222⟩. ] § CONCLUSION In this article, we explored the question of maximally entangled states, from the point of view of the evaluation of absolute values of invariants as a measure of entanglement, in the context of 3-qutrit states. We found new states that maximize the absolute value of the hyperdeterminant. We showed that the Aharonov state is a simultaneous maximizer for the 3 fundamental invariants. 
We provided plots of the frequencies of the values of the invariants on semi-simple states, as well as level-set plots on the sphere, which illustrate their behavior. The idea of using algebraic invariants to measure entanglement has been investigated in the past in the case of qubit gour2014symmetric, chen2013proof but not for qutrit. With the development of Noisy Intermediate Scale Quantum computers, one can ask the question of evaluating those algebraic invariants directly on a quantum device that produces quantum states. The evaluation of the Cayley hyperdeterminant on three-qubit states by means of measurement perez2020measuring, bataille2022quantum opens the path to those similar questions for qutrit systems. In future works, it could be interesting to carry out similar analyses for more qubits or qudits. We anticipate some difficulty in the cases where a Jordan decomposition is not known or if there is not a generic semi-simple element that simplifies the computation and study of the invariants. Also, other challenges are anticipated since in most cases the entire invariant ring is not known, and the hyperdeterminant has high degree and can be difficult to compute. In terms of applications, it could be interesting to produce quantum protocols where the maximally entangled three-qutrit states found in this work exhibit different performances (like the Aharonov state and its role in protocols as explained in the introduction). In this respect we plan to investigate three-qutrit quantum games scenario based on Mermin's like inequalities <cit.>. § ACKNOWLEDGEMENTS Holweck and Oeding acknowledge support from the Thomas Jefferson Foundation. Oeding was also supported by CNRS during part of this work.
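As a small numerical complement to the section on evaluating invariants on any state, the sketch below (numpy; the helper names are ours, and this is not the ancillary code mentioned in the paper) assembles the Strassen matrix S_9 exactly as printed there and recovers the tabulated value of |I_9| for the Aharonov state. The degree-6 and degree-12 invariants would analogously follow from the trace powers g_6 = tr(K^6) and g_12 = tr(K^12) of the adjoint matrix given above, which we do not reproduce here.

```python
import numpy as np

def strassen_matrix(z):
    """9x9 Strassen matrix built from the 3x3x3 coefficient array z[i, j, k],
    following the block pattern printed in the text:
        [[ 0,  -Z_2,  Z_1],
         [ Z_2,  0,  -Z_0],
         [-Z_1,  Z_0,  0 ]]   with (Z_i)[j, k] = z[i, j, k]."""
    Z0, Z1, Z2 = z[0], z[1], z[2]
    O = np.zeros((3, 3))
    return np.block([[O, -Z2, Z1],
                     [Z2, O, -Z0],
                     [-Z1, Z0, O]])

def invariant_I9(z):
    """Degree-9 invariant via I_9 = -det(S_9)."""
    return -np.linalg.det(strassen_matrix(z))

# Aharonov state as a coefficient array
z = np.zeros((3, 3, 3))
for (i, j, k), sign in [((0, 1, 2), 1), ((2, 0, 1), 1), ((1, 2, 0), 1),
                        ((0, 2, 1), -1), ((1, 0, 2), -1), ((2, 1, 0), -1)]:
    z[i, j, k] = sign / np.sqrt(6)

print(abs(invariant_I9(z)), np.sqrt(6) / 3888)   # both ~6.3001e-4, as in the table above
```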
http://arxiv.org/abs/2307.02566v1
20230705180505
Planetary evolution with atmospheric photoevaporation II: Fitting the slope of the radius valley by combining boil-off and XUV-driven escape
[ "Lukas Affolter", "Christoph Mordasini", "Apurva V. Oza", "Daria Kubyshkina", "Luca Fossati" ]
astro-ph.EP
[ "astro-ph.EP" ]
Multimodal Temporal Fusion Transformers Are Good Product Demand Forecasters Marcel Worring =========================================================================== § INTRODUCTION The analysis of Kepler satellite data has revealed a dearth of 1.9 R_⊕ planets, often referred to as a valley or gap between the two populations of sub-Neptunes and super-Earths <cit.>. Atmospheric escape models had predicted this dearth as an evaporation valley prior to the observational discovery <cit.>. While the properties of the valley are now observationally quite well known, its origin is still debated. The leading hypotheses are XUV-driven atmospheric photoevaporation <cit.> and atmospheric loss driven by the cooling of the core <cit.>. Alternatively, it might also be a direct imprint of formation, separating dry planets that have formed inside the ice line from volatile-rich ones that have migrated in from beyond the ice line <cit.>, or a consequence of impact-driven atmospheric erosion <cit.>. It might also be caused by primordial gas accretion alone <cit.>. But even within the context of just the atmospheric escape models, the details of the atmospheric mass loss driving the evolution, like the different escape regimes (boil-off, blow-off, and Jeans escape) and limiting physical processes like energy- or radiation-recombination-limited escape are still poorly understood <cit.>. This complexity is not surprising as escaping atmospheres in the Solar System <cit.> are also not yet fully understood in spite of in-situ observations due to the complexity of molecular kinetic interactions which include hydrodynamically escaping atmospheres. It is important to note that the main escape mechanisms in the Solar System are of non-thermal nature while for close-in exoplanets, thermal escape mechanisms dominate. Examples of atmospheric escape in the Solar System include hydrodynamic escape in the Earth's H exosphere <cit.>, on early Earth and Venus <cit.>, and on Titan <cit.> and a blend of Jeans and energy-limited escape on the Kuiper Belt Objects <cit.> as well as plasma-driven escape on Mars <cit.> and Mercury <cit.>. While the latter is also conceivable at a close-in irradiated exoplanet system <cit.>, especially given the correlation in X-ray luminosity <cit.>, here we focus on the former mechanism, thermally driven hydrodynamic escape. We note that there is a growing interest in assessing also the atmospheric escape of young bodies like protoplanets where the atmosphere is thought to be sourced directly from a magma ocean <cit.>. Energy-limited escape has long been used to approximate hydrodynamic escape to first order <cit.>. It has the advantage of simplicity, hiding the complex physics in the evaporation efficiency factor η_XUV. Especially in planet evolution calculations, η_XUV is often assumed to be a constant, in contrast to the results of direct hydrodynamic simulations <cit.>. Nevertheless, the energy-limited approximation can fare rather well compared to some extensive kinetic simulations <cit.> up to a critical threshold in the reduced heating rate. Above this threshold, when the escape transitions to a more Jeans-like regime <cit.>, the energy-limited escape approximation overestimates the escape rate by orders of magnitude <cit.>. A second limitation is that at high EUV fluxes, the escape becomes radiation-recombination-limited rather than energy-limited <cit.>. 
Third, even at intermediate XUV fluxes, the energy-limited approximation is not applicable for planets with a particularly high or low gravitational potential <cit.>. Finally, in the initial evolutionary phase of planets immediately after the dissipation of the natal protoplanetary gas disc, escape of primordial H/He envelopes is driven by a combination of low gravity and high atmospheric temperatures. This leads to very vigorous boil-off <cit.>, which is also neglected in a purely XUV-driven, energy-limited approach. Therefore, in light of recent observations characterising the radius valley <cit.>, we seek to statistically test both the energy-/recombination-limited model and a direct numerical treatment of hydrodynamic escape <cit.> which overcomes the assumptions and limitations of the energy-limited formula, against these observational constraints. We thus here work under the assumption that the valley is a consequence of atmospheric escape. Regarding the observational data, we will make use of the analysis of the California–Kepler Survey in tandem with parallaxes from the Gaia mission <cit.> by <cit.> and <cit.>. The latest Gaia release is essential, as the new parallaxes provide a more accurate determination of planetary radii on a population-wide scale. We also compare to the observations from <cit.> which predates the second GAIA data release, but uses astroseismology to determine accurate stellar parameters. Finally, we also use the valley locus as determined by <cit.> based on short cadence Kepler data. Our paper is organised as follows: in Sect. <ref> we describe our theoretical model and describe the simulation setup in Sect. <ref>. In Sect. <ref>, using a rectangular grid of initial conditions, we compare the locus and slope of the evaporation valley in a radius –period diagram and demonstrate on a population-wide level how the hydrodynamic escape model, but not the energy-/recombination-limited model lead to excellent agreement with the observed slope. The physical reason for this will become clear in Sects. <ref> and <ref> where we study selected individual evolutionary tracks which highlight the differences between the two models, and compare the models on the entire grid, respectively. In Sect. <ref>, we determine the slope of the valley as found with the two evaporation models using now initial conditions derived from the observed Kepler population instead of the rectangular grid. We end our paper with a summary and the conclusions (Sect. <ref>). In Appendix <ref> we address the impact of the post-formation entropy on the valley locus. § MODEL Our approach to modelling planetary evolution under the effect of atmospheric escape is two-fold: on the one hand, we use a simpler semi-analytical energy and radiation-recombination limited escape model of XUV-driven atmospheric photoevaporation. This escape model was already used in the first paper of the series, <cit.>. Here, we slightly update it, as described later in this section. The model is itself based on <cit.>. On the other hand, we now also use the tabulated escape rates obtained with a sophisticated numerical hydrodynamic escape model <cit.>. In both cases, we couple these escape models to our model of temporal planetary interior evolution (cooling and contraction). We then use this model to evolve a population of close-in planets. In both approaches, the interior evolution component of the calculations are performed with the planet evolution model that was described in details in the first paper <cit.>. 
This evolution model simulates the temporal thermodynamical and compositional evolution of the planet by solving the classical 1D spherically symmetric interior structure equations. The planets consist of an iron/silicate core described with the EOS of <cit.> and a H/He envelope described with the EOS of <cit.>. The atmosphere is described with an improved version of the double grey model of <cit.>, as described in <cit.>. This interior structure model yields, together with the mass loss, in particular the radius of the planet as a function of time as well as the remaining H/He envelope mass. In this work, as stated, we implement a new coupling to the hydrodynamic escape model described in <cit.>, which we shall next summarise along with the standard energy-/recombination-limited model that we used previously. §.§ Energy-/recombination-limited escape model Energy-limited (EL) escape assumes that the energy is lost most efficiently by gas expansion to space rather than conduction (downwards) or radiation (upwards to space). An in-depth analysis of the assumptions underlying the EL formalism can be found in <cit.>, while a detailed description of our energy-/recombination-limited escape model is given in . In its simplest form, EL escape is written as Ṁ_EL∼Q(R_a)/U(R_p), where U = G M_p / R_p is the specific binding energy of matter in the potential well of a planet of mass M_p and radius R_p, with G the gravitational constant. R_a is the effective radius at which incoming radiation is absorbed on the planet. In our model, the radius where the EUV radiation is absorbed is calculated as described in <cit.>. The complexity then arises in the heating rate Q, which is assumed to be deposited in the upper atmosphere. For XUV-driven escape, we can thus approximate the energy-limited escape due to upper atmospheric heating as Ṁ_U = η_XUVπ R_p R_a^2 F_XUV/(G M_p K_tide), where we assume that the only energy absorbed by the planetary envelope cross-section π R_a^2 driving escape is stellar X-ray and EUV radiation (collectively: XUV), written as the flux at the planet's position, F_XUV. It is further assumed that only a fraction of the total flux of the star drives mass loss, given by the evaporative efficiency factor η_XUV. This efficiency factor is, as discussed above, problematic as it oversimplifies the heating and cooling specific to each planet. Finally, the K_tide factor corrects for the gravity due to the stellar tide, as described in <cit.>. In contrast to , where a constant η_XUV was assumed, we now use an η_XUV that depends on the escape speed v_esc, as suggested by approximate fits to the mass-loss simulations of <cit.>. The same functional form was also used in <cit.>. For the specific values of the parameters in Eq. <ref>, we use the ones which were found in <cit.> to lead to the best reproduction of the observed Kepler planet population: η_XUV = 0.17 ( v_esc/23 km s^-1)^-0.42 These values are consistent with the ones found in <cit.>. In practice, it is however found that η_XUV remains in the range of 0.1–0.3 because of the small exponent, and because the escape speed does not change by orders of magnitude. Consequently, the differences relative to a simulation with a constant η_XUV are very limited. In particular, the slope of the valley is not affected by this modification and remains virtually identical to the value found in  with a constant η_XUV. In this paper, it was also found that globally fixed higher or lower values of η_XUV do not affect the slope, but rather shift the valley up and down as a whole. 
This behavior is in turn in perfect agreement with the predictions of the analytical model derived in (see Eq. 36 in that work). As described in <cit.> and <cit.>, heating by UV and X-rays are treated separately in the model, using the criterion of <cit.> to identify the dominant process. In the radiation recombination-limited (RR) regime <cit.> that occurs at high EUV fluxes, the escape rate is given by the equilibrium of photoionisation with radiative recombination. In this regime, we closely follow <cit.> to calculate the escape rate. The final escape rate is taken to be the minimum of the energy-limited and the recombination-limited escape rates <cit.>. The fact that the numerical results obtained with this evaporation model can be very well understood with an analytical model based on the energy-limited formula only , shows that the importance of the recombination-limited regime is small for the planets studied here. §.§ Hydrodynamic escape model To estimate atmospheric escape within a more sophisticated direct hydrodynamic approach, we employ the grid of planetary upper atmosphere models presented by <cit.>. The grid consists of roughly 7000 models, each corresponding to a planet, and covers the following parameter space: planetary mass (M_p) between 1 and 39 Earth masses; planetary radius (R_p) between 1 and 10 Earth radii; planetary equilibrium temperature (T_eq) between 300 K and 2000 K; stellar mass between 0.4 and 1.3 solar masses; and XUV flux between 0.4 and 10^4 the one experienced by present Earth because of solar irradiation, with values scaled for the specific stellar masses. The range of orbital separations covered by the grid was set on the basis of the stellar mass and planetary equilibrium temperature, thus stellar radius (R_*) and effective temperature (T_eff). R_* and T_eff were derived considering the range of radii and effective temperatures covered by a star of each considered mass along the main-sequence on the basis of stellar evolutionary tracks <cit.>. Considering all stellar masses, the orbital separation ranges between 0.002 and 1.3 AU. The basic hydrodynamic model used to construct the grid is an updated version of the model developed in <cit.>. It considers a pure hydrogen atmosphere subject to heating and cooling processes, including radiative Ly-α cooling following <cit.> and H_3^+ cooling following <cit.> as well as adiabatic cooling <cit.>. These cooling processes are not explicitly included in the energy-limited approximation but are (incorrectly) supposed to be captured in the (constant) efficiency factor η_XUV introduced in Sect. <ref>. The model numerically solves a full set of hydrodynamic equations, including energy and momentum conservation laws and continuity equations accounting for the full atmospheric hydrogen chemistry comprising dissociation, recombination, and ionisation. The complete list of chemical reactions is given in <cit.>. The model does not account for the presence of metals, which could induce additional heating and/or cooling that have been shown to be effective for ultra-hot primary <cit.>, secondary <cit.> and rock vapour <cit.> atmospheres. However, conditions at planets in the grid are such that condensation may occur in the lower atmosphere, limiting the penetration of heavy elements in the upper atmosphere <cit.>. 
At the early evolution stages, when the extreme atmospheric escape takes place, the small amount of metals in a hydrogen-dominated upper atmosphere will be dragged away by the hydrogen outflow and will have little impact on the atmospheric mass-loss rates <cit.>, while at the later stages, metal abundances can be fractionated by more moderate hydrodynamic escape or non-thermal escape processes <cit.>. Even more importantly, one has also to consider that metal abundances may vary significantly between individual planets, already from the formation stage. Therefore, our current knowledge remains limited regarding the precise composition of sub-Neptunes, and the uncertainty in metal abundances does not enable one to place reasonable assumptions valid for planets spanning over a wide parameter space. Furthermore, the possibility to include metals into consideration is limited from the practical numerical side; the computational costs of the hydrodynamic models allowing for a proper metal treatment (i.e., including a detailed chemical framework and a photoionisation treatment including the explicit calculation of the energy levels populations) are at the moment still too high for computing a large and dense grid of mass loss rates similar to that used in the present study. The boundaries of the model are the photospheric radius of the planet (lower boundary) and its Roche lobe (upper boundary) R_roche = a [ M_p/3 (M_p + M_*)]^1/3, where a is the planet's orbital distance and M_* the stellar mass. The model accounts for stellar heating in two wavelength intervals: EUV and X-ray ranges, assuming that the integrated flux of each range is emitted at a single wavelength (60 and 5 nm, respectively). The heating is included into the energy conservation equation as an external source given by Q_m = 1/2η^*_XUV σ_m n_H∫^π/2+arccos(1/r)_0J_m(r,θ) ·sin(θ) dθ, where m stands for either X-ray or EUV radiation, σ_m is an absorption cross-section of hydrogen for the specific wavelength, n_H is the hydrogen (H + H_2) density, r is the distance to the planetary centre, and J_m(r,θ) is a function with spherical coordinates describing the spacial variations of the XUV flux due to atmospheric absorption J_m(r, θ) = exp(-∫^R_roche_rσ_m n_H(ξ) √(ξ^2 - r^2 sin(θ)) ξ dξ), which is approximately equivalent to the optical depth at θ = 0. We note that the η^*_XUV in Equation <ref> for the hydrodynamic model is not the same as η_XUV given by Equation <ref> for the energy-limited model. This is because η^*_XUV does not account for any additional cooling processes or other physical mechanisms supposed to be captured by (or hidden in) η_XUV, as they are included self-consistently in the hydrodynamic model <cit.>. Instead, η^*_XUV accounts solely for the efficiency of the photoionisation heating, and is not an overall evaporation efficiency as η_XUV in the energy-limited model. Given that a self-consistent calculation of η^*_XUV is currently too time-consuming for computing a large grid, it was set to be equal to a constant value of 15%, which is a reasonable assumption for the considered M_p range <cit.>. We note that despite this compelled simplification, the hydrodynamic code remains a superior model relative to the energy-limited approximation, as the latter approach relies on many more assumptions than just a constant heating efficiency and the absence of the explicitly modelled radiative cooling processes. 
In particular, it omits the contribution from the thermal energy of the planet atmosphere and the stellar VIS/IR irradiation (as discussed below) and makes crude assumptions on the atmospheric structure, which is calculated self-consistently by the hydrodynamic model <cit.>. Typically, for hydrodynamic planetary/stellar wind models, the initially subsonic outflow (we set the bulk velocity V_bulk equal to zero at the lower boundary) is accelerated to supersonic velocities before the flow reaches the Roche lobe. Within our grid of models, it happens typically at a distance of a few planetary radii. To ensure that the atmospheres of planets in the grid remain collisional throughout the simulation, we calculate the Knudsen number for each point of the atmospheric profiles a posteriori. The atmospheric mass-loss rate is finally defined as the flow through the sphere of radius r in a unit of time (m_Hn_H(r)V_bulk(r)) multiplied by the surface of this sphere. As the outflow is continuous, for the computation of the mass-loss rate the specific distance r is not relevant (except for the small region at the lower boundary), but for convenience it is taken at the Roche radius. The predictions of our model are comparable to those made by other hydrodynamic models, including the more sophisticated ones (such as those calculating self-consistently the heating efficiency in various approaches as and , and models accounting for the detailed spectral energy distribution as , or 3D geometry as ). Further details about the physical model and the grid, including the comparison to observations and to the results of other literature models, can be found in <cit.>. By construction, the hydrodynamic model accounts for Jeans escape, XUV hydrodynamic escape, and boil-off escape regimes. This is, as we shall see below, of central importance for the results for the valley found here. The model also transitions smoothly from one escape regime to the other depending on the system parameters. To ease distinguishing between the latter two regimes, it is convenient to employ the restricted Jeans parameter <cit.>, which is a combination of the physical planetary parameters and is defined as Λ = G M_p m_H/k_B T_eq R_p, with m_H the mass of the hydrogen atom and k_B the Boltzmann constant. Planets with a Λ smaller than 15–35 are in the boil-off regime, where the escape is driven by the atmospheric thermal energy and low planetary gravity <cit.>. The specific critical value depends on the stellar mass and orbital separation. Such planets are typically just released from the protoplanetary gas disk. A Λ of 20 is for an isothermal gas identical to the condition derived by <cit.> for the occurrence of boil-off, namely that the planet radius is larger than about 0.1 times the Bondi radius. For the boil-off regime, it is crucial that the hydrodynamic model also accounts for the stellar continuum (dominated by VIS/IR) heating that can drive escape, in contrast to the energy-/radiation-limited model that is driven by XUV heating only. The continuum heating is implicitly included by fixing the temperature at the lower boundary equal to T_eq. We have verified that the photospheric temperatures of our model planets as predicted by the interior structure model is always very close to T_eq. The largest difference (a temperature that is about 4% higher) occurs for the most massive planets we model at the beginning of the simulations, which is due to the contribution of the intrinsic luminosity. 
Overall, the difference is, however, much smaller and generally less than 1%. Studies comparing the hydrodynamical model used here with the energy-limited escape have found the following <cit.>: For planets with Λ less than about 20, the energy-limited formalism on average severely underestimates mass-loss rates, because it lacks the continuum (VIS/IR) heating. For higher Λ, the energy-limited rate provides an upper limit on the mass-loss rate, with significant overestimations possible depending on a planet's gravitational potential. Model outputs for each planet in the grid are profiles of the main atmospheric parameters, which allow deriving the effective radii of the stellar XUV absorption, and atmospheric escape rates. To finally obtain the mass-loss rates for any planet during its evolution, we linearly interpolate among the grid points. Our interpolation scheme is simpler than the one in <cit.>, but allows to fully exploit all grid data including the borders of the tabulated regions. §.§ Stellar XUV luminosity as a function of time A modification of our theoretical model relative to is the usage of a more recent description of the stellar XUV luminosity as a function of time. In , we used the data of <cit.>. In the updated model, we use instead <cit.>. These authors compiled observationally derived relations extracted from several studies <cit.> of the X-ray luminosity of stars as a function of time and stellar type. We use their mean X-ray luminosity as function of time L_X(t) and convert it into the extreme UV-luminosity L_EUV(t) with the relation of <cit.>. Figure <ref> shows L_X, L_EUV, and the bolometric luminosity L_bol of a 1 M_⊙ star in our model. At young ages, the XUV luminosity is on the order of 10^-3 the bolometric luminosity, as expected <cit.>, and approximately constant in time, except for a certain drop at around the time (40 Myr) when the star reaches the main sequence. Afterwards, it decreases approximately following a power law. At 4.6 Gyr, the predicted L_X is compatible with the Sun's measured L_X at its activity maximum. We compare our model with the one of <cit.>. They calculated the L_X predicted for stars on the 10th, 50th, and 90th percentiles of the stellar rotational distribution. We see that our relation is similar to their 50th percentile case. At the earliest epochs, our L_X is about a factor 2 lower than theirs, while at high ages, the fall off is a bit faster in <cit.>. The impact of varying the stellar XUV on the locus of the evaporation valley was studied in (see also ). § PROCEDURE We use the model to evolve a large number of close-in low-mass planets. To set their initial (post-formation) properties, we follow two approaches: a rectangular grid, and initial conditions derived from the planetary population detected of the Kepler satellite. The rectangular grid allows us to see clearly (but also under idealised assumptions) the population-wide imprints of the two evaporation models. The initial conditions derived from the Kepler population give us an understanding if these imprints remain visible also when the initial conditions are more complex, in particular when there is a spread in the post-formation envelope mass at a fixed core mass. 
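Before turning to the initial conditions, a minimal sketch may help to make the coupling to the tabulated escape rates described above concrete: at every time step, the current planet parameters are used to interpolate linearly among the grid points of hydrodynamic mass-loss rates. The grid axes, their sampling, and the placeholder table below are illustrative assumptions (at fixed stellar mass), not the published tables; a production implementation would load the actual grid values and treat the borders of the tabulated regions as discussed above.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative axes loosely following the parameter ranges quoted for the grid
# (planet mass, radius, equilibrium temperature, XUV flux), at fixed stellar mass.
m_p      = np.linspace(1.0, 39.0, 20)       # [M_earth]
r_p      = np.linspace(1.0, 10.0, 19)       # [R_earth]
t_eq     = np.linspace(300.0, 2000.0, 18)   # [K]
log_fxuv = np.linspace(-0.4, 4.0, 23)       # log10(F_XUV / F_XUV,earth)

# Placeholder table of log10 mass-loss rates [g/s]; a real run would fill this
# from the tabulated hydrodynamic models instead of random numbers.
rng = np.random.default_rng(1)
log_mdot = rng.uniform(8.0, 13.0, size=(m_p.size, r_p.size, t_eq.size, log_fxuv.size))

interp = RegularGridInterpolator((m_p, r_p, t_eq, log_fxuv), log_mdot,
                                 method="linear", bounds_error=False,
                                 fill_value=None)  # extrapolate at the borders

def escape_rate(mp, rp, teq, fxuv):
    """Interpolated mass-loss rate [g/s] for the current planet parameters."""
    point = [(mp, rp, teq, np.log10(fxuv))]
    return float(10.0 ** interp(point)[0])

print(escape_rate(5.0, 3.0, 900.0, 150.0))
```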
§.§ Rectangular grid of models and initial conditions Our first approach is the same as in : for the two escape models, we simulated a rectangular grid of 6000 planets each, equally spaced in semi-major axis a and core mass M_core ranging from 0.01 to 0.6 AU in 0.01 AU increments and from 1 to 20 M_⊕ in 0.2 M_⊕ increments. We let these planets evolve from 3 Myr (a typical lifetime of protoplanetary disks, ) to 10 Gyr around a 1 M_⊙ star. With this data, we can analyse the slope and temporal evolution of the evaporation valley. The initial (post-formation) H/He envelope mass M_env,0 was estimated as M_env,0 = 0.024 M_⊕ ( M_core/1 M_⊕)^2.23( a/1 AU)^0.72. As described in , this relation was found as a typical mean value from planet formation simulations by <cit.> based on the core accretion paradigm which find the envelope mass similarly as in <cit.>, but include many additional effects like orbital migration and disk evolution. The results of employing the XUV-driven energy-/recombination-limited escape model indicate, however, that using a different initial envelope mass within plausible ranges should not strongly influence the location of the valley. An additional argument for this weak dependency in the context of boil-off is that if the initial envelope mass is larger, then the radius is larger and thus the boil-off escape is larger as well. Therefore, at the end of the boil-off phase, an initially larger and an initially smaller planet end up with a similar radius because the larger planet had a stronger escape compared to the smaller one <cit.>. We nevertheless investigate the impact of the initial envelope mass further in Sect. <ref>. The initial intrinsic luminosity of the planets was also estimated as a function of core and envelope mass based on the same formation simulations. This is the same approach as in . In Appendix <ref> we study the impact of different post-formation luminosities/entropies, finding an only weak influence on the valley locus. Regarding their bulk composition, all cores have an Earth-like composition with a 2:1 silicate-to-iron mass fraction. Such a composition is in agreement with the composition of planets below the valley <cit.>. §.§ Initial conditions derived from Kepler survey The initial conditions on the rectangular grid are not tuned to reproduce the observed Kepler planet population, which makes the comparison with observations less straightforward. The grid also assumes in an idealised way that there is a unique value of the initial envelope mass as a function of core mass and orbital distance. Formation models <cit.>, but also inference analyses of the post-formation properties of the Kepler planets <cit.>, indicate in contrast a spread in post-formation envelope masses. Our second approach for the initial conditions is thus to adopt distributions for the orbital period, core mass, and initial envelope mass that have been derived from fitting the observed properties of the close-in low-mass population found by the Kepler satellite through inference analyses <cit.>. In these works, the core and initial envelope mass distributions that were derived lead — after evolution under the effects of core-powered mass loss and photoevaporation, respectively — to a synthetic population that agrees with the observed period-radius distribution (the CKS data, ). The period distribution is also derived from the Kepler observations and given as <cit.> dN/dlog P∝ P^2, P < 8 days, constant, P > 8 days. 
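As an illustration of how these initial conditions might be drawn in practice, the sketch below evaluates the post-formation envelope-mass relation above on the rectangular grid and samples orbital periods from the broken power law just given by inverting its cumulative distribution on a fine grid in logP. The period limits of 1 and 100 days are assumptions chosen only for illustration and are not taken from the inference analyses.

```python
import numpy as np

def initial_envelope_mass(m_core, a):
    """Post-formation H/He mass [M_earth] from the relation above (m_core in M_earth, a in AU)."""
    return 0.024 * m_core**2.23 * a**0.72

# Rectangular grid: core masses of 1-20 M_earth (0.2 M_earth steps)
# and semi-major axes of 0.01-0.6 AU (0.01 AU steps)
cores = np.arange(1.0, 20.0 + 1e-9, 0.2)
axes = np.arange(0.01, 0.60 + 1e-9, 0.01)
grid_core, grid_a = np.meshgrid(cores, axes, indexing="ij")
grid_menv0 = initial_envelope_mass(grid_core, grid_a)

# Kepler-derived periods: dN/dlogP ~ P^2 below 8 days, flat above
def sample_periods(n, p_min=1.0, p_max=100.0, p_break=8.0, seed=0):
    logp = np.linspace(np.log10(p_min), np.log10(p_max), 4000)
    p = 10.0 ** logp
    pdf = np.where(p < p_break, (p / p_break) ** 2, 1.0)  # continuous at the break
    cdf = np.cumsum(pdf)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])
    u = np.random.default_rng(seed).uniform(size=n)
    return 10.0 ** np.interp(u, cdf, logp)

periods = sample_periods(37000)
```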
The core mass distribution we adopt is the one inferred in <cit.> in their preferred Model III. It peaks at a core mass of about 4 M_⊕, with a tail extending to about 100 M_⊕. The post-formation envelope mass fraction is also taken from this source. It is a distribution peaking at an envelope mass fraction of about 4%, but covering a significant range. In contrast to the theoretical relation (Eq. <ref>), the envelope mass fraction is here an independent quantity. Both these distributions are shown in the left and middle panel of Fig. <ref>. With these initial conditions, we calculated the evolution of 37242 and 37416 planets from 3 Myr to 10 Gyr for the energy-limited and hydrodynamic evaporation model, respectively. To understand if the imprints of the different evaporation models remain observable, we apply a simple synthetic detection bias of the Kepler satellite to the model output. In this way, we get the detectable synthetic population. For each synthetic planet, we compute the detectability as a function of planet size and orbital period. It has two components. The first component is the geometric transit probability p_tr. For it, following <cit.>, we use that a randomly inclined planet on a circular orbit transits with an impact parameter b < 0.9 with a probability p_tr = 0.9 R_⊙/a, where R_⊙ is the radius of the Sun (we only consider 1 M_⊙ stars in this paper). The second component is the detection probability p_det, which depends mainly on the S/N of the observations. Here we take the average p_det also from <cit.>, which is based on the transit injection and recovery study of <cit.>. This is shown in the right panel of Fig. <ref>. The total probability is then the product of the two probabilities, p_tr× p_det. By comparing p_tr× p_det with a random number drawn from the standard uniform deviate, we obtain the detectable synthetic planets. To have enough detectable synthetic planets despite the low detection probability of the transit method, we oversample 100 times, i.e. we run through the list of synthetic planets 100 times, obtaining each time different detectable planets. This means that the same planet can end up several times in the final list of detectable planets. However, for the statistical analysis at hand, this is not an issue. In this way, we end up with 114634 and 113465 detectable synthetic planets for the energy-limited and hydrodynamic model, respectively. The initial condition distributions derived in <cit.> lead with their relative theoretical forward models to synthetic populations agreeing with the Kepler data. However, the distributions that the two works infer differ from each other. This reflects that these `fitting' initial conditions are also a function of the forward model. Here, we use again another forward model (or even two, counting the two different evaporation models). Thus, we cannot expect that we will find with our forward model in the end a detectable subpopulation agreeing equally well with the actual Kepler population. As we will see in Sect. <ref>, our synthetic detectable populations do, however, still share key properties with the observed population, like for example a bimodal radius distribution. We could, in principle, conduct a similar hierarchical inference process as <cit.> to derive our own fitting initial conditions. However, practically this would be difficult because of the much higher computational cost of our forward model compared to theirs. 
It would also be beyond the scope of this paper, which addresses the comparison of two evaporation models. §.§ Quantifying the locus of the valley Following the approach of several previous papers <cit.>, we quantify the valley locus with a power law. Normalising at an orbital period of 10 days, we express for the rectangular grid simulations the planetary radius R_p of the largest bare core (i.e., most massive planet which has completely lost its H/He envelope) at a given orbital period p as R_b(p) = R̃_b( p/10 days)^α where R̃_b is the value at 10 days and the slope is α = dlogR_p/dlogp. Using this definition, we are consistent with previous works on the same topic. A power law dependency has also been analytically found by several theoretical works, for example by <cit.> and for photoevaporation or by <cit.> for core-driven escape. These works show that the slope of the valley is a good indicator for the dependence of the evaporation rate on the planets' distance from the host star and therefore the underlying evaporation mechanism. It is important to specify that for the results obtained with the rectangular grid of initial conditions, R_b(p) is the lower boundary of the observed valley. In contrast, observational studies, but also our theoretical results obtained for initial conditions derived from the observed Kepler planets, report the middle of the valley. For the initial conditions derived from the Kepler survey, the absence of a completely empty gap (or a well-defined largest bare core as a function of period as in the rectangular grid) means that we cannot as simply quantify the valley position as for the rectangular grid. On the other hand, compared to the observed population, where various statistical methods must be used like support vector machines <cit.> to determine the valley position and its slope, we are in the advantageous position to have a very large data set. We have therefore proceeded in the following simple way to determine the middle of the valley: We have binned the planets according to orbital period, with a bin width of 0.2 dex in logP, with partially overlapping bins at logP = 0.6, 0.7, …1.7, 1.8 (or 1.9 for the unbiased case). For each bin, we represent the radius distribution with a kernel density estimate, and get the position of the gap centre (local minimum in the radius distribution) from the zero point of the derivative. This procedure is similar to the one employed by <cit.>. Finally, to determine R̃_b and α from the simulations, we simply make, as in , a least-square power law fit to the largest bare core radius (for the rectangular grid) respectively gap centre (for the Kepler initial conditions) as a function of orbital period at a given age, typically at 5 Gyr. § RESULTS FOR THE RECTANGULAR GRID We present our planet evolution simulations by first examining the locus of the valley in the period–radius diagrams as predicted by the two evaporation models for the rectangular grid of initial conditions in Sect. <ref>. By performing case studies on individual planets in Sect. <ref>, we are then able to identify the cause of the distinct valley slopes. We then extend this analysis to the entire grid (Sect. <ref>). §.§ Period-radius diagrams Figure <ref> shows the simulated grid in the orbital period–transit radius plane for the two escape models at an age of 5 Gyr. Clearly visible in both is the evaporation valley running diagonally downward, i.e. 
the gap in radius between the super-Earth planets whose envelopes have fully evaporated and the sub-Neptunes which still have an envelope. For the planets still possessing H/He, the colour code shows the fraction of the initial H/He envelope that has evaporated. The closer to the valley, the higher this fraction, as expected. With the divide being very distinct, a good fit of the slope can be achieved. We note that there are some quasi-regular patterns emerging in the dots above the valley, and in the hydrodynamic model, some linear patterns are visible. These patterns are simply the result of the regular grid, and, in the case of the hydrodynamic simulations, also consequences of the interpolation in the grid of tabulated evaporation rates. For the hydrodynamic model, this also translates into the largest bare core as a function of period not being a completely smooth power-law function, as it is for the energy-/recombination-limited model. However, also for the hydrodynamic model, the simulations of the individual planets presented below exhibit clear, physically understandable outcomes that are not dominated by interpolation artifacts. This, together with the small scale of the non-smoothness of R_b, means that the interpolation does not significantly affect the quantities we are interested in (R̃_b and α). In both models, there is a region with an absence of (sub-)Neptunian planets (i.e., planets with H/He) at periods smaller than 2 or 3 days. This corresponds to the evaporation (also called the sub-Neptunian) desert <cit.>. It should be noted that the specific period marking the onset of the desert in our simulations shown here depends also on the fact that the most massive core we simulate has a mass of 20 M_⊕. The minimum period is thus model dependent. To quantify the valley locus, we only considered planets with periods at or above the smallest period for which there are still planets with an envelope. The comparison of the two panels shows that while both models lead to a very similar position of the valley at 10 days, R̃_b, of about 1.8 to 1.9 R_⊕, they differ in the slope, i.e. in α, as is directly visually apparent. In the energy-/recombination-limited model, the slope is clearly steeper than in the hydrodynamic model. This is one of the key outcomes of this study. In both panels, the magenta lines show the aforementioned power-law fit to the largest, numerically found bare core as a function of period. It makes the shallower slope in the hydrodynamic case even more apparent. In Table <ref>, we report the parameters of these fits, R̃_b and α, together with the results of the observational studies of <cit.>, <cit.>, and <cit.>. The data confirms the visual impression: the difference in R̃_b (1.84 and 1.88 R_⊕ in the energy-/recombination-limited and hydrodynamic model, respectively) is very small. This difference is comparable to, or smaller than, the observational error bars, and thus not significant. Our theoretical values for R̃_b are for the bottom of the valley, while the observational studies are for the middle. Correcting for this difference would shift the theoretical R̃_b to larger values by about 0.2–0.3 R_⊕ <cit.>, i.e. to a radius between 2.0 and 2.1 R_⊕. This is larger than the nominal observational values of about 1.9 R_⊕. This could be an indication that both models overestimate escape. A possible explanation for this could be that the interior structure and the escape models do not account for metals. 
Because the planets considered here orbit late-type stars, metals do not cause much heating, but may lead to significant cooling, which would lower the escape <cit.>. A metal-enriched instead of a pure H/He envelope would also affect the interior structure model via the equation of state and the opacities. The impact of enriched envelopes was studied in . It was found that a gas with a metal mass fraction of 10% and 30% would lead to a downward shift of the valley of about 0.1 and 0.2 R_⊕, respectively. This would bring the theoretical predictions into better agreement with the observations. Such enrichments could be a consequence of the formation process <cit.> or result from evolutionary magma-hydrogen interactions at the core-envelope interface <cit.>. In contrast to R̃_b, the values of the slope α differ clearly between the two theoretical models: in the energy-/recombination-limited model, a slope of -0.18 is found, while for the hydrodynamic model, the slope is -0.11. This corresponds to a fractional difference of about 50%, a difference that is clearly observationally relevant, given the error bars reported in the observational studies. The α found in the updated numerical energy-/recombination-limited model used here is identical to the one derived analytically for a purely energy-limited model with a constant η_XUV in . This shows that the introduction of an escape-velocity-dependent η_XUV does not significantly affect the results. Comparing with the observationally inferred values of α, which are -0.09^+0.02_-0.04 in <cit.> and -0.11 ± 0.02 in both <cit.> and <cit.>, we see that the energy-/recombination-limited model yields, with -0.18, a slope that is clearly too steep, by more than two or three sigma. The hydrodynamic model in contrast predicts a slope that is in excellent agreement with these observations. The finding that a simple energy-limited model yields a steeper slope than a hydrodynamic model is not new: <cit.> had already found an α = -0.25 for their simple energy-limited model with a constant efficiency factor, while their full hydrodynamic evaporation model <cit.> without these assumptions yielded α = -0.16. Both these values are, however, steeper than the observed values, in contrast to our new findings with the <cit.> hydrodynamic model. Another theoretical energy-/recombination-limited XUV photoevaporation model <cit.> also found a steeper slope than observed (α = -0.15). This makes it worthwhile to further investigate the result found here. The <cit.> model predates studies like <cit.>, which found the boil-off phase to be the first escape regime occurring after the dissipation of the disk. Instead, their initial evaporation regime is X-ray driven, in contrast to our hydrodynamic model where boil-off is included. We also report here the masses of the largest bare core as a function of period in the two models, found, like the radii, with least-square power-law fits. This information is of interest for radial velocity studies. One finds for the energy-/recombination-limited model M_b,enRR(p) = 9.6 ( p/10 days)^-0.64 M_⊕ and for the hydrodynamic model M_b,hyd(p) = 10.6 (p/10 days)^-0.41 M_⊕. These values essentially reflect the mass–radius relation of silicate (MgSiO_3)-iron planets. For comparison, the analytical model of  for an energy-limited model predicts that the exponent should be -0.66. This is very close to the value obtained numerically in the present paper (-0.64). In the hydrodynamic model, the exponent is, as expected, significantly shallower (-0.41). 
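The fits reported above amount to a straight-line fit in log-log space. A minimal sketch of this step is shown below; the input arrays are mock values standing in for the largest-bare-core radii extracted from the grid, not actual simulation output.

```python
import numpy as np

def fit_valley(periods_days, radii_rearth):
    """Least-squares power-law fit R_b(p) = R10 * (p / 10 d)**alpha.
    Returns (R10 [R_earth], alpha)."""
    x = np.log10(np.asarray(periods_days) / 10.0)
    y = np.log10(np.asarray(radii_rearth))
    alpha, log_r10 = np.polyfit(x, y, 1)
    return 10.0 ** log_r10, alpha

# Mock input roughly mimicking the hydrodynamic-model result
p_mock = np.array([2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
r_mock = 1.88 * (p_mock / 10.0) ** (-0.11)
print(fit_valley(p_mock, r_mock))   # -> (~1.88, ~-0.11)
```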
§.§ Evolution of specific cases The fact that R̃_b is similar in the two models while the valley is clearly shallower in the hydrodynamic model means that the hydrodynamic model does not generate bare cores as large inside the 10-day period line as the energy-/recombination-limited model does. The opposite is true outside the 10-day period. This is quickly verified in Fig. <ref>. Therefore, to understand the reasons for the different slopes, it is helpful to study two cases of individual planets. The first one is a distant planet at a period of about 133 days, which remains a sub-Neptune in the energy-/recombination-limited model, but becomes a super-Earth in the hydrodynamic model (Sect. <ref>). The second one is a close planet at 3 days, which becomes a super-Earth in the energy-/recombination-limited model, but stays a sub-Neptune in the hydrodynamic model (Sect. <ref>). In other words, we study cases that end up on opposite sides of the valley in the two models in such a way that the slope is shallower in the hydrodynamic model. We select cases that are close to or at the upper boundary of the super-Earths for the two distances. In Fig. <ref>, these individual planets are shown with squares (black for the close case, lilac for the distant case). §.§.§ Distant planet Figure <ref> shows the temporal evolution of the evaporation rate, restricted Jeans parameter, transit radius, and total mass of a planet at an orbital distance of 0.51 AU (period of 133 days) with an initial total mass of approximately 3.86 M_⊕. The planet consists of a 3.6 M_⊕ silicate-iron core and a 0.26 M_⊕ H/He envelope. The important result here is that in the hydrodynamic model, the planet completely loses its H/He envelope at about 2.5 Gyr, transforming the planet into a super-Earth, whereas in the energy-/recombination-limited model, the planet keeps about 60% of the initial envelope until the end of the simulations and thus remains a sub-Neptune. The final radii (at 5 Gyr) in the two cases are about 1.4 R_⊕ and 2.8 R_⊕ (the former equal to the core radius), typical for planets below and above the evaporation valley. The reason for this different evolution can be seen in the evaporation rate (top left panel in Fig. <ref>). We see that the hydrodynamic model initially predicts a much higher evaporation rate. At the very beginning, the evaporation rate is more than 2 orders of magnitude higher. It remains larger than the one in the energy-/recombination-limited model up to about 40 Myr. The reason for the higher evaporation rate can be seen in the Jeans-escape parameter (top right panel): we see that initially, Λ is about 12. This puts the planet firmly into the boil-off regime <cit.>, which leads to the very high escape rates in the hydrodynamic model. During this time, the stellar continuum irradiation (mainly in VIS/IR), rather than the XUV irradiation, catalyses the escape. A tell-tale sign of this is the local maximum in the escape rate in the hydrodynamic model at about 30 Myr, which is caused by the local maximum of the star's bolometric luminosity (see Fig. <ref>). During boil-off, the planet loses about half its H/He envelope in the first 3 Myr. The escape gradually transitions into XUV-driven escape at about 30–50 Myr. By this time, the radius of the planet has shrunk to a size that is comparable to one tenth of the Bondi radius (which is about 40 R_⊕), as predicted by <cit.>. By then, Λ has increased to about 30. 
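The numbers quoted above can be checked directly from the definition of the restricted Jeans parameter. The short sketch below does this for parameters similar to the distant planet; the zero-albedo equilibrium-temperature relation and the post-formation radius of 6 R_⊕ are assumed, illustrative values rather than quantities taken from the simulation output.

```python
import numpy as np

G, K_B, M_H = 6.674e-11, 1.381e-23, 1.673e-27   # SI units
M_EARTH, R_EARTH = 5.972e24, 6.371e6             # kg, m

def t_eq(a_au, albedo=0.0):
    """Equilibrium temperature around a Sun-like star (zero albedo assumed)."""
    return 278.0 * (1.0 - albedo) ** 0.25 / np.sqrt(a_au)

def restricted_jeans(m_p, r_p, teq):
    """Restricted Jeans escape parameter Lambda (m_p in M_earth, r_p in R_earth)."""
    return G * m_p * M_EARTH * M_H / (K_B * teq * r_p * R_EARTH)

teq = t_eq(0.51)                         # ~390 K at 0.51 AU
lam = restricted_jeans(3.86, 6.0, teq)   # assumed post-formation radius of 6 R_earth
print(f"T_eq ~ {teq:.0f} K, Lambda ~ {lam:.0f}")
# Lambda ~ 12-13, i.e. well below ~20-30: the planet starts in the boil-off regime.
```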
At later times, the hydrodynamic model predicts a comparable or somewhat smaller escape rate than the energy-/recombination-limited model, as can be seen in the top left panel. However, by that time, the properties of the planets (namely the radii) have already diverged significantly between the two models, thus it is difficult to compare them. We will do this later when we compare the escape rates at fixed planet properties. To summarise, we see that for these distant low-mass planets, the hydrodynamic model predicts that comparatively more massive planets become super-Earths than in the energy-/recombination-limited model, shifting the valley to larger radii because of boil-off. This evaporation regime is included in the hydrodynamic model, but not in the energy-/recombination-limited one, which makes the difference. This planet is actually a typical example of the first category of planets where the energy-limited approximation consistently fails by under-predicting the escape rate <cit.>: planets with low-to-intermediate XUV irradiation and low gravitational potential. §.§.§ Close-in planet The second case, a close-in massive planet, is shown in Fig. <ref>. This is a planet at 0.04 AU (orbital period of about 3 days), with an initial total mass of about 20.9 M_⊕. The initial envelope mass is 1.72 M_⊕. This may seem a small envelope mass for the significant core mass, but it is a consequence of the very small orbital distance, which in formation simulations reduces the ability to accrete gas (see Eq. <ref>). Here, the key result is that the outcome is opposite to the distant case. As a matter of fact, in the energy-/recombination-limited model, the envelope is completely lost by 800 Myr. Instead, in the hydrodynamic model, the planet keeps an envelope until the end of the simulation at 5 Gyr; see Fig. <ref>. The remaining envelope mass at this time is actually tiny, but sufficient to lead to a radius of about 2.2 versus 2.5 R_⊕ for the two cases. In the case we study here, the difference between the two models is by construction small, since we have chosen a case that is just above/below the valley in the two models. The reason why the models lead to such different outcomes can be seen in the top left panel of Fig. <ref>. We see that as in the case of the distant planet, the hydrodynamic model initially predicts a stronger mass loss, which is again due to boil-off. However, compared to the distant case, the boil-off is here less extreme, leading to a difference in the evaporation rates of initially a bit more than one order of magnitude. This is due to the fact that the planet already starts with a higher Λ of about 17. Already after about 10 Myr, the evaporation rates become comparable in the two models, and after about 30 Myr, the escape rate in the energy-/recombination-limited model is consistently higher than in the hydrodynamic model. Neither the radii nor the masses (dominated by the core mass anyway) of the planets differ strongly at this point. The similar M_p, R_p, and the identical T_eq in the two simulations imply that Λ is also similar[It generally holds that decreasing R_p implies decreasing the evaporation rate Ṁ and increasing Λ, while decreasing M_p implies increasing Ṁ and decreasing Λ, so decreasing both M_p and R_p compensates to some extent (though the escape is more sensitive to variations in radii).]. Therefore, one can directly compare the evaporation rates in the two models. 
The lower rate in the hydrodynamic model is thus not merely a consequence of different planet properties, but a genuine consequence of the more complex physics included in the hydrodynamic model, and more specifically in the different temperature structure compared to that assumed in the energy-limited approximation, as we shall further discuss in the following section. The difference in the predicted escape rates is not very large (factor 2 to 3), but integrated over time, this is sufficient for complete evaporation in the energy-/recombination-limited, but not the hydrodynamic model. The underlying reason is that this planet is a typical example of the second regime where the energy-limited approach consistently fails by over-predicting the escape rate <cit.>, namely planets characterised by high XUV irradiation and high gravitational potential. §.§ Comparison of the escape models on the entire grid The previous section demonstrated the different outcomes of two selected planets using the two escape models, and which effects played a significant role. We now generalise this comparison to the entire grid of planets we simulated <cit.>. Figure <ref> compares the two models at an age of 50 Myr. At this time, boil-off in the hydrodynamic model will already have ceased, but strong XUV-driven evaporation (because we are still at early times) will be ongoing. Four panels are shown in the period–radius plane, colour-coding different quantities. In Panels (a), (b), and (c), the results of the energy-/recombination-limited model are shown with coloured dots, in Panel (d) they show the results of the hydrodynamic model. In all cases we see the valley, which is, however, not yet at the same position as in Fig. <ref>, which shows the situation at 5 Gyr. Panel (a) shows colour-coded the ratio of the escape rate predicted by the hydrodynamic model over the escape rate predicted by the energy-/recombination-limited model, Ṁ_hyd / Ṁ_enRR. The latter is the rate that is actually used to model the evolution of the planets shown in the panel. The former is merely calculated given the properties of the planets at this moment, but is not used for the evolution. This allows to compare the two models at fixed planet properties which was not easily possible in the analysis of the two individual cases. The plot reveals the two aforementioned shortcomings of the energy-/recombination-limited approximation. The first regime is shown by the blue points: for close-in compact and massive planets with high gravitational potential exposed to high XUV irradiation, the energy-/recombination-limited model overestimates the escape rate relative to the hydrodynamic model. This is shown in Panel (b), which colour codes the gravitational potential for the energy-/recombination-limited case. The reason for the incorrect results of the energy-limited approximation in this particular regime has been described in <cit.>: the assumed thermospheric temperatures underlying the energy-limited approximation are much higher than the ones found when directly solving the governing equations in the hydrodynamic model. The discrepancy is a consequence of the lack of downward heat conduction underlying the energy-limited approximation, leading to excessively high temperatures and thus loss rates. In reality, in the dense atmospheres at the XUV absorption height of planets with high gravitational potential, thermal conduction is a significant process, leading to lower temperatures and thus escape rates. 
The second regime is shown by red and yellow points: for planets with low-to-intermediate XUV irradiation and low gravitational potential, the energy-/recombination-limited model underestimates the escape rate relative to the hydrodynamic one. Here, the energy-limited approximation fails again because of the incorrectly assumed temperature structure <cit.>: for such planets, boil-off is the dominant escape mechanism. However, the energy-limited approximation implicitly neglects thermal energy already available in the atmosphere resulting from optical and infrared stellar irradiation. When Λ is low (less than about 30), this thermal energy is comparable to the binding energy, leading to boil-off. Boil-off is more intense for planets with lower masses, while planets more massive than approximately 10 M_⊕ are less affected <cit.>. The impact of boil-off is illustrated by Panels (c) and (d). In Panel (c) we see that the regime where the hydrodynamic model predicts significantly higher escape rates corresponds to the planets with the lowest Λ values. For them, boil-off and rapid mass loss would occur in the hydrodynamic model, but this is neglected in the energy-/recombination-limited approximation. This strong mass loss rapidly reduces the planet radius, increasing Λ until it approaches about 30, when boil-off stops. This explains what is seen in Panel (d): here, in the hydrodynamic model, there are no planets with Λ less than about 30, because the excess envelope mass has already boiled off by 50 Myr (or even much faster, as we saw for the two individual cases studied above). Apart from these two regimes of discrepancies, there is also a significant part where the two models yield similar escape rates (light blue to cyan to green colours in Panel (a)). The discrepant regimes are, however, the ones setting the valley slope. § RESULTS FOR INITIAL CONDITIONS DERIVED FROM KEPLER OBSERVATIONS The goal of this part is to understand if the main result obtained with the rectangular grid of initial conditions — the distinct slopes of the valley — also persists if we use more complex (and more realistic) distributions of the initial conditions, and apply a synthetic detection bias of the Kepler satellite survey to the model output. For this, we analyse the synthetic planet population obtained with the initial conditions described in Sect. <ref>, considering both the biased and the unbiased data. The top row of Fig. <ref> shows the scatter plot of transit radius as a function of orbital period for the two evaporation models at 5 Gyr. No detection bias was applied. Compared to the equivalent plot for the rectangular grid of initial conditions (Fig. <ref>), we see a number of similarities, but also differences. Similarities are the scarcity of close-in planets with large radii in the top left corner of the plots (the evaporation desert), and the presence of the evaporation valley running diagonally downward. An important similarity regarding the main subject of the paper is that the slope of the valley is steeper for the energy-/recombination-limited model than for the hydrodynamic model. We quantify the slopes in the following. We also see the following differences: the period distribution is by construction different, with an increase in planet frequency from 1 to about 8 days in period, followed by a distribution constant in logP. This simply reflects the initial conditions (Eq. <ref>). A more important difference concerns the presence of an overdensity of planets in the region immediately above the valley. 
This populates the sub-Neptunian peak in the radius histogram. At even larger radii (≳ 3.5 R_⊕) the frequency of planets drops strongly (the cliff, ). Both these features are important aspects of the observed planet distribution (e.g., ), but were absent in the rectangular grid. We see that with the inferred core and envelope mass distribution of <cit.>, we find these features also with our different forward (escape and interior structure) model. As a last difference we see that the valley is not fully empty, but contains some planets. There are two types of planets in the valley: First, massive bare cores (≳ 20-30 M_⊕) that started with very small post-formation envelopes (0.01 M_⊕ or less), such that they were fully evaporated despite the large core mass. These planets populate the gap from `below' and are dominant in the lower half of the depleted gap area. Second, lower mass planets that are in the process of losing the final part of their envelope. They only contain less than ∼ 1% of their initial envelope mass. These planets populate the gap from `above' and dominate in the upper half of the gap. Our procedure to obtain the gap locus (centre) was described in Sect. <ref>. The approach employing a running mean is illustrated with Fig. <ref>. It shows the Kernel Density Estimate of the radius distribution for the hydrodynamic model, including all detectable planets (grey dashed line) as well as planets within three different period intervals. We see how the valley position systematically moves to smaller radii with increasing orbital distance. From the minima in the distributions, we obtain 14 (13) positions for the middle of the valley, for the unbiased and biased population, respectively. These positions are shown with dots in the middle and bottom panels of Fig. <ref>. These panels show 2D Gaussian Kernel Density Estimations of the unbiased (middle) and biased (bottom) population. The impact of the detection bias which removes distant small planets is clearly visible. The two types of planets (super-Earths and sub-Neptunes) and the cliff are also clearly visible in this representation. Finally, we have made least-square fits to these points. They are shown with white lines in the figure. For the unbiased case, we find slopes of α = -0.18 and -0.11 for the energy-/recombination-limited and the hydrodynamic model, respectively. These values are identical to those derived for the rectangular grid (Table <ref>). We thus find that using these very different and more realistic initial conditions does not affect the main result found with the idealised rectangular grid of initial conditions. This indicates that the imprint of the different evaporation models is quite solid, and not strongly dependent on the initial conditions, like for example the assumed post-formation envelope mass. In the biased case, which is the one most directly comparable with observations, we find slopes of -0.16 and -0.10 for the energy-/recombination-limited and the hydrodynamic model, respectively. Applying the detection bias thus makes the slopes slightly less steep for both cases, an effect that should be kept in mind when comparing (unbiased) model predictions and observations, although the difference is tiny (see also ). 
More importantly, however, these values still compare in the same way to the observed values as was already found with the rectangular grid: the slope found with the hydrodynamic model is in very good agreement with the observed slope (covering a 1–σ range of about -0.13 to -0.07 depending on reference), whereas the energy-/recombination-limited model yields a too steep slope. Thus, applying a detection bias does also not alter the main conclusion of the study based on the idealised rectangular grid. Regarding the absolute position of the valley at 10 days period, we find that the middle of the gap is predicted to be at about 2.3 R_⊕ for both evaporation models (see Table <ref>). As for the rectangular grid, these are larger radii than observed (1.9 ± 0.2 R_⊕ according to ). Thus, our theoretical model seems to overestimate in a general way the strength of evaporation. As already discussed in Sect. <ref>, the presence of a lot of metals as coolants in the atmospheres might explain the difference. In it was found that a metal mass fraction of about Z=0.5 would shift the valley downward by approximately 0.4 R_⊕. This calculation did, however, employ a highly uncertain scaling of the escape rate with Z derived from photoevaporation models of protoplanetary disks <cit.>. Systematic tabulations of atmospheric escape rates found with hydrodynamic models as the one used here but now as a function of Z (represented, e.g., as scaled solar composition) including very high values instead of pure hydrogen would help to further clarify this point. Observationally, future measurements of the atmospheric composition of sub-Neptunes with JWST will also be useful for a better understanding. In the radius histogram including all detectable planets obtained from the model (dashed line in Fig. <ref>), the super-Earth peak is almost three times as high as the sub-Neptune peak. Observationally, they are in contrast of similar height <cit.>. It is not surprising that we get such a discrepancy, because the initial condition distributions we use were derived from an inference analysis utilising another evaporation (forward) model. Modifying the initial conditions would, however, allow to change this ratio: by shifting the core mass distribution to more massive values, a higher fraction of planets would be massive enough to keep a H/He envelope and populate the sub-Neptunian peak. The minimum core mass necessary to keep some H/He at a given orbital distance was analytically derived in (their Eq. 29). § SUMMARY AND CONCLUSIONS In this work, we tested both a simpler XUV-driven energy-/recombination-limited escape model and a complex hydrodynamic escape model <cit.> against a key observational constraint, the valley slope. The latter model includes the boil-off, blow-off, and Jeans escape regimes. The comparison was done by coupling the two escape models to a model for the temporal evolution of planetary interiors. This interior model solves the classical spherically symmetric interior structure equations. With these models, we simulated the evolution of 6000 planets on an idealised rectangular grid in orbital period and mass, and for about 37 000 planets with initial conditions (period, core, and envelope mass) derived from an inference analysis of the Kepler survey planet population <cit.>. We studied the valley locus predicted by the two escape models at 5 Gyr. 
We find that the hydrodynamic model leads to a valley slope dlogR / dlogp = -0.11 both for the rectangular grid and the unbiased synthetic Kepler planet population. Applying a simple detection bias of the Kepler survey <cit.> leads for the hydrodynamic model to a slope of -0.10. These slopes agree closely with the observational results derived by <cit.> (-0.09^+0.02_-0.04) and <cit.> (-0.11 ± 0.02). Like past photoevaporation models, the simple energy-/recombination-limited escape model in contrast predicts too steep a slope of -0.18 for the rectangular grid and the unbiased synthetic Kepler population, and of -0.16 for the biased synthetic population. Regarding the radius of the lower boundary of the valley at a fixed 10-day orbital period, both models yield similar values for the rectangular grid, namely about 1.8–1.9 R_⊕. The overly steep slope in the energy-/recombination-limited escape model is caused by two limitations of this approximation <cit.>, as is found by comparing the escape rates for both individual planets and the entire grid: it underestimates escape rates for distant, fluffy low-mass planets while simultaneously overestimating them for close-in, compact massive planets. The former is caused by the omission of the boil-off escape regime in the purely XUV-driven energy-/recombination-limited model, while the latter can be explained by its neglect of heat conduction in the atmosphere. Boil-off <cit.> causes a rapid mass decrease in the first few megayears for fluffy planets with a low restricted Jeans-escape parameter. These are planets with considerable thermal energy stored in their atmosphere relative to their gravitational potential. This initial mass loss is significant enough to alter the slope of the valley by evaporating the atmosphere of more massive planets at larger distances. It is interesting to note that the escape rate in the boil-off regime depends via the planetary equilibrium temperature on the stellar continuum irradiation (VIS/IR), and not the XUV irradiation. This is a property it shares with core-driven escape, in contrast to the purely XUV-driven energy-/recombination-limited model. Our results suggest that a combination of aspects of both models (namely heating both in VIS/IR and XUV) yields a valley slope agreeing with observations. The second limitation is the neglect of heat conduction: the energy-/radiation-limited approximation produces higher temperatures in the atmosphere than when conduction cools the upper atmosphere, as is the case in the hydrodynamic model. The energy-/radiation-limited model therefore overestimates the temperature, leading to an excessive mass loss rate. This effect is prevalent for massive close-in planets, which are highly XUV irradiated and feature compact atmospheres <cit.>. By including conductive cooling, the hydrodynamic model predicts lower mass loss rates over time for such planets, leaving lower-mass planets still with a H/He envelope, which also alters the valley slope. In combination, the two shortcomings act together in the same direction: the too weak evaporation at larger distances (resulting in smaller evaporated cores) and the too strong evaporation at smaller distances (resulting in larger evaporated cores) give the valley a too steep slope in the energy-/recombination-limited model. Our results indicate that the more realistic description of the thermospheric temperature structure in the hydrodynamic model relative to the energy-/recombination-limited model is critical. 
It allows to reproduce one of the most important observational constrains for escape models, the valley slope. Future work will address the evaporation valley's dependency on host star mass <cit.> and the effect of including metals, which may act as coolants. When compared with observational studies exploring the temporal <cit.> and stellar mass dependency <cit.>, this should allow one to develop an even better understanding of the origin of the radius valley. L.A., A.V.O., and C.M. acknowledge support from the Swiss National Science Foundation under grant 200021_204847 `PlanetsInTime'. Parts of this work have been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. Calculations were performed on the Horus cluster at the University of Bern. The research described in this paper was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics Space Administration. § IMPACT OF THE POST-FORMATION LUMINOSITY In this appendix, we evaluate the sensitivity of our results for the valley locus on the post-formation luminosity L_0 or a closely related quantity, the specific entropy s_0 in the inner convective zone. §.§ Parameterization and expected range of s_0 As in , our nominal approach is to assume an initial luminosity L_0 that is given by a fit to results of planet formation population syntheses <cit.>, L_0/L_♃≈ 0.008 ×(M_core/M_⊕)^2.5. In this equation, L_♃ is the intrinsic luminosity of Jupiter today (about 8.7× 10^-10 L_⊙). It is clear that in reality, depending on the particular formation history of a planet, the post-formation luminosity may vary <cit.>. <cit.> for example found a spread in post-formation entropy s_0 in the planet mass range of interest here of about 1 to 1.5 k_B/baryon at fixed envelope mass. These earlier works investigated the post-formation entropy mainly in the context of giant planets and their detectability with direct imaging. More recently, the impact of the post-formation thermodynamic state was also addressed for evaporating planets: <cit.> showed how the post-formation entropies of young evaporating planets might be constrained by observations. <cit.> found that the initial entropies of planets have a minor or even absent effect on most of the population of evolved planets with ages of ∼ 1 Gyr. Only for planets suffering extremely strong atmospheric mass loss, s_0 was found to be of importance. A low importance of the entropy is also expected from the rather weak dependency of the thickness of the convective zone of H/He envelopes on the age and thus entropy <cit.>. Our approach here is to re-run the evolutionary simulations on the rectangular grid of initial conditions with the two evaporation models, but instead of using Eq. <ref>, we cover a wide range of s_0, including the one suggested by more modern formation models. We can then systematically study how s_0 affects the valley location. For this, we generalise the parameterisation of s_0 of <cit.>, s_0 = s_0,n + M_p/25 M_⊕ k_B/baryon. <cit.> fixed s_0,n to 6 k_B/baryon. Here, we generalise this and use s_0,n = 6, 7, 8, and 9 k_B/baryon. Before discussing the results of these grid simulations with different s_0,n, we compare the s_0 obtained in this way with the ones predicted by the recent comprehensive planet population synthesis simulation NGPPS <cit.>. 
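For concreteness, the two parameterisations above can be evaluated directly; the following is a minimal numeric sketch in Python (masses in Earth units; the specific input values are illustrative only):

def L0_nominal(M_core):
    # Post-formation luminosity in units of Jupiter's present luminosity L_Jup
    return 0.008 * M_core**2.5

def s0(M_p, s0_n):
    # Post-formation entropy in k_B/baryon with the offset s_0,n
    return s0_n + M_p / 25.0

print(L0_nominal(20.0))              # about 14.3 L_Jup for a 20 M_Earth core
for s0_n in (6.0, 7.0, 8.0, 9.0):
    print(s0_n, s0(23.64, s0_n))     # entropies for a planet of total mass 23.64 M_Earth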
These simulations represent a much improved update to the ones used to derive the original fitting equation <cit.>. These NGPPS results are generated with the Generation III Bern Model <cit.>, which is a complex end-to-end formation and evolution model based on the core accretion paradigm. The model solves the 1D internal structure equations during the formation phase (both attached and detached states) and the evolutionary phase. In the luminosity calculation it takes into account the accretion of planetesimals and gas, the cooling and contraction of the envelope, radiogenic heating, as well as giant impacts. The model also takes the concurrent formation of several planets in one disk into account, in contrast to the older <cit.> syntheses, which used the one-embryo-per-disk approximation. This leads to more diverse formation pathways <cit.>. The left panel of Fig. <ref> shows as black dots the entropy at the core-envelope boundary of the planets in the NGPPS simulation. The nominal synthetic population NG76 is shown at the moment when the gas disk dissipates, which corresponds to ages between 1 and 10 Myr. The host star mass is 1 M_⊙. We see that, generally speaking, the (mean) entropy is an increasing function of the planet mass. Especially at smaller masses, there is significant spread in s_0. Given the high density of points in this mass range, it is, however, difficult to get a quantitative picture of the distribution of s_0 from the scatter plot. Thus, in the right panel we additionally show the kernel density estimation of the distribution of s_0 for three mass ranges of interest for our study. We see that the mode indeed increases with mass, lying at about 7.3, 8.0, and 8.3 k_B/baryon for the low, mid, and high mass range. The FWHM is about 1 to 1.5 k_B/baryon in the three cases. This spread is thus similar to the one in the older syntheses. The grey points show the s_0 obtained with the nominal approach (Eq. <ref>). Since we here specify L_0 and not directly the entropy, and given that the atmospheric boundary conditions (in particular the temperature) also affect the relation between L_0 and s_0 (see <cit.> and <cit.>), a range of s_0 values occurs. The points fall on lines of fixed orbital distance (or equilibrium temperature), with the high s_0 values corresponding to the closest distances. We also see that the majority of the grey points fall into a similar range as the majority of the black points. Thus, the simple fit derived from the older population syntheses still seems to capture — at least in a rough way — the new NGPPS results for s_0. The four coloured lines finally show Eq. <ref> with the four values of s_0,n. One sees that the two extreme values (s_0,n = 6 and 9 k_B/baryon) are not representative of the predictions of the formation simulations, but are clearly too low/high in comparison (one should keep in mind the logarithmic nature of the entropy: a numerically small difference in s_0,n actually corresponds to a very significant change in the gravothermal heat content). As visible from Fig. <ref>, in the formation simulations there are, in particular, virtually no low- and intermediate-mass planets with s_0 as low (high) as 6 (9) k_B/baryon. §.§ An example case Figure <ref> shows the temporal evolution of a specific planet from the rectangular grid for the four s_0,n. The energy-/recombination-limited escape model is used, but qualitatively equivalent effects also occur for the hydrodynamic model. 
This planet has an orbital distance of 0.1 AU, M_core = 20 M_⊕, and M_env,0 = 3.64 M_⊕. The L_0 is 0.34, 12.1, 602.4, and 56104.3 L_♃ for the four s_0 studied. The latter value is certainly extremely high for a planet of this mass <cit.>. Equation <ref> yield for comparison 14.3 L_♃. This planet was chosen because it illustrates with a specific example the two main findings of the grid analysis of the valley location as a function of s_0,n in the next section: namely a weak impact of s_0,n for the three lower entropy values, and envelope overflow for some planets for s_0,n = 9 k_B/baryon. In the left panel we see the radius as a function of time. The initial radius is as expected the larger the higher s_0,n is. This has the well-known consequence (e.g., ) that stronger XUV-driven atmospheric escape will occur for a higher s_0 at young ages, such that at high ages, when the initial s if forgotten, the planet will have a smaller radius because it has a lower mass envelope. For the highest s_0 case, there is, however, an additional effect: the huge initial radius is here larger than R_roche, meaning that some envelope gas is unbound. In the model, we then remove at each timestep one third of the mass outside of R_roche. This factor smaller than unity (which would in principle be the value to use) was chosen for numerical stability. The exact value is, however, inconsequential: in any case, extremely rapid and strong mass loss occurs until the outer radius becomes smaller than R_roche, and only the time duration until this happens varies somewhat with the specific fraction chosen. On the other hand, in a situation of such rapid mass loss like here, quantitative results of our 1D strictly hydrostatic model with a radially constant luminosity at a given time should be regarded with caution. As is visible in the middle panel, this overflow phase removes about one third of M_env,0 on an extremely short timescale which is on the order of just 100 years. At late times, this Roche lobe overflow has the consequence that the planet has a clearly smaller radius (4.05 R_⊕) and mass than the other three cases. For them, the radius varies only between 4.39 and 4.47 R_⊕. The largest radius corresponds to the lowest s_0 because this planet suffered from less “normal” XUV-driven escape because of its higher mean density at young ages, as mentioned. The occurrence of such an overflow phase is indicative of an unrealistic combination of initial conditions for the evolutionary phase in terms of core mass, envelope mass, and luminosity. In reality, during the precedent formation phase, while embedded in the nebula, a core of such a high luminosity (caused for example by a burst of solid accretion) would not posses an envelope of this mass. Instead, potential excess gas would get expelled out of the Hill sphere back into the surrounding disk, and M_env would be lower than assumed here. This effect is by construction not taken into account when s_0 is assumed to be only a function of the total mass, as it is the case both for Eq. <ref> and <ref>. In the formation simulations solving the internal structure equations, this is in contrast automatically taken into account. Thus, whenever possible, s_0 should be estimated in evolutionary models not only based on the total planet mass, but the core and envelope mass separately. Such a prescription for L_0(M_core, M_env) can be found in . The right panel of Fig. <ref> shows the Kelvin-Helmholtz timescale. 
It is calculated with the actual numerically obtained total energy of the planets and not the approximation G M_p^2/R_p. This approximation can yield very different incorrect values for planets with a very large, extremely tenuous outer envelope, as it is the case here at early times. We see that s_0,n = 9 k_B/baryon corresponds to an extremely small T_KH of less than 10^4 yr. A planet would thus extremely quickly evolve away from such conditions, making it an unlikely state to exist exactly at the moment of disk dissipation. The lowest s_0,n = 6 k_B/baryon on the other hand yields an extremely long initial T_KH∼ 10^10 yr. The radius hardly changes for about 1 Gigayear. Such an extremely cold start seems also unlikely given the energy liberated when accreting a solid core of 20 M_⊕. §.§ Valley locus as function of s_0 The example of this individual planet suggests that the impact of s_0 should be rather small, except if an unexpected high entropy is used. Figure <ref> showing the rectangular grid of simulations indeed reflects a similar pattern. The figure, which can be compared with Fig. <ref> using the nominal L_0, shows for the four s_0,n the radius at 5 Gyr as a function of orbital period, colour-coding the total mass. The hydrodynamic escape model is used. As for the nominal rectangular grid (Sect. <ref>), we have made a least square fit to determine the valley slope α and the normalisation radius at 10 days period, R̃_b. The values for the hydrodynamic model are given in Table <ref>. The result for the energy-/recombination-limited model are similar. The panel in the bottom left corner of Fig. <ref> differs clearly from the other three, which are in contrast similar to each other. We see an absence of planets in the upper right corner. The iso-mass curves visible through the colour-code are significantly shifted, especially at larger orbital distances. The valley slope is also less steep than in other three cases. These differences are the consequence of intense mass loss resulting from Roche lobe overflow right at the beginning of the simulations, as seen in the example case. It affects both planets far from the valley (as in the example), but also planets close to it. An interesting point is here that the maximum radii are limited to about 3.5 to 4 R_⊕ which corresponds to the radius above which observationally the frequency of planets drop strongly (the cliff, ). This suggest that in the very high entropy case, it is not necessary to fine-tune the initial (i.e., post-formation) H/He masses to reproduce the cliff. This echoes the suggestion of <cit.> that the “boil-off” process could be partially responsible for the lack of larger planets. Another small feature visible in Fig. <ref> is the absence of points in the bottom left corner. This is a consequence of the following: for these very close-in, low-mass planets, no structure was found for the requested s_0. Because of Eq. <ref>, these planets have tenuous atmospheres approaching for lower s_0 an isothermal structure. The given equilibrium temperature then excludes certain combinations of core mass, envelope mass, and s_0. This is in contrast to colder and more massive planets with an inner convective zone. As discussed in the previous section, our quantitative results for planets undergoing Roche lobe overflow should be taken with caution, given our model's capabilities. However, this process only affects planets with s_0,n = 9 k_B/baryon, which is not a likely initial condition for low-mass planets. 
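A minimal sketch of the least-squares determination of the valley slope α and the normalisation radius R̃_b at 10 days is given below (the arrays of valley radii and periods are placeholders; in practice they are extracted from the evolved grid as described above):

import numpy as np

# Hypothetical valley radii R_b (in Earth radii) at a set of orbital periods P (in days)
P   = np.array([3.0, 5.0, 10.0, 20.0, 50.0])
R_b = np.array([2.15, 2.05, 1.90, 1.77, 1.60])

# Fit log10 R_b = log10 R10 + alpha * (log10 P - 1), a power law normalised at 10 days
x = np.log10(P) - 1.0
A = np.vstack([x, np.ones_like(x)]).T
alpha, logR10 = np.linalg.lstsq(A, np.log10(R_b), rcond=None)[0]
print(alpha, 10**logR10)   # slope d log R / d log P and the valley radius at a 10-day period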
The important conclusion from examining the role of s_0 is rather the following: for the relevant, very wide range of s_0,n from 6 to 8 k_B/baryon, the impact of the post-formation entropy on the final valley slope is only very small, as can be seen in Table <ref>. The slope α hardly changes with values between -0.113 and -0.117. This is also the same as found for the nominal L_0. We do see that R̃_b shifts to higher values with increasing s_0 as expected, but the shift from 1.81 to 1.90 R_⊕ is rather small. This is especially the case when one considers that formation models predict a spread of about only 1, and not 3 k_B/baryon at fixed planet mass. To summarise, we find that varying s_0,n over a wide range of 6 to 8 k_B/baryon has virtually no impact on the valley slope, and shifts R̃_b only by a rather small increment. This range of initial entropies includes those suggested by formation models and leads to stable initial conditions where the initial planet radius is smaller than the Roche lobe. Only when using a for this mass range unrealistically high s_0,n = 9 k_B/baryon, the impact becomes significant, because mass loss via Roche lobe overflow occurs immediately at the beginning of the simulations. Such unstable initial conditions are, however, not predicted by formation models.
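For reference, the simple treatment of Roche-lobe overflow discussed in the example above can be summarised in the following sketch (the removal fraction per timestep is the numerically motivated choice described there; all other quantities come from the structure solver, which is not shown):

def apply_roche_overflow(M_env, R_planet, R_roche, M_outside_roche, f_remove=1.0/3.0):
    # If the planet overfills its Roche lobe, strip a fraction of the envelope mass
    # lying outside R_roche; repeated every timestep until R_planet < R_roche.
    if R_planet > R_roche and M_env > 0.0:
        M_env = max(M_env - f_remove * M_outside_roche, 0.0)
    return M_env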
http://arxiv.org/abs/2307.01309v1
20230703192618
Social Impressions of the NAO Robot and its Impact on Physiology
[ "Ruchik Mishra", "Karla Conn Welch" ]
cs.RO
[ "cs.RO" ]
Social Impressions of the NAO Robot and its Impact on Physiology Ruchik Mishra Department of Electrical and Computer Engineering University of Louisville Louisville, USA [email protected] Karla Conn Welch Department of Electrical and Computer Engineering University of Louisville Louisville, USA [email protected] Received ; accepted ============================================================================================================================================================================================================================================================================================================== The social applications of robots possess intrinsic challenges with respect to social paradigms and heterogeneity of different groups. These challenges can be in the form of social acceptability, anthropomorphism, likeability, past experiences with robots etc. In this paper, we have considered a group of neurotypical adults to describe how different voices and motion types of the NAO robot can have effect on the perceived safety, anthropomorphism, likeability, animacy, and perceived intelligence of the robot. In addition, prior robot experience has also been taken into consideration to perform this analysis using a one-way Analysis of Variance (ANOVA). Further, we also demonstrate that these different modalities instigate different physiological responses in the person. This classification has been done using two different deep learning approaches, 1) Convolutional Neural Network (CNN), and 2) Gramian Angular Fields on the Blood Volume Pulse (BVP) data recorded. Both of these approaches achieve better than chance accuracy (>25%) for a 4 class classification. social impressions, NAO robot, classification, CNN, Gramian Angular Field, physiological signal processing § INTRODUCTION Autism Spectrum Disorder (ASD), as an umbrella term, has been associated with challenges in social communication and interaction along with restrictive and repetitive behaviors according to the Diagnostic Statistical Manual <cit.>. The world ASD population is around 1.5 percent <cit.>, with 1 out of 54 children being diagnosed in the United States alone <cit.>. More recently, this number has increased by 104% <cit.>. Since, there is no cure for autism <cit.>, there exist numerous treatments/interventions, as presented by the authors in <cit.>, not all of which are scientifically proven to have positive results <cit.>. To mitigate the scientific shortcomings of some of these studies, Evidence Based Practices (EBP) have presented numerous interventions that have been chosen as a result of positive scientific evidence <cit.>. Among them, Technology-Aided Instruction and Intervention (TAII) has been on the rise since a few decades now as is evident from the works of <cit.>, some of which have also been used to form the guidelines for EBP like <cit.>. Apart from following the EBP, it is also essential to take into account the acceptability of the robot in a therapeutic scenario that involves a human-robot interaction (HRI). Due to the heterogeneity of the ASD population, these responses can vary based on the individual <cit.>. In addition to considering the social acceptability of the robot, factors such as repeated exposure to a robot <cit.>, features of robots to be considered in therapy <cit.>, anthropomorphic appearance and intonation <cit.> etc. have also been studied. 
The reason for these studies can be attributed to the fact that these interventions are directed towards bringing about positive effects in the individual with ASD. Another aspect that can be associated with social acceptability of the robot during therapy is the uncanny valley, as has been studied by the authors in <cit.>. Since the human-like traits of a robot can stir discomfort in people surrounding it; this phenomena, introduced in <cit.>, has been explored using different modalities of HRI <cit.>. Since the effect of the uncanny valley has been shown to be more profound on the ASD population than on neurotypicals <cit.>, it becomes imperative to validate individual communication modality in a robotic intervention to be analyzed on the neurotypical population first. Since the uncanny valley is associated with the features of the robots in terms of the appearance and its anthropomorphic appeal, it can have an effect on the affective states of the individual <cit.>. In this work, we have presented a one-to-one HRI as shown in Figure <ref> with neurotypical adults. We start with the use of the humanoid NAO robot for a neurotypical population with four different conditions (A-D) (see Section <ref> for more details). The participants' prior experience with robots has also been considered in this study. We have analysed the response of the participants on five pf of the robot namely, perceived safety, anthropomorphism, likeability, animacy, and perceived intelligence. We have analysed if the different conditions (A-D) have any effect on the users' perception of the five perceived robot features. In addition, we have compared the effects of varying amounts of prior experience with robots among the participants on pf under the four conditions (A-D). Further, the effect of these conditions on the physiological signals have been differentiated using deep learning algorithms. This paper has been arranged in the following way: Section <ref> describes the methodology used in this paper followed by the results in Section <ref>. Further, Section <ref> outlines the future questions we would like to address in our study followed by the conclusion in Section <ref>. § METHODOLOGY §.§ Data acquisition We have considered 30 neurotypical adults for this study. The mean age of all the participants was 21.4 years with a standard deviation of 3.36 years. Out of the 30 adults, 43% were male participants and the rest were female participants. During the sessions, the participants' audio, video and physiology was recorded. The physiological signals were recorded using the Empatica E4 wrist band. For the purpose of this study, just the BVP data was used for our deep learning based classification approaches. Before conducting this study, it was approved by the University's Institute Review Board (IRB). In addition, consent was taken from all the participants before the activity. There was no compensation involved for the participants involved in this research. §.§ Impressions of the NAO robot The participants were given a set of pre-defined questions to use to interact with the NAO robot. The modalities of the NAO robot were tested under four different scenarios: * Default NAO voice with smooth motions (Condition A), * Amazon Polly Justin voice [26] with smooth motions (Condition B), * Default NAO voice with jerky motions (Condition C), * Amazon Polly Justin voice with jerky motions (Condition D). 
The participants' reactions to their impressions of the NAO robot were then assessed through the Godspeed and Robotic Social Attributes Scale (RoSAS) <cit.> questionnaires across five categories: 1) perceived safety (PS), 2) anthropomorphism (AP), 3) animacy (AM), 4) likeability (LK), and 5) perceived intelligence (PI), similar to the authors in <cit.>. The scores for these categories were recorded after each of the robot conditions (A-D). Since the participants had varying backgrounds in terms of prior experience with robots in their personal or professional lives, this difference in experience with robots has been accounted for while assessing perceived safety, anthropomorphism, animacy, likeability, and perceived intelligence. The experience of a participant with robots was recorded on a scale of 0-3, where 0 stands for no familiarity and 3 stands for an intermediate level of familiarity (e.g., completed projects with robots). All of these statistical inferences were made using the one-way Analysis of Variance (ANOVA) <cit.>, as discussed in later sections. §.§ Effect on Physiology Since the BVP data is recorded during HRI, we use it to examine whether these physiological responses are differentiable using our deep learning approaches. The BVP data is collected using a windowing approach. Since we use two different approaches for classification from the literature, the problem formulation for both of them is explained below. §.§.§ Problem formulation The first approach consists of just CNNs for time series classification of the BVP signal. Given the univariate BVP signal data 𝒳 = {x_1, x_2, … , x_n}, we split the time series into windows as shown in the equation f_split(𝒳) = 𝒴 ⇒ g_i(x) = {x_i·j, … , x_i·j + p} → 𝒴_i where i indexes the training data points obtained by splitting the time series into windows of p steps each, j denotes the stride length we used, and 𝒴 denotes the labels w.r.t. the conditions A-D. This basic formulation is used to find the multivariate function g(x) using our proposed deep learning approaches. §.§.§ Deep Learning for classification The first approach used for our classification problem is based on CNNs applied to the raw time series signal, which has been split into smaller windows. The network architecture is shown in Figure <ref>. For the second architecture, instead of using raw signals as inputs, we convert the split signals from equation <ref> into images based on Gramian Angular Fields <cit.>. In this work, we use the Gramian Angular Field Difference (GADF) as defined in <cit.>: GADF_i = (√(I - X̃^2_[i·j : i·j + p]))^T · X̃_[i·j : i·j + p] - (X̃_[i·j : i·j + p])^T · √(I - X̃^2_[i·j : i·j + p]) where X̃ contains the polar coordinates of the time-series BVP signal 𝒳, and I is the unit vector <cit.>. The polar coordinates X̃ can be expressed as in <cit.>: {x̃_i | x̃_i ∈ X̃, -1 ≤ x̃_i ≤ 1, x̃^i_-1 = ((x_i - max(𝒳)) + (x_i - min(𝒳))) / (max(𝒳) - min(𝒳)), x̃^i_0 = (x_i - min(𝒳)) / (max(𝒳) - min(𝒳))}, where x_i ∈ 𝒳. In the above equation, the values of the BVP signal 𝒳 can be scaled to [-1,1] as given by the expression for x̃^i_-1, or to [0,1] as given by the expression for x̃^i_0. § RESULTS AND DISCUSSIONS §.§ Effects of Conditions (A-D) on user experience §.§.§ Test for equal variances The first step is to check the null hypothesis (H_o) in equation <ref> for each of the features: PS, AP, AM, LK, and PI. 
H_o: σ_A^2 = σ_B^2 = σ_C^2 = σ_D^2 H_1: ∃σ_i≠σ_j for i≠ j where i,j = {A,B,C,D} From the p-values, obtained using Bonferroni 95% confidence intervals for standard deviations, we can reject the null hypothesis for PS, LK, and AM (p-value≤ 0.05) which means that the variances for these groups are different <cit.>. On the other hand, we accept the null hypothesis for anthropomorphism and perceived intelligence (p-value> 0.05), which means that for each of these features, we can consider the variances of conditions A-D to be equal. §.§.§ ANOVA for each perceived feature For this we consider the data of all the 30 participants included in this study irrespective of their previous exposure to robots. We use Welch's one-way ANOVA <cit.> for perceived safety, likeability, and animacy since we do not assume equal variances as was shown from the results in Figure <ref>. We compare the difference between conditions A-D by making a hypothesis similar to that in equation <ref>, but this time with the means of the values: H_o: μ_A = μ_B = μ_C = μ_D H_1: ∃μ_i≠μ_j for i≠ j where i,j = {A,B,C,D} The main effects plot for each of the perceived features has been shown in Figure <ref>-<ref>, and the p-values for them have been shown in Figure <ref>. From the p-values it can be seen that for perceived safety, anthropomorphism, likeability and animacy, we can reject the null hypothesis (H_o) since the p-value ≤0.05, whereas we accept H_o for perceived intelligence (p-value >0.05). This means that under different conditions A-D, a user has different levels of perception of PS, AP, LK, and AM; but the perceived intelligence of the robot remains unaffected. From the main effects plot in Figures <ref>-<ref>, we can also conclude that: * Perceived safety: B>A>C>D. This means that the modality of the Amazon Polly Justin voice of the NAO robot with smooth motions was perceived to be the safest among users. * Anthropomorphism: A>B>D>C. The users perceived the NAO robot's default voice with smooth motions to be most anthropomorphic (human-like). * Likeability: A>C>B>D. The participants liked the NAO's default voice with its smooth motions most, similar to the case of anthropomorphism. * Animacy: A>B>C>D. Condition A still has the most preference for animacy as compared to the other conditions. §.§ Effects of Experience with robots In this section, we compare the effects of varying robot experiences for each of the significantly perceived features (p-value ≤ 0.05) evaluated from the previous section (i.e perceived safety, anthropomorphism, likeability, and animacy). We have considered 4 levels of robot experiences: * 0: No experience/never heard of robots before today * 1: Fundamental awareness (basic knowledge/awareness of robots) * 2: Novice (limited direct experience with robots) * 3: Intermediate (completed practical applications with robots) We use a similar hypothesis as in equation <ref> for finding the effect of robot experience on the perceived features: H_o: μ_0 = μ_1 = μ_2 = μ_3 H_1: ∃μ_i≠μ_j for i≠ j where i,j = {0,1,2,3} The details of test of variances, p-values for all combinations of conditions and features have been shown in Table <ref>. It can be seen that prior experience with robots does not have any effect on the participants under any of the conditions (A-D) as far as perceived safety of robots during the interaction is concerned. The same holds for animacy too. 
On the other hand, the conditions (A-D) under which the anthropomorphism and likeability of the robot depend on prior experience with robotics have been highlighted in Table <ref>. The main effects plots of these significant values are shown in Figure <ref>. Based on these graphs, we conclude that for condition A, novice participants (level-`2') found the robot to be more anthropomorphic than the other participants. Similarly, participants with experience level-`3' found the robot more likeable for conditions A, C, and D. §.§ Effects on Physiological response From equation <ref>, we can see that the sliding window approach uses a stride of j samples between two consecutive windows. The approach has been adopted from the authors in <cit.>, where an eight-second sliding window is used with a two-second stride length. We analyzed the effect of different stride lengths on the train, test, and validation accuracy of both of our models. All the code for these deep learning models was run on Google Colab's premium version. The learning rate used for each of these simulations was 0.001 with Adam optimization and accuracy as the metric. The framework used for this work was Tensorflow. §.§.§ CNN model with raw BVP signal The effective length, as defined by the equation in <ref>, has been used as the reference for inspecting different lengths of the sliding window and strides: Effective length = j/p. As can be seen from Figure <ref>, the effective length of 0.25 (Figure <ref>) fetched the maximum test accuracy (0.8179), followed by the effective length of 0.39 (Figure <ref>) with a test accuracy of 0.6767 and the effective length of 0.5 with a test accuracy of 0.5831. In addition, the validation accuracies are in agreement with the test accuracies for each of the effective length values. Hence, as the effective length increases, the validation and test accuracies of the model decrease. §.§.§ CNN model with Gramian Angular Field The training and test accuracy for this case are shown in Figure <ref>. As can be seen from the low test accuracies obtained using the Gramian Angular Field, the effective length did not have any influence on the test accuracies. This can be attributed to the trend stationarity of the BVP signal <cit.>. §.§.§ Proof of trend stationarity of BVP signal We have used the Augmented Dickey-Fuller (ADF) test <cit.> and the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test <cit.> for testing the trend stationarity of the BVP signal. Equation <ref> shows the hypothesis as suggested in <cit.>. H_o: Time series has a unit root H_1: Time series has no unit root For the ADF test, we cannot reject the null hypothesis (H_o) since the p-value = 0.1 > 0.05, whereas the KPSS test rejects its null hypothesis of stationarity (p-value = 0 < 0.05). Together, these outcomes are consistent with the BVP signal being trend-stationary <cit.>, which is in line with the observations by the authors in <cit.>. § LIMITATIONS AND FUTURE WORK For the user acceptance part of this paper, we would like to further test the impressions of the robot's modalities with children with ASD for comparing the perceived features of the NAO robot. In addition, we would like to expand the capability of the NAO robot to be able to understand and respond to candid conversations rather than pre-defined questions, which is aligned with the aims in <cit.>. For the classification problem described, we would like to include the use of autoencoders for denoising the BVP signal <cit.> towards making an end-to-end deep learning pipeline for classification. 
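As an illustration of what such a denoising stage could look like, the following is a minimal sketch of a 1-D convolutional denoising autoencoder for BVP windows, written with the Keras API (the window length assumes 8-second windows at the Empatica E4's 64 Hz BVP sampling rate; the layer sizes are illustrative assumptions and not part of the pipeline used in this work):

import tensorflow as tf

def build_bvp_denoiser(window_len=512):
    # 8 s x 64 Hz = 512 samples per window (assumed); input shape is (window_len, 1)
    inp = tf.keras.Input(shape=(window_len, 1))
    x = tf.keras.layers.Conv1D(16, 7, activation="relu", padding="same")(inp)
    x = tf.keras.layers.MaxPooling1D(2, padding="same")(x)
    x = tf.keras.layers.Conv1D(8, 7, activation="relu", padding="same")(x)
    x = tf.keras.layers.MaxPooling1D(2, padding="same")(x)          # bottleneck
    x = tf.keras.layers.Conv1D(8, 7, activation="relu", padding="same")(x)
    x = tf.keras.layers.UpSampling1D(2)(x)
    x = tf.keras.layers.Conv1D(16, 7, activation="relu", padding="same")(x)
    x = tf.keras.layers.UpSampling1D(2)(x)
    out = tf.keras.layers.Conv1D(1, 7, padding="same")(x)            # linear reconstruction
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# Training would pair noisy windows with cleaner targets, e.g. band-pass filtered BVP:
# model.fit(noisy_windows, clean_windows, epochs=50, batch_size=32)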
§ CONCLUSION In this paper, we present a study that involves the use of the humanoid NAO robot for a neurotypical population with four different conditions (A-D) of varying voice types and motions during HRI (see Section <ref> for more details). The participants' prior experience with robots has also been considered in this study. We have analysed the response of the participants on five perceived features (pf) of the robot namely, perceived safety, anthropomorphism, likeability, animacy, and perceived intelligence. We have analysed if the different conditions (A-D) have any effect on the users' perception of the five perceived robot features. In addition, we have compared the effects of varying amounts of prior experience with robots among the participants on pf under the four conditions (A-D). In the end, we demonstrated the effect of these conditions on the physiology (here BVP) of the participants. Based on the performance of our deep learning approach, we were able to classify the physiological responses of the participants under different conditions with more than chance (>25%) accuracy. Between the two approaches used for classification, using the raw signals with a CNN model (test acc: 0.8179 %) worked better than using GAF (test accuracy: 0.59%) attributing to the trend stationarity of the BVP signal. § ETHICAL IMPACT STATEMENT As described in Section <ref>, prior IRB approval was taken before conducting this study. Informed consent was taken from all the participants. In addition, they had an option to opt out of the study at any stage. The data collected during this study had no gender or racial bias. We had a close to equal male to female ratio (Male= 43%, Female= 57%). However, efforts were not made to ensure cross cultural data collection. Additional research would be needed to address the social impressions of the NAO robot across cultures. For the deep learning model used, the generalization is performed across the subject data collected currently. Although, only BVP data was used as a physiological marker for differentiating between the conditions A-D. This distinction might not be generalizable across different physiological signals like Electrodermal Activity (EDA) or temperature. § ACKNOWLEDGMENT The authors wish to acknowledge undergraduate researchers for their assistance with collecting the data. We also want to thank the adult subjects and the staff at the university-affiliated autism center. plain
http://arxiv.org/abs/2307.00342v1
20230701134515
Improving Multitask Retrieval by Promoting Task Specialization
[ "Wenzheng Zhang", "Chenyan Xiong", "Karl Stratos", "Arnold Overwijk" ]
cs.CL
[ "cs.CL", "cs.IR" ]
Sparse-Input Neural Network using Group Concave Regularization Bin Luo [email protected] Department of Biostatistics and Bioinformatics Duke University Durham, NC 27708, USA Susan Halabi [email protected] Department of Biostatistics and Bioinformatics Duke University Durham, NC 27708, USA August 1, 2023 =================================================================================================================================================================================================================================================================================================== In multitask retrieval, a single retriever is trained to retrieve relevant contexts for multiple tasks. Despite its practical appeal, naive multitask retrieval lags behind task-specific retrieval in which a separate retriever is trained for each task. We show that it is possible to train a multitask retriever that outperforms task-specific retrievers by promoting task specialization. The main ingredients are: (1) a better choice of pretrained model—one that is explicitly optimized for multitasking—along with compatible prompting, and (2) a novel adaptive learning method that encourages each parameter to specialize in a particular task. The resulting multitask retriever is highly performant on the KILT benchmark. Upon analysis, we find that the model indeed learns parameters that are more task-specialized compared to naive multitasking without prompting or adaptive learning.[Our code and model checkpoints are publicly available at <https://github.com/WenzhengZhang/TACO>.] § INTRODUCTION A standard approach to knowledge-intensive language tasks such as question answering (QA), entity disambigution, and fact verification is retrieval-based. Given an query, a retriever is used to efficiently search a large knowledge base (KB) to retrieve relevant “contexts”, typically in the form of short paragraphs. How these contexts are used is task-specific (e.g., entity disambiguation takes the title of the article in which the top retrieved context is found; QA predicts an answer from the contexts by through a reader model). In this paper, we focus on the retrieval step. In particular, we focus on multitask retrieval. In this setting, there are K > 1 downstream tasks that benefit from retrieval from a shared KB. A single retriever is then tasked with performing retrieval for K tasks. Multitask retrieval contrasts with task-specific retrieval in which a separate retriever is trained for each task, and has compelling advantages such as model simplicity (i.e., we can use the same model for all tasks rather than having to design potentially different models for different tasks) and memory efficiency at test time (K times smaller). Despite the practical appeal, the performance of multitask retrieval has been underwhelming, severely limiting its real-world applicability. Specifically, previous work by <cit.> trains DPR <cit.> on the union of all training datasets in the KILT benchmark <cit.>, but the model is outperformed by task-specific retrievers in 5 out of 8 tasks (page-level R-precision, validation split). In our experiments, we find that it is in fact outperformed in all tasks (often by substantial margins) when a stronger task-specific baseline is used. This result is surprising as well as disappointing given the usual benefits of multitask learning (e.g., data efficiency, reduced overfitting) when properly done. 
We debunk the previous negative result by presenting a multitask retriever that outperforms task-specific retrievers. The main theme of our work is that it is beneficial to explicitly promote task specialization. A first important source of improvement is a better choice of pretrained model, one that is explicitly optimized for multitasking. Specifically, instead of the standard retrieval encoder BERT <cit.>, we use T5 <cit.> which includes multitasking in its pretraining stage. Importantly, we use the same prompting as in pretraining (i.e., task indicator) to reduce the gap between pretraining and finetuning for multitask retrieval. A second source of improvement is a novel adaptive learning method in which we adatively upweight the task gradients by the parameter's sensitivity to these tasks to encourage task specialization. The resulting multitask retriever is highly performant on the KILT benchmark. We achieve 73.74% average page-level R-precision on KILT validation data and 72.84% average page-level R-precision on KILT test data. Upon analysis, we find that the model indeed learns parameters that are more task-specialized compared to naive multitasking without prompting or adaptive learning. § RELATED WORK <cit.> propose multitask retrieval largely as an extension of DPR. Their best model is a BERT-based dual encoder trained on the union of 8 retrieval tasks. While it performs comparably with task-specific DPRs on some tasks, it generally lags behind. In this work, we use stronger task-specific retrievers based on T5 and ANCE <cit.> all of which substantially outperform their multitask retriever. We argue that this negative result undermines the case for multitask retrieval and that it is crucial to demonstrate competitive performance. Our main contribution is producing this demonstration. We emphasize that achieving competitive multitask retrieval in practice is a highly difficult empirical problem. One might think that it is simply an application of multitask learning, which has no shortage of sophisticated techniques. These techniques typically modify the gradients during training, such as gradient surgery <cit.>, gradient vaccine <cit.>, common gradient descent <cit.>, and GradNorm <cit.>. We experiment with these techniques and find that they do not help, thus motivated to develop one that works. Our technical contribution is a new method for multitask learning based on the notion of task sensitivity. Given a loss function J(θ), the sensitivity of the i-th parameter to the loss at θ is defined as the absolute change in the loss when θ_i is set to zero, which can be approximated by a first-order Taylor approximation as J(θ) - J(θ_-i)≈∂ J(θ)/∂θ_i×θ_i where θ_-i is equal to θ except that its i-th element is zero. This quantity has been used in the context of model pruning—as a way of identifying weakly sensitive weights <cit.> and updating them more aggresively <cit.>. In contrast, we use the quantity to identify weights that are strongly sensitive to a particular task and increase their sensitivity even further, intuitively to achieve per-parameter task specialization. To our knowledge, we are the first to use parameter sensitivity for multitasking learning. We briefly differentiate our work from other recent works on multitask retrieval. <cit.> present CorpusBrain, an autoregressive multitask retriever trained in largely the same style as GENRE <cit.> with excellent performance. 
Autoregressive retrieval has different pros and cons compared to dense retrieval, which is our setting; it can be more memory and runtime efficient, but it does not “read” the description of the target and is thus not suited for retrieval tasks that require involved reasoning over query-target pairs (e.g., zero-shot entity retrieval <cit.>). Thus we consider the contribution of CorpusBrain to be at least partially orthogonal to ours. Nevertheless, we show that our model outperforms CorpusBrain in a similar training setting in experiments. <cit.> propose instruction-based retrieval in which the retriever is given an intent as well as a query to find the intended target. While this is a form of multitask retrieval, the problem formulation is different and it is evaluated on its own dataset benchmark. § METHOD We build on the well-established framework of the dual encoder <cit.>. Let 𝒳 denote the set of all queries and 𝒴 the set of all targets (i.e., the KB). First, we assume mappings text_X: 𝒳 → 𝒱^+ and text_Y: 𝒴 → 𝒱^+, where 𝒱 denotes the vocabulary, to “verbalize” queries and targets. Second, we assume encoders enc^θ_X, enc^θ_Y: 𝒱^+ → ℝ^d with parameters θ defining the relevance score function score_θ(x,y) = enc^θ_X(text_X(x))^⊤ enc^θ_Y(text_Y(y)). Third, assuming iid samples (x_1, y_1) … (x_N, y_N) ∼ 𝒟, we learn the parameters by noise contrastive estimation (NCE): min_θ -1/N ∑_i=1^N log [ exp(score_θ(x_i,y_i)) / ∑_y ∈ 𝒴_i exp(score_θ(x_i,y)) ] where 𝒴_i ⊂ 𝒴 satisfying y_i ∈ 𝒴_i is a set containing the gold and negative targets for the i-th labeled example. We pre-encode every y ∈ 𝒴 to v_y = enc^θ_Y(text_Y(y)) at test time and efficiently compute the highest scoring target ŷ(x) = argmax_y ∈ 𝒴 enc^θ_X(text_X(x))^⊤ v_y for any x ∈ 𝒳 by maximum inner product search. In multitask retrieval, there are K retrieval tasks, each with N_k training examples (x^(k)_1,y^(k)_1) … (x^(k)_N_k,y^(k)_N_k) ∼ 𝒟_k drawn iid from the k-th population distribution 𝒟_k. We use the KILT benchmark, which includes K=8 tasks addressing QA, entity linking, fact checking, slot filling, and dialogue.[We write “task” and “dataset” synonymously instead of distinguishing datasets from task types as done in some previous works. Thus KILT has 8 tasks and 5 task types.] The per-task loss is J_k(θ) = -1/N_k ∑_i=1^N_k log [ exp(score_θ(x^(k)_i,y^(k)_i)) / ∑_y ∈ 𝒴^(k)_i exp(score_θ(x^(k)_i,y)) ] defining the final loss J(θ) = ∑_k=1^K (N_k/N) × J_k(θ). Previous work by <cit.> uses the following setting. The KB 𝒴 consists of 100-token disjoint Wikipedia passages. The text mappings text_X, text_Y apply the BERT tokenizer to unmodified queries and passages. The encoders enc^θ_X, enc^θ_Y are initialized with independent pretrained BERT-bases (uncased). The task-specific training datasets are downsampled to be of similar sizes. As in DPR, they train the model using hard negatives based on BM25, followed by one round of hard negative mining from the model (only on Natural Questions and TriviaQA, in which verifying that a candidate negative is indeed incorrect is expedient). We now describe the main sources of improvement that we achieve over the baseline multitask retriever: a better choice of the base model with appropriate prompting, and better optimization. §.§ Base Model We use a shared T5 to parameterize and initialize the query and passage encoders enc^θ = enc^θ_X = enc^θ_Y. Specifically, we follow the ST5-EncDec architecture <cit.>, which encodes any z ∈ 𝒱^+ as enc^θ(z) = T5.generate(z, length=1).state (i.e., we run the T5-encoder on z, run the T5-decoder for 1 step from the special start symbol, and take the resulting hidden state prior to token prediction). 
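A minimal sketch of this first-decoder-step encoding with the HuggingFace transformers library is shown below (one possible realisation for illustration; the exact details of our implementation may differ):

import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def encode(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    # Run the decoder for a single step from the decoder start token (the pad token for T5)
    start = torch.full((batch.input_ids.size(0), 1),
                       model.config.decoder_start_token_id, dtype=torch.long)
    out = model(input_ids=batch.input_ids,
                attention_mask=batch.attention_mask,
                decoder_input_ids=start,
                output_hidden_states=True)
    # Hidden state of the last decoder layer at the single decoding step, prior to token prediction
    return out.decoder_hidden_states[-1][:, 0, :]      # shape: [batch_size, d]

embeddings = encode(["example query text", "example passage text"])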
In addition, we define the text mapping for queries x ∈ 𝒳 in task k as text_X(x) = T5Tokenizer(π_k ⊕ [SEP] ⊕ x) where ⊕ is string concatenation, [SEP] is the special separation token, and π_k is a text prefix that indicates which task x is a query of. We use dataset names as prefixes (e.g., π_1 = “NQ”). The text mapping for passages y ∈ 𝒴 does not use prefixes, that is, text_Y(y) = T5Tokenizer(y). This allows us to pre-encode passage embeddings at test time and retain the efficiency of the single-task dual encoder framework. While simple, this choice is the most crucial component in our approach to improving multitask retrieval. We take a model pretrained for multitasking and adopt the same prefix concatenation scheme for task adaptation, treating multitask retrieval as a continuation of the T5 training. Interestingly, using task markers is reported to be not helpful in <cit.>. This is likely because their base model, BERT, is not pretrained for multitasking. Another difference is that they use task markers to indicate the 5 task types (e.g., “QA”), whereas we use fine-grained markers to indicate the 8 tasks (e.g., “NQ”). While there are previous works that use T5 for dense retrieval <cit.>, we are the first to exploit the multitasking component of T5 pretraining for multitask retrieval. §.§ Adaptive Learning For the k-th task, the linear approximation of J_k(θ) around a ∈ ℝ^d is J_k(θ) ≈ J_k(a) + ∇ J_k(a)^⊤ (θ - a). Let θ^(t) denote the parameter value at the t-th update in gradient-based training. For any i = 1 … d, we define θ^(t)_-i to be equal to θ^(t) except that its i-th element is zero. The approximation of J_k(θ) around a = θ^(t)_-i at θ = θ^(t) is J_k(θ^(t)) ≈ J_k(θ^(t)_-i) + ∇ J_k(θ^(t)_-i)^⊤ (θ^(t) - θ^(t)_-i) = J_k(θ^(t)_-i) + ∂ J_k(θ^(t))/∂θ_i × θ^(t)_i. Rearranging and taking the absolute value, we have s_i,k^(t) = | ∂ J_k(θ^(t))/∂θ_i × θ^(t)_i | ≈ | J_k(θ^(t)) - J_k(θ^(t)_-i) | which is easily computable and can be viewed as measuring how sensitive the i-th parameter is with respect to the k-th task in the t-th iteration of training. We propose to use this quantity, previously used in the model pruning literature <cit.>, to encourage task specialization during training. We define a conditional distribution over K tasks by q(k|θ^(t), t, i) = exp(s̄_i,k^(t) / τ_t) / ∑_k'=1^K exp(s̄_i,k'^(t) / τ_t) where τ_t > 0 is a temperature and s̄_i,k^(t) is an appropriately normalized and amortized estimate of s_i,k^(t) in Eq. (<ref>) (see Section <ref>). Assuming training examples are sampled to roughly balance the size across tasks (i.e., N_k ≈ N_k'), we take the following gradient step for the i-th parameter in the t-th iteration: θ^(t+1)_i = θ^(t)_i - η ∑_k=1^K q(k|θ^(t), t, i) × ∂ J_k(θ^(t))/∂θ_i. Note that this is per-parameter adaptive learning. Each parameter θ_i ∈ ℝ maintains a distribution over K tasks and is updated more aggressively for tasks that θ_i is sensitive to. §.§.§ Sensitivity normalization The d parameters θ^(t) can be of very different magnitudes. To reduce the parameter-wise variance in the sensitivity scores, for task k we divide the scores by the median across all parameters with respect to task k: ŝ_i,k^(t) = s_i,k^(t) / median_j=1 … d(s_j,k^(t)). We use the median instead of the mean to account for the long-tail distribution of task-specific sensitivity scores. We also use momentum to amortize the scores: assuming some β > 0, s̄_i,k^(t) = (1-β) s̄_i,k^(t-1) + ŝ_i,k^(t) where s̄_i,k^(0) = 0. This is the final version of sensitivity that we use in Eq. (<ref>). 
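A simplified single-tensor sketch of the resulting update is given below (β is the momentum factor and τ the temperature from above; for brevity the median is taken per tensor here, whereas the text defines it over all d parameters, and the variable names are ours):

import torch

def adaptive_step(theta, task_grads, sens_ema, lr=1e-3, tau=1.0, beta=0.1, eps=1e-12):
    # theta: a parameter tensor; task_grads: list of the K gradients dJ_k/dtheta for this tensor
    K = len(task_grads)
    for k in range(K):
        s = (task_grads[k] * theta).abs()                    # raw sensitivity s_{i,k}
        s_hat = s / (s.median() + eps)                       # median normalisation
        sens_ema[k] = (1.0 - beta) * sens_ema[k] + s_hat     # momentum amortisation
    q = torch.softmax(torch.stack(sens_ema) / tau, dim=0)    # per-parameter distribution q(k|.)
    grad = sum(q[k] * task_grads[k] for k in range(K))       # sensitivity-weighted task gradients
    return theta - lr * grad, sens_ema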
The algorithm in matrix form is given in Algorithm <ref> (Appendix <ref>). § EXPERIMENTS §.§ Setup Datasets. We follow <cit.> and use eight tasks from KILT <cit.> for training and evaluation. We randomly downsample the training data of the two largest datasets (T-REx and zsRE) to the same order of magnitude as the rest. All the datasets share the same knowledge base of 36 million disjoint 100-token Wikipedia passages preprocessed by <cit.>. The data statistics and other data-related details can be found in Appendix <ref>. Evaluation. We use the page-level R-precision (the suggested main metric in KILT) to measure the retrieval performance. Page-level R-precision is the fraction of the R gold pages captured by the retriever in the top-R candidates. We map the retrieved passages to the their corresponding pages and use official KILT evaluation scripts to evaluate the page-level R-precision. We also report passage-level R-precision proposed by <cit.> on dev sets in Appendix <ref>. We use TREC Eval[<https://trec.nist.gov/trec_eval/>] to evaluate the passage-level R-precision. Model details. We initialize our dual encoder with the official T5-base <cit.> checkpoint. The query encoder and passage encoder share weights. Following the ANCE <cit.> training paradigm, we first warmup our model for 20 epochs with BM25 hard negatives by naive multitask learning with task prefix. Then we train the model for 8 ANCE episodes with the model-mined hard negatives refreshed at the begining of each ANCE episode. We adopt naive multitask learning with task prefix for the first 7 ANCE episodes and apply the adaptive learning introduced in Section <ref> for the last ANCE episode to improve the performance further. We use Adam <cit.> with a linear learning rate decay schedule with warmup proportion 0.1 over 3 epochs for each ANCE iteration. We provide more details and hyperparameters in Appendix <ref>. §.§ Main Results We refer to our model as TACO, which stands for TAsk speCialty Optimization. Table <ref> and Table <ref> show our main results on the KILT validation data and test data respectively. Fewer comparable baselines are available for KILT test data than for KILT validation data. Let avg val denote average validation page-level R-Precision. TACO achieves the best performance on 4 out of 8 tasks for both validation and test data. The performance is either the second best or close to the second best except AIDA, an entity linking dataset favoring autoregressive retrieval models over dense retrieval models <cit.>. TACO outperforms the previous multitask dense retrieval model MT-DPR <cit.> significantly (+7.34% avg val). TACO also achieves better performance compared with current top performing multitask autoregressive retrieval models in comparable setting (finetuned purely on KILT). TACO outperforms BART_mt (+3.33% avg val) with smaller model size (T5-base vs Bart-large). Compared with BART_mt, CorpusBrain_mt employs additional pretraining and yields significant improvement over BART_mt (+2.15% avg val). TACO still outperforms CorpusBrain_mt (+1.18% avg val) with smaller model size and no additional pretraining. We also list various top performing multitask retrieval models for reference but not for comparison because they are not in comparable setting. Both GENRE and CorpusBrain_mt+BLINK are finetuned on a large amount of additional training data besides KILT training data. Specifically, they also use BLINK training data <cit.> for finetuning, which contains 8.9M annotated wikipedia sentences. 
TABi <cit.> uses extra type labels information and leverages knowledge graph that is very effective for retrieval. TACO even rivals these non-comparable models on all the tasks except AIDA. TACO is the only model that outperforms strong task-specific models noticeably. Our task-specific baseline is significantly stronger than the task-specific DPR, likely due to better training paradigm (ANCE) and better model (T5 vs BERT). Task-specific CorpusBrain is even stronger, especially for FEVER and AIDA. Only TACO and CorpusBrain_mt outperform the strong task-specific models. TACO achieves a 2.36% improvement over its task-specific counterpart and a 1.41% improvement over the task-specific CorpusBrain, but CorpusBrain_mt is only slightly better than its task-specific counterpart (+0.23% avg val). §.§ Analysis §.§.§ Ablation Study Table <ref> shows the results of ablation studies on KILT validation data. Model components. We first conduct experiments to understand the impact of individual components of our model. Removing task prefix results in 1.62% R-precision decrease and disabling adaptive learning yields 1.08% R-precision decrease. Removing both task prefix and adaptive learning significantly degrades the performance (-3.13%). This demonstrates that both task prefix and adaptive learning contribute to the effectiveness of TACO. Query variants. We conduct experiments to investigate other query side variants besides task prefix. These variants are not trained with adaptive learning and only change the query input format or model. Leveraging task-specific query encoder yields slightly better performance (70.87% vs 70.61%), but is outperformed by task prefix significantly (70.87% vs 72.66%). The task type marker introduced in <cit.> is not helpful for BERT-based MT-DPR, but we find them effective for our T5-based model. This is likely because T5 is pretrained for multitasking. We conduct experiments to leverage their task type markers for our model. Using task type markers (i.e., 5 symbols indicating the 5 classes of task in KILT) leads to 1.24% R-precision improvement (71.85% vs 70.61%), but is less effective than our fine-grained dataset-level task prefix (71.85% vs 72.66%). Mutltitask learning variants. We compare our adaptive learning method with recent general multitask learning algorithms with our own implementation. PCG <cit.> focuses on mitigating the conflict of gradients from different tasks. It performs on par with the “w/o adaptive” variant (72.47% vs 72.66%), but underperforms TACO which leverages our adaptive learning (72.47% vs 73.74%). This shows that the gradient conflict is not the main bottleneck in our multitask retrieval setting. CGD <cit.> aims to improve multitask learning by encouraging update towards common directions of different tasks, which is opposite to our method that encourages task specialties. It performs much worse than TACO (69.51% vs 73.74% and lags behind the “w/o adaptive” variant significantly (69.51% vs 72.66%). This shows that we should encourage task specialty rather than emphasizing tasks shared part for multitask retrieval. GradNorm <cit.> tries to weight different tasks losses by using the average gradient norm. It performs slightly better than the naive “w/o adaptive” variant (72.47% vs 72.66%). Our adaptive learning method achieves descent improvement over GradNorm (73.74% vs 72.80%). 
Note that our adaptive update is more fine-grained and critically different because we adjust learning rates along both task dimension and parameter dimension compared with GradNorm that only do loss re-weighting. Adaptive learning. We consider variations of the main version of adaptive learning which is applied only in the last ANCE episode. Specifically, we investigate the impact of applying adaptive learning to the last four ANCE episodes using an exponential softmax temperature decay scheduler. This approach yields an average page-level R-precision of 73.47%. In comparison, when adaptive learning is applied only to the last ANCE episode, we achieve an average page-level R-precision of 73.74%. These results suggest that extending adaptive learning to more ANCE episodes does not yield improvement. Additionally, we examine the effectiveness of encouraging task specialization within adaptive learning. For this purpose, we focus on the second ANCE episode and experiment with positive softmax temperature (encouraging task specialty) and negative softmax temperature (discouraging task specialty). Encouraging task specialization results in an average page-level R-precision of 70.53%, while discouraging task specialization leads to an average page-level R-precision of 68.39%. In comparison, the performance of the standard multitask baseline at the second ANCE episode is 69.28%. These results highlight the benefits of encouraging task specialization and the detrimental effect of discouraging task specialization within adaptive learning. Normalizing task sensitivity using the median is preferred over using the mean or not applying any normalization, as different tasks exhibit variations in magnitude while sharing similar distribution shapes (see Figure <ref>). §.§.§ Task Specialization Figure <ref> plots the histograms of task entropy for the learned parameters. The task entropy for each parameter is calculated with the distribution defined in equation <ref>. We first group parameters into two special bins. The first is a “Task Specific” bin that includes parameters whose entropy is smaller than 0.3, which is the entropy of 95% probability on one task and the 5% uniformly on the rest seven. The “Not Activated” bin includes parameters whose sensitivity w.r.t. all tasks is near zero (<1e-8). TACO significantly improves the fraction of task specific parameters to 22%, in comparison with 19% in naive multitask model (w/o prefix w/o adaptive). It also reduces the fraction of not activated parameters, showing optimizing task specialty also better utilizes the model capacity. Figure <ref> plots the kernel density estimated distribution of task-specific sensitivity in TACO and the standard multitask model for four KILT tasks. We drop outliers that deviates significantly from the median to ease visualization. Notably, TACO exhibits a noticeable reduction in the peak on the low sensitivity side for each task compared to the standard multitasking model. This observation suggests that TACO activates a larger number of parameters and enhances their sensitivity towards individual tasks. §.§.§ Additional Benchmark To test the performance of TACO in a different setup other than KILT, we constructed an additional benchmark containing MS-MARCO <cit.>, ZESHEL <cit.>, a document-level version of FEVER from BEIR <cit.>, and Natural Questions from KILT. We chose this combination for a few reasons. 
First, we found that few public datasets outside KILT provide sufficiently large and high-quality training data other than MS-MARCO and ZESHEL. Second, each task now has its own KB to retrieve from, making this a rather different setup from KILT in which all tasks share one KB. We compare task-specific retrievers and multitask retrievers trained by TACO and other methods. Table <ref> shows their recall at 100 on the validation split. We see that multitasking is clearly beneficial for this benchmark. The best performance is obtained by CGD and it is the only multitask optimization method that yields noticeable improvements over the standard multitask model. Given that CGD aims to improve multitask learning by encouraging update towards common directions of different tasks, we hypothesize that the need for task specialization is diminished here because the tasks are more similar in difficulty (e.g., in KILT, T-REx and zsRE are much easier than HotpotQA). This experiment sheds light on what multitask settings most benefit from task specialization. § CONCLUSIONS Multitask retrieval has compelling practical advantages such as model simplicity and memory efficiency, but it lags behind task-specific retrieval in the existing literature. We have shown that it is possible to significantly improve the performance of multitask retrieval by promoting task specialization. The key steps are the use of a base model optimized for multitasking with appropriate prompting and a per-parameter adaptive learning technique that upweights the task gradients by the parameters' sensitivity to the task losses. We have achieved strong results on the KILT retrieval benchmark. acl_natbib § ALGORITHM IN MATRIX FORM Alogrithm <ref> is the matrix form of our adaptive learning algorithm. [t!] Task sensitivity-guided adaptive learning § DATA DETAILS See Table <ref> for data statistics and some data-related hyperparameters. We randomly downsample T-REx and zsRE to bring them to the same order of magnitude as the others. We follow <cit.> and use temperature-scaled mixing sampling strategy to compute batch size for each task k: B_k∝ (N_k/∑_k'=1^K N_k')^1/c for some temperature c (we set it to 4 in our experiments). Here N_k is the dataset size of task k. Note that we compute task loss of each task batch independently instead of mixing all task batches for every optimization step. Each dataset needs to sample different number of batches to cover every training sample in that dataset once. We set the maximum of them as the number of batches that every dataset needs to sample. We shuffle and cycle batch sampling iterators of datasets that finish iterating early. Batch size of each dataset computed by setting mixing temperature c=4 and ∑_k'=1^K N_k'=120 is in Table  <ref>. § OTHER TRAINING DETAILS The data-related hyperparameters, such as maximum input query length and batch size, are listed in Table  <ref>. The training hyperparameters are listed in Table  <ref>. We use NCE loss with cross device in-batch negative mixed with hard negatives to compute each task loss. We sample two hard negatives for each query. We employ a “burn in” period for the first 10% training steps with uniform learning rates for parameters to declare their tendency during adaptive learning. All of our experiments are run on a machine with 8 A100-80GB GPUS. Our implementations are built upon OpenMatch <cit.>. § SOFTMAX TEMPERATURE AND MOMENTUM RATIO Table <ref> shows the impact of softmax temperature on validation R-precision for our adaptive learning. 
Table <ref> shows the impact of momentum factor on validation R-precision for our adaptive learning. § PASSAGE-LEVEL PERFORMANCE Table <ref> shows the passage-level R-precision on KILT validation data. We also list the passage-level performance from <cit.> for comparison.
http://arxiv.org/abs/2307.02986v1
20230706134059
Lecture Notes: Introduction to random unitary circuits and the measurement-induced entanglement phase transition
[ "Brian Skinner" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "cond-mat.dis-nn", "quant-ph" ]
http://arxiv.org/abs/2307.01910v1
20230704203855
An $\mathfrak{sl}_2$ action on link homology of T(2,k) torus links
[ "Felix Roz" ]
math.GT
[ "math.GT", "math.QA" ]
An 𝔰𝔩_2 action on link homology of T(2,k) torus links Felix Roz August 1, 2023 ===================================================== We determine an 𝔰𝔩_2 module structure on the equivariant Khovanov-Rozanksy homology of T(2,k) torus link following the framework in <cit.>. § INTRODUCTION In 1984, Vaughan Jones discovered the Jones polynomial, an invariant of oriented links which lands in [q,q^-1] based on the representation theory of 𝒰_q(𝔰𝔩_2). One advantage of this invariant is that it admits a combinatorial definition called the Kauffman bracket which determines a Laurent polynomial based on skein relations. Later, Khovanov <cit.> defined a categorification of the Jones polynomial: a bigraded homology theory whose Euler characteristic is the Jones polynomial. Khovanov's original construction involved a simple combinatorial definition but had a sign ambiguity when dealing with functoriality with respect to link cobordisms. Many approaches were taken to resolve the functoriality issue for example by Clark, Morrison, Walker <cit.> or more recently by Sano <cit.>. We are especially interested in the approaches that employ foams <cit.> <cit.> <cit.>. Further work by Khovanov and Rozansky <cit.> gave rise to a homology theory which categorifies the 𝒰_q(𝔰𝔩_n) polynomial, but was defined using matrix factorization which made computations difficult compared to the original combinatorial one. These issues were overcome by the work of Robert and Wagner <cit.> who developed an explicit foam evaluation formula which can be used to redefine Khovanov-Rozansky homology in a manner similar to Khovanov's original construction. In order to understand the structure of the homology theory, Khovanov and Rozansky found the structure of an action by the positive half of a Witt algebra on HOMFLYPT homology <cit.>. Following this work, Qi, Robert, Sussan, and Wagner found that the action of the Witt algebra can be extended to an action on the foams which are used in the combinatorial definition <cit.>. When restricted to a copy of 𝔰𝔩_2 within the Witt algebra, it produces an action at the level of link homology <cit.>. In this paper, we determine the explicit structure of the homology of (2,k)-torus links as representations of 𝔰𝔩_2. Sections 2-4 review webs, foams, link homology and the action of 𝔰𝔩_2. Section 5 shows the computation of H^*(T_2,k) in the 𝔰𝔩_2-equivariant setting. Section 6 determines the structure of the representation. §.§ Acknowledgments I am thankful to Joshua Sussan for introducing me to categorification, link homology, and foams and for guiding me through the work behind this paper. § CONVENTIONS Fix a field 𝐤. For any x ∈𝐤, x = 1- x. Define R_n = 𝐤[X_1, ⋯, X_n]^S_n = 𝐤[E_1, ⋯, E_n] where E_i is the ith elementary symmetric polynomial and deg(X_i) = 2 so that deg(E_i) = 2i. Lastly, fix a non-negative integer N which will be the number of colors for links, webs, and foams. We work in the relative homotopy category rather than the usual homotopy category. A relative homotopy equivalence between chain complexes is a homotopy equivalence in the usual sense which has one direction equivariant with respect to 𝔰𝔩_2. Full details are provided in <cit.>. Foams and webs are always read from bottom to top. 
§ WEBS AND FOAMS A web is a finite, oriented, trivalent graph Γ = (V, E) embedded in the plane ^2, together with a thickness function ℓ: E → that satisfies the flow condition: no vertex may be a source or a sink, and the sum of the labels of the incoming edges must equal the sum of the labels of the outgoing edges. With this definition, every vertex of a web must look like one of the following two types: < g r a p h i c s > or < g r a p h i c s > The first is called a “split” and the second a “merge”. The edge with label a+b is called the “thick” edge, while the edges labeled a and b are the “thin” edges. A foam in ^2 × [0,1] is a collection F of compact, oriented surfaces called facets, glued together along their boundaries. Each facet f is given a label ℓ(f) ∈{0, …, N} called its thickness and a decoration P_f ∈ R_ℓ(f). Finally, we require that every point has a closed neighborhood homeomorphic to one of the following types: * A disk, * The cylinder over a merge or a split, denoted Y^(a,b): < g r a p h i c s > * The cone over the 1-skeleton of a tetrahedron, denoted T^(a,b,c): < g r a p h i c s > The set of points of the second type is a collection of curves called the bindings and the points of the third type are called the singular vertices. For each foam F denote the set of facets F^2, the set of bindings F^1, and the set of singular vertices F^0. In the pictures above, the bindings are marked by black lines and the singular vertices are marked by red dots. In the Y model, the thicknesses of the facets agree with the thicknesses of the edges in the merge or split. This means the thicknesses of facets satisfies the flow condition as well, and we can refer to thick and thin facets around a binding. The binding is oriented so that it agrees with the thin facets and is the opposite of the thick facet. The boundary ∂ F of F is the closure of the set of boundary points that are not contained in any bindings. A foam with empty boundary is called a closed foam To simplify diagrams, the decoration _i denotes the power sum function of of every variable on the facet and _i denotes the power sum of every variable not on the facet. For example, if N=3 a _2 on a facet with thickness 2 is an abbreviation for the polynomial x_1^2+x_2^2. The decoration _1 on the same facet denotes the polynomial x_3^1. Next we define the degree of a foam which adapts the Euler characteristic χ to this setting. For f ∈ F^2, set deg_N(f) = ℓ(f)(N - ℓ(f))χ(f), Let F^1_­- denote the collection of bindings diffeomorphic to intervals and F^1_∘ the bindings diffeomorphic to circles. For s ∈ F^1_­- with a neighborhood diffeomorphic to Y^(a,b), set deg_N(s) = ab + (a+b)(N-a-b), For v ∈ F^0 with a neighborhood diffeomorphic to T^(a,b,c), set deg_N(v) = ab + bc +ac + (a+b+c)(N - a - b -c), Finally for any decorated foam F with homogeneous decorations {P_f}_f ∈ F^2 set deg_N(F) = ∑_f ∈ F^2 deg(P_f) - ∑_f ∈ F^2 deg_N(f) + ∑_s ∈ F^1_­- deg_N(s) - ∑_v ∈ F^0 deg_N(v). Every foam in ^2 × [0,1] is isotopic to a composition of the following basic foams: Polynomial: < g r a p h i c s > Associativity: < g r a p h i c s > < g r a p h i c s > Digon cup and cap: < g r a p h i c s > < g r a p h i c s > Zip and Unzip: < g r a p h i c s > < g r a p h i c s > Cup and Cap: < g r a p h i c s > < g r a p h i c s > Saddle: < g r a p h i c s > § ACTION OF 𝔰𝔩_2 ON FOAMS Let 𝔰𝔩_2 be the Lie algebra over 𝐤 generated by symbols 𝐞, 𝐡, 𝐟 with the relations [𝐡,𝐞] = 2𝐞, [𝐡, 𝐟]= -2 𝐟, [𝐞, 𝐟] = 𝐡. 
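Because the degree formula combines several local contributions with alternating signs, a small computational sketch can help keep the bookkeeping straight. The encoding below (facets as triples of thickness, Euler characteristic and decoration degree; interval bindings as pairs (a, b); singular vertices as triples (a, b, c)) is an ad hoc data model introduced only for illustration; circle bindings contribute nothing and are omitted.

def foam_degree(N, facets, interval_bindings, vertices):
    # deg_N(F) = sum deg(P_f) - sum deg_N(f) + sum deg_N(s) - sum deg_N(v)
    # facets: iterable of (thickness l, Euler characteristic chi, decoration degree)
    # interval_bindings: iterable of (a, b) with a Y^(a,b) neighborhood
    # vertices: iterable of (a, b, c) with a T^(a,b,c) neighborhood
    deg_decor = sum(d for (_, _, d) in facets)
    deg_facets = sum(l * (N - l) * chi for (l, chi, _) in facets)
    deg_bindings = sum(a * b + (a + b) * (N - a - b) for (a, b) in interval_bindings)
    deg_vertices = sum(a * b + b * c + a * c + (a + b + c) * (N - a - b - c)
                       for (a, b, c) in vertices)
    return deg_decor - deg_facets + deg_bindings - deg_vertices

For instance, with N = 2 an undotted sphere of thickness 1, foam_degree(2, [(1, 2, 0)], [], []), has degree -2, while placing a single dot (decoration degree 2) on it raises the degree to 0.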
In this section we describe the action of 𝔰𝔩_2 on foams that was defined in <cit.>. All generators act as 0 on traces of isotopies. The action of the generators of 𝔰𝔩_2 is first defined on basic foam, then extended to all foams by the Leibniz rule Let t_1 and t_2 be two fixed elements from R_N. 𝐞 acts as - ∑_i ∂/∂ x_i on polynomials and as 0 on all other basic foams. 𝐡 acts on a polynomial P as -deg(P) · P. 𝐟 acts on a polynomial P as - ∑_i x_i^2 ∂/∂ x_i(P). Otherwise, 𝐡 and 𝐟 act as follows: 𝐡· < g r a p h i c s > = 𝐡· < g r a p h i c s > = 0 𝐡· < g r a p h i c s > = ab(t_1+t_2) < g r a p h i c s > 𝐡· < g r a p h i c s > = ab(t_1+t_2) < g r a p h i c s > 𝐡· < g r a p h i c s > = -ab(t_1+t_2) < g r a p h i c s > 𝐡· < g r a p h i c s > = -ab(t_1+t_2) < g r a p h i c s > 𝐡· < g r a p h i c s > = a(N - a) < g r a p h i c s > 𝐡· < g r a p h i c s > = a(N - a) < g r a p h i c s > 𝐡· < g r a p h i c s > = -a(N - a) < g r a p h i c s > 𝐟· < g r a p h i c s > = 𝐟· < g r a p h i c s > = 0 𝐟· < g r a p h i c s > = -t_1 < g r a p h i c s > -t_2 < g r a p h i c s > 𝐟· < g r a p h i c s > = -t_1 < g r a p h i c s > -t_2 < g r a p h i c s > 𝐟· < g r a p h i c s > = t_1 < g r a p h i c s > + t_2 < g r a p h i c s > 𝐟· < g r a p h i c s > = t_1 < g r a p h i c s > + t_2 < g r a p h i c s > 𝐟· < g r a p h i c s > = -1/2 < g r a p h i c s > -1/2 < g r a p h i c s > 𝐟· < g r a p h i c s > = -1/2 < g r a p h i c s > -1/2 < g r a p h i c s > 𝐟· < g r a p h i c s > = 1/2 < g r a p h i c s > + 1/2 < g r a p h i c s > §.§ Green Dots and Twists In order to ensure that the braiding complexes which are defined in the next section are equivariant with respect to the 𝔰𝔩_2-action, the action needs to be twisted. These twists are recorded by allowing two types of green dots with k-valued multiplicities as new decorations on webs. When a green dot is found in the source or target of a web, the action of 𝔰𝔩_2 is modified by adding an extra term. Note that the sign of the extra term differs depending on whether the green dot is in the source or target of the given foam. The action of 𝐞 ignores all green dots and the action of 𝐡 and 𝐟 is defined below: 𝐡· < g r a p h i c s > = 𝐡· < g r a p h i c s > - λ < g r a p h i c s > , 𝐡· < g r a p h i c s > = 𝐡· < g r a p h i c s > + λ < g r a p h i c s > 𝐡· < g r a p h i c s > = 𝐡· < g r a p h i c s > - λ < g r a p h i c s > , 𝐡· < g r a p h i c s > = 𝐡· < g r a p h i c s > + λ < g r a p h i c s > 𝐟· < g r a p h i c s > = 𝐟· < g r a p h i c s > + λ < g r a p h i c s > , 𝐟· < g r a p h i c s > = 𝐟· < g r a p h i c s > - λ < g r a p h i c s > 𝐟· < g r a p h i c s > = 𝐟· < g r a p h i c s > + λ < g r a p h i c s > , 𝐟· < g r a p h i c s > = 𝐟· < g r a p h i c s > - λ < g r a p h i c s > § DEFINITION OF LINK HOMOLOGY §.§ Foam evaluation The webs and foams above can be naturally assembled into a category. For a foam F embedded in ^2 × [0,1], F_t = F ∩ (^2 ×{t}) is a web. Thus any foam can be regarded as a morphism from F_0 to F_1. Note that this means foams should be read from bottom to top. Let 𝐅𝐨𝐚𝐦 be the additive closure of the category where objects are q-degree shifted webs, morphisms are free R-linear combinations of foams, and composition is given by stacking foams. In [RW20], Robert and Wagner defined a 𝔤𝔩_N-evaluation of a closed decorated foam F as an element ⟨ F ⟩∈ R_N. The exact formula for the evaluation is not important for this paper, but it provides a way to produce a TQFT on 𝐅𝐨𝐚𝐦. 
For any web Γ, Robert-Wagner evaluation lets us define an R_N-bilinear form ⟨· ; ·⟩_N on Hom_𝐅𝐨𝐚𝐦(∅, Γ) by ⟨ F ; G ⟩_N = ⟨G∘ F ⟩_N, where G: Γ→∅ is the mirror image of a foam G: ∅→Γ. For any web Γ, ℱ_N(Γ) = 𝒱_N(Γ) / Ker⟨· ; ·⟩_N is called the state space of Γ, and by the universal construction it extends to a functor ℱ: 𝐅𝐨𝐚𝐦→ R_N-mod. From here on we restrict to the case when N=2 and omit the subscript N when it is clear. For example, R will always refer to R_2. In diagrams of webs and foams, facets or edges of thickness 1 will be colored blue and facets or edges of thickness 2 will be colored red. On blue facets, n dots will represent the polynomial x^n where x is the single variable allowed on that facet. §.§ Braiding Complexes Let A and B denote the right and left crossings respectively: A = < g r a p h i c s > , B = < g r a p h i c s > Link homology is defined by assigning cohomological braiding complexes to the two crossing types: C(A) = < g r a p h i c s > q^-1 < g r a p h i c s > C(B) = q < g r a p h i c s > < g r a p h i c s > In both cases the underlined term is in cohomological degree 0. For an arbitrary link L, we define C(L) to be the hypercube of resolutions constructed by taking the tensor product of braiding complexes corresponding to its component crossings. The cohomology of the total complex of ℱ(C(L)) is the cohomology H(L) of the link. H(L) is an invariant of links and it carries an action of 𝔰𝔩_2. See <cit.>. § COMPUTATION OF T(2,K) HOMOLOGY Fix k ≥ 2 and consider the (2,k)-torus link T_2,k. §.§ Local Relations To streamline the computation, we recall a few local relations on foams following <cit.>: Dot Reduction by Symmetric Coefficients: < g r a p h i c s > = E_1 < g r a p h i c s > - E_2 < g r a p h i c s > Dot Migration: < g r a p h i c s > = E_1 < g r a p h i c s > - < g r a p h i c s > Neck Cutting: < g r a p h i c s > = < g r a p h i c s > + < g r a p h i c s > - E_1 < g r a p h i c s > , < g r a p h i c s > = - < g r a p h i c s > Spheres: < g r a p h i c s > = 0 < g r a p h i c s > = 1 < g r a p h i c s > = -1 Detachments: < g r a p h i c s > = < g r a p h i c s > , < g r a p h i c s > = < g r a p h i c s > §.§ Special Webs and Foams A chain of k many A-crossings will be denoted A_k The following are distinguished green-dotted webs H^n = < g r a p h i c s > , Φ^n = < g r a p h i c s > , I = < g r a p h i c s > . The following are distinguished foams between the above webs ϵ = < g r a p h i c s > ϵ^∙_C = < g r a p h i c s > ϵ^∙_R = < g r a p h i c s > ι = < g r a p h i c s > m_L = < g r a p h i c s > m_R = < g r a p h i c s > h = < g r a p h i c s > h^∙_L = < g r a p h i c s > h^∙_R = < g r a p h i c s > ϕ^∙_R = < g r a p h i c s > ϕ^∙_C = < g r a p h i c s > z = < g r a p h i c s > u = < g r a p h i c s > §.§ Computation We determine the homology of the T(2,k) link by first determining the complex of a link which contains a chain of half twists. This is accomplished by induction, applying the following two lemmas which reduce the part of the complex corresponding to each crossing in the relative homotopy category. 
The following diagram is 𝔰𝔩_2-equivariant, commutative, and has exact columns: [ampersand replacement=&] qH^1&& q^-1H^0&& q^-2 I Φ^0&&q^-1H^0⊕ q^-1H^0&& q^-2I q^-1H^0&& q^-1H^0&&∅["h", from=5-1, to=5-3] [from=5-3, to=5-5] ["[ h; h ]", from=5-3, to=3-3] ["ι"', from=5-1, to=3-1] ["ϵ"', from=3-1, to=1-1] ["[ m_R; m_L ]", from=3-1, to=3-3] ["[ h -h ]"', from=3-3, to=1-3] ["[ z -z ]", from=3-3, to=3-5] [from=5-5, to=3-5] ["h^∙_R - h^∙_L", from=1-1, to=1-3] ["z", from=1-3, to=1-5] ["id", from=3-5, to=1-5] First, we will show equivariance. The foams m_R, m_L and z are all included in braiding complexes and are thus known to be equivariant. The foam h is simply the identity foam and is trivially equivariant. Thus we will only show that ι, ϵ, and h_R^∙ - h_L^∙ are equivariant. By the Leibniz rule for any g ∈𝔰𝔩_2, g · F x = (g · F) x + F (g · x) = F (g · x). Thus a foam F is equivariant if and only if g · F = 0 for all g ∈𝔰𝔩_2. The generator 𝐞 trivially kills these foams so we only check the generators 𝐡 and 𝐟. In the diagrams below we omit green dots which are common to the source and target and put the result of the twisting in the second line. 𝐡· < g r a p h i c s > = (t_1 + t_2) < g r a p h i c s > - t_1 < g r a p h i c s > - t_2 < g r a p h i c s > = 0 𝐟· < g r a p h i c s > = -t_1 < g r a p h i c s > -t_2 < g r a p h i c s > +t_1 < g r a p h i c s > +t_2 < g r a p h i c s > =0 𝐡· < g r a p h i c s > = (t_1 + t_2) < g r a p h i c s > +t_1 < g r a p h i c s > + t_1 < g r a p h i c s > - 1 < g r a p h i c s > -1 < g r a p h i c s > = 0 𝐟· < g r a p h i c s > = -t_1 < g r a p h i c s > -t_2 < g r a p h i c s > -t_1 < g r a p h i c s > -t_2 < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > = - < g r a p h i c s > - < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > = < g r a p h i c s > - E_1 < g r a p h i c s > - < g r a p h i c s > - < g r a p h i c s > + E_1 < g r a p h i c s > + < g r a p h i c s > =0 𝐡·( < g r a p h i c s > - < g r a p h i c s > ) = -2 ( < g r a p h i c s > - < g r a p h i c s > ) + ( < g r a p h i c s > - < g r a p h i c s > ) + ( < g r a p h i c s > - < g r a p h i c s > ) =0 𝐟·( < g r a p h i c s > - < g r a p h i c s > ) = E_1 < g r a p h i c s > - E_1 < g r a p h i c s > - < g r a p h i c s > - < g r a p h i c s > + < g r a p h i c s > + < g r a p h i c s > = E_1 < g r a p h i c s > - E_1 < g r a p h i c s > +E_1 < g r a p h i c s > -E_1 < g r a p h i c s > =0 It is easy to see that the bottom left and top right squares are commutative. The commutativity of the top square follows from the neck cutting relation: < g r a p h i c s > = < g r a p h i c s > + < g r a p h i c s > -E_1 < g r a p h i c s > Lastly, we check exactness of the columns. Since the undotted sphere is equal to 0, the kernel of ϵ consists of the elements of the state space of Φ^0 which have no dots on the inner circle. This is exactly image of ι, so the left column is exact. Since the h foams are identities for H, the middle column is clearly exact. The middle row is C(D_2) and the bottom row is acyclic. Thus this lemma gives a simplification of C(D_2) in the relative homotopy category. 
The following diagram is 𝔰𝔩_2-equivariant, commutative, and has exact columns: [ampersand replacement=&] q^kH^k&& q^k-2H^k-1&&& q^k-4H^k-2 q^k-1Φ^k-1&& q^k-2H^k-1⊕ q^k-2 H^k-1&&& q^k-4H^k-2 q^k-2H^k-1&& q^k-2H^k-1&&&∅["h", from=7-1, to=7-3] [from=7-3, to=7-6] ["[ (-1)^kh; h ]", from=7-3, to=4-3] ["ι"', from=7-1, to=4-1] ["ϵ"', from=4-1, to=1-1] ["[ ϵd̃^0; m_L ]", from=4-1, to=4-3] ["[ h (-1)^k-1h ]"', from=4-3, to=1-3] ["[ d^0 (-1)^k-1d^0 ]", from=4-3, to=4-6] [from=7-6, to=4-6] ["d^-1", from=1-1, to=1-3] ["d^0", from=1-3, to=1-6] ["h", from=4-6, to=1-6] The bottom left square commutes by the bubble and sphere relations and the top left square commutes by the neck cutting relation. The rest is similar to the preceding lemma. The complex C(A_k) associated to the diagram with a chain of k half twists is: 0 q^k-1H^k-1 q^k-3H^k-2⋯ q^-k+3H^1 q^-k+1H^0 q^-k I 0 where d^k-1 = z, d^k-j = h^∙_R - h^∙_L when j is even and d^k-j = h^∙_R + h^∙_L + E_1 h when j≠ 1 is odd. The underlined term is in cohomological degree 0. The proof follows by induction. The base case k=2 is given by <ref>. Suppose that the complex C(A_k) is as above. Then the complex C(A_k+1) is: [ampersand replacement=&] 0 & q^k-2H^k-1& q^k-4H^k-2&⋯& q^-kH^0& q^-k-1I & 0 0 & q^k-1Φ^k-1& q^k-3Φ^k-2&⋯& q^-k+1Φ^0& q^-k H^0 & 0 ["-d^k-1", from=1-5, to=1-6] ["", from=1-6, to=1-7] ["-d^1", from=1-3, to=1-4] ["-d^0", from=1-2, to=1-3] [from=1-1, to=1-2] ["-d^k-2", from=1-4, to=1-5] ["m_L", from=2-2, to=1-2] ["d̃^0"', from=2-2, to=2-3] [from=2-1, to=2-2] ["d̃^1"', from=2-3, to=2-4] ["m_L", from=2-3, to=1-3] ["d̃^k-2"', from=2-4, to=2-5] [""', from=2-6, to=2-7] ["d̃^k-1"', from=2-5, to=2-6] ["m_L", from=2-5, to=1-5] ["z", from=2-6, to=1-6] Where d̃^k-1 = m_R, d̃^k-j = ϕ^∙_R - ϕ^∙_L when j is even and d̃^k-j = ϕ^∙_R + ϕ^∙_L + E_1 ϕ, when j ≠ 1 is odd. These are simply the connected sums of the d^k-j foams with the h foam. Note that by the induction hypothesis, the part of C(A_k) excluding the leftmost term is the complex q^-1C(A_k-1). Thus the portion of the diagram above excluding the leftmost column must be isomorphic to q^-1 C(A_k). Therefore the entire diagram is isomorphic to: [ampersand replacement=&] 0 & q^k-2H^k-1&& q^k-4H^k-2&⋯& q^-kH^0& q^-k-1 I & 0 0 & q^k-1Φ^k-1&& q^k-2H^k-1[from=1-7, to=1-8] ["d^k-1", from=1-6, to=1-7] ["d^k-2", from=1-5, to=1-6] ["d^1", from=1-4, to=1-5] ["(-1)^k-1d^0", from=1-2, to=1-4] [from=1-1, to=1-2] ["m_L", from=2-2, to=1-2] ["ϵd̃^0"', from=2-2, to=2-4] [from=2-1, to=2-2] ["d^0", from=2-4, to=1-4] The final square is the middle row of the diagram in <ref> so in the relative homotopy category, the diagram is isomorphic to: 0 q^kH^k q^k-2H^k-1 q^k-4H^k-2⋯ q^-kH^0 q^-k-1 I 0 After reindexing, this is the desired complex for C(D_k+1). Now we consider the case when the chain D_k is a part of a (2,k)-torus link. 
The web I is replaced by a pair of oppositely oriented circles which we will denote O ⊗ O, and each H is replaced by the theta web: Θ = < g r a p h i c s > Θ^n = < g r a p h i c s > The foam h is replaced with the theta foam, and the foams h^∙_L and h^∙_R are replaced by the same dotted theta foam: θ = < g r a p h i c s > θ^∙ = < g r a p h i c s > Finally, the zip foam z is replaced by a singular pair of pants: p = < g r a p h i c s > Now the differentials collapse so that d^k-1 = p, d^k-j = 0 when j is even, and d^k-j = 2θ^∙ - E_1 θ when j≠ 1 is odd: 0 q^k-1Θ^k-1⋯ q^-k+5Θ^2 q^-k+3Θ^1 q^-k+1Θ^0 q^-kO ⊗ O 0 The state spaces for O and Θ are generated by their dotted and undotted cup foams: θ_ι = < g r a p h i c s > θ^∙_ι = < g r a p h i c s > o_ι = < g r a p h i c s > o^∙_ι = < g r a p h i c s > To determine the action of d^-1 = p on this basis for Θ^0 we first simplify the following foam: < g r a p h i c s > = < g r a p h i c s > + < g r a p h i c s > - E_1 < g r a p h i c s > = < g r a p h i c s > - < g r a p h i c s > = < g r a p h i c s > - < g r a p h i c s > This determines the following action on our basis: θ_ι ↦ o_ι^∙⊗ o_ι - o_ι⊗ o_ι^∙ θ_ι^∙ ↦ o_ι^∙⊗ o_ι^∙ - E_1 o_ι⊗ o^∙_ι + E_2 o_ι⊗ o_ι The kernel is clearly trivial, and the cokernel is freely generated by o_ι⊗ o_ι and o_ι^∙⊗ o_ι which have unshifted q-degree -2 and 0 respectively. Thus H^k(T_2,k) = q^-2-kR ⊕ q^-kR For odd j, d^-j = 2θ^∙ - E_1h acts on the basis for θ^j-1 by: θ_ι ↦ 2θ_ι^∙ - E_1θ_ι θ_ι^∙ ↦ E_1θ_ι^∙ - 2E_2θ_ι If aθ_ι+bθ^∙_ι were in the kernel, then 2a = -E_1b and E_1a = -2E_2b. These relations imply E_1^2b = 4E_2b. Since E_1 and E_2 are independent, b=0 and thus a=0. Thus the kernel is trivial so H^k-j(T_2,k) = 0 for all odd j ≠ 1. The cokernel of d^k-j is generated by θ_ι and θ_ι^∙ with the relations E_1θ_ι = 2θ^∙_ι and 2E_2 θ_ι = E_1 θ^∙_ι. Since we require 2 to be invertible in the base ring, this reduces to a single generator θ_ι with unshifted q-degree -1 modulo the relation (E_1^2 - 4E_2)θ_ι = 0. Recalling the q-shift applied to the term at this cohomology degree: H^k-j+1(T_2,k) = q^-k+2j-2 k[E_1,E_2]/⟨ E_1^2-4E_2 ⟩. As a graded k[E_1,E_2]-module H^k(T_2,k) ≅ q^-k-2 k[E_1,E_2] ⊕ q^-k k[E_1,E_2] H^k-j(T_2,k) ≅ 0 for all odd j ≠ 1 H^k-j(T_2,k) ≅ q^-k+2j-2 k[E_1,E_2]/⟨ E_1^2 + 4E_2 ⟩ for all even 0 < j < k H^0(T_2,k) ≅ q^k-2 k[E_1,E_2] ⊕ q^k k[E_1,E_2] if k is even § ACTION OF 𝔰𝔩_2 We let 𝔰𝔩_2 act on the ring R = k[E_1,E_2] of symmetric polynomials by: 𝐞· E_1 = -2 𝐞· E_2 = -E_1 𝐡· E_1 = -2E_1 𝐡· E_2 = -4E_2 𝐟· E_1 = E_1^2 - 2E_2 𝐟· E_2 = E_1 E_2 This action comes from the action of the Lie algebra 𝔐 generated by symbols (L_n)_n ∈ subject to the relations [L_n, L_m] = (n-m)L_m+n for all n,m ∈. 𝔐 acts on a polynomial Q ∈ k[x_1, ⋯, x_N] by L_n · Q = - ∑_i=1^k x_i^n+1∂ Q/∂ x_i The action of 𝔰𝔩_2 defined on the ring R above comes from the identifications E_1 = X_1 + X_2, E_2 = X_1X_2, 𝐞 = L_-1, 𝐡 = 2L_0, 𝐟 = - L_1 Now we can compute the action of 𝔰𝔩_2 on the generators of cohomology. In what follows we will replace normalize the homology H^*(T_2,k) by replacing it with q^kH^*(T_2,k). In the shifted case, 𝐡 always acts by multiplying by the negative degree of a vector. This has the advantage that the graded dimensions of the homology as a vector space coincide with the dimensions of its weight space decomposition. We analyze the 𝔰𝔩_2 structure one cohomological degree at a time. Cohomology degree k is generated by o_ι⊗ o_ι and o_ι^∙⊗ o_ι which have q-degree -2 and 0 respectively. 
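Before analyzing the weight spaces, it is worth sanity-checking the ring action above. The following sympy sketch (an addition for illustration) realizes L_n as the differential operator -∑_i x_i^{n+1} ∂/∂x_i in two variables and verifies that the identifications 𝐞 = L_{-1}, 𝐡 = 2L_0, 𝐟 = -L_1 reproduce the stated action on E_1 = X_1 + X_2 and E_2 = X_1X_2, and that [𝐞, 𝐟] = 𝐡 holds on a symmetric test polynomial.

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
E1, E2 = x1 + x2, x1 * x2

def L(n, Q):
    # Witt algebra action: L_n . Q = -(x1**(n+1) * dQ/dx1 + x2**(n+1) * dQ/dx2)
    return sp.expand(-(x1**(n + 1) * sp.diff(Q, x1) + x2**(n + 1) * sp.diff(Q, x2)))

e = lambda Q: L(-1, Q)                 # e = L_{-1}
h = lambda Q: sp.expand(2 * L(0, Q))   # h = 2 L_0
f = lambda Q: sp.expand(-L(1, Q))      # f = -L_1

assert e(E1) == -2 and e(E2) == sp.expand(-E1)
assert h(E1) == sp.expand(-2 * E1) and h(E2) == sp.expand(-4 * E2)
assert f(E1) == sp.expand(E1**2 - 2 * E2) and f(E2) == sp.expand(E1 * E2)

# The commutator [e, f] agrees with h on an arbitrary symmetric test polynomial.
Q = E1**3 * E2 + 5 * E2**2
assert sp.expand(e(f(Q)) - f(e(Q))) == sp.expand(h(Q))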
The generators of 𝔰𝔩_2 act as follows: 𝐞(o_ι⊗ o_ι) = 0 𝐞(o_ι^∙⊗ o_ι) = - o_ι⊗ o_ι 𝐡(o_ι⊗ o_ι) = 2 o_ι⊗ o_ι 𝐡(o_ι^∙⊗ o_ι) = 0 𝐟(o_ι⊗ o_ι) = - E_1 o_ι⊗ o_ι 𝐟(o_ι^∙⊗ o_ι) = - E_2 o_ι⊗ o_ι Now we will determine the graded dimension of H^k(T_2,k). An arbitrary polynomial ring k[x] where deg(x) =n, has graded dimension 1 + q^n + q^2n + q^3n + ⋯ = 1/1-q^n Since gdim(E_1) = 2 and gdim(E_2) = 4, gdim(k[E_1, E_2]) = gdim(k[E_1]) · gdim(k[E_2]) = 1/1-q^2·1/1-q^4 Finally, gdim(q^-2 k[E_1,E_2] ⊕ k[E_1,E_2]) = q^-2/(1-q^2)(1-q^4) + 1/(1-q^2)(1-q^4) = q^-21+q^2/(1-q^2)(1-q^4) = q^-21/(1-q^2)^2 = q^-2 + 2q^0 + 3q^2 + 4q^4 + ⋯ Now we can read off dim(A_λ) = - λ/2 + 2 for all even weights λ≤ 2. For any weight λ, the operator 𝐞 maps the weight space A_λ to A_λ+2. Since dim(A_λ) = dim(A_λ+2) + 1, 𝐞 has a non trivial kernel in each A_λ and thus a highest weight vector v_λ in every weight. Let S_λ be the 𝔰𝔩_2-submodule generated by v_λ. For weights λ≤ -2, S_λ is the Verma module M(λ). Recall that an irreducible 𝔰𝔩_2-module with highest weight λ≥ 0 can only be in extension with a module with highest weight -λ - 2. The highest weight of the entire module A is 2, thus for λ≤ 6, none of the submodules isomorphic to M(-λ) can be in extension with any other submodule. However, submodules of weights 2 and be in extension with those of weight -4 and submodules of weight 0 can be in extension with those of weight -2. Therefore we need to examine these weights more carefully. By manipulating q-degrees, we can produce the following bases which will be helpful when searching for vectors with a desired action under 𝔰𝔩_2. A_2 = ⟨ o_ι⊗ o_ι⟩ A_0 = ⟨ E_1 o_ι⊗ o_ι, o_ι^∙⊗ o_ι⟩ A_-2 = ⟨ E_1^2 o_ι⊗ o_ι, E_1 o_ι^∙⊗ o_ι, E_2 o_ι⊗ o_ι⟩ A_-4 = ⟨ E_1E_2 o_ι⊗ o_ι, E_1^3 o_ι⊗ o_ι, E_1^2 o^∙_ι⊗ o_ι, E_2 o^∙_ι⊗ o_ι⟩ To begin, we can find lowest weight vectors in each of the above weights: v_2 = o_ι⊗ o_ι v_0 = E_1 o_ι⊗ o_ι - 2 o_ι^∙⊗ o_ι v_-2 = E_1^2 o_ι⊗ o_ι - 4 E_2 o_ι⊗ o_ι v_-4 = E_1^3 o_ι⊗ o_ι - 4 E_1E_2 o_ι⊗ o_ι - 2 E_1^2 o_ι^∙⊗ o_ι + 8 E_2 o_ι^∙⊗ o_ι Simple computations show that S_2 and S_0 are isomorphic to the finite dimensional simple modules L(2) and L(0). v_2 = o_ι⊗ o_ι 𝐟v_2 = -E_1 o_ι⊗ o_ι 𝐟^2v_2 = 2E_2 o_ι⊗ o_ι 𝐟^3v_2 = 0 v_0 = E_1 o_ι⊗ o_ι - 2 o_ι^∙⊗ o_ι 𝐟v_0 = 0 Therefore the only submodule with highest weight 0, and thus the only submodule that S_-2 can only be in extension with, is S_0 ≅ L(0). Only the action of 𝐞 increases weights, but it kills the vectors in S_0 which have weight -2 = 0-2. Thus S_-2 cannot be in extension with any other submodule. For the same reason, S_-4 cannot be in extension with S_2 ≅ L(2), the only submodule with highest weight 2. However, in weight -2, we also find the vector w_-2 = E_1^2 o_ι⊗ o_ι - 2E_1 o_ι^∙⊗ o_ι with the property 𝐞w_-2 = v_0. Now we must determine the module structure generated by v_0 and w_-2. If we quotient 𝒰(𝔰𝔩_2)⟨ v_0, w_-2⟩ by ⟨ v_0 ⟩≅ S_0 ≅ L(0), then we are left with the Verma module M(-2) giving us a short exact sequence 0 → L(0) →𝒰(𝔰𝔩_2)⟨ v_0, w_-2⟩→ M(-2) → 0 which presents 𝒰(𝔰𝔩_2)⟨ v_0, w_-2⟩ as an extension of M(-2) by L(0). There is a unique such non-trivial extension, M^*(0), so 𝒰(𝔰𝔩_2)⟨ v_0, w_-2⟩ must be isomorphic to it. 
Similarly, we can find a weight -4 vector w_-4 such that 𝐞 w_-4 = 3 𝐟^3 v_2: w = E_1^3 o_ι⊗ o_ι - 4 E_1 E_2 o_ι⊗ o_ι - 4 E_1^2 o_ι^∙⊗ o_ι + 10 E_2 o_ι^∙⊗ o_ι Again we can analyze the structure of the submodule generated by v_2 and w_-4 by a short exact sequence: 0 → L(2) →𝒰(𝔰𝔩_2)⟨ v_2, w_-4⟩→ M(-4) → 0 M^*(2) is the unique such extension between M(-4) and L(2). As an 𝔰𝔩_2-module: H^k(T_2,k) ≅ M^*(2) ⊕ M^*(0) ⊕⊕_r=2^∞ M(-2r) Next, we analyze the structure of H^k-j(T_2,k) for 0 < j < k even. The state space of Θ^j-1 has a single generator θ_ι which has q-degree 2j-2. The action of 𝔰𝔩_2 is as follows: 𝐞(θ_ι ) = 0 𝐡(θ_ι ) = -2(j-1) θ_ι 𝐟(θ_ι ) = E_1(j-1) θ_ι Since j is at least 2, the height weight of these modules are at most -2. Thus as an 𝔰𝔩_2-module H^k-j(T_2,k) ≅ M(-2j+2) Finally, if k is even, then H^0(T_2,k) is the state space of q^2k-1Θ^k-1. As a vector space it is generated by θ_ι and θ_ι^∙ which have q-degrees 2k-2 and 2k respectively. The action is as follows: 𝐞(θ_ι ) = 0 𝐞(θ_ι^∙ ) = -θ_ι 𝐡(θ_ι ) = -2(k-1) θ_ι 𝐡(θ_ι^∙ ) = -2k θ_ι^∙ 𝐟(θ_ι ) = E_1(k-1) θ_ι 𝐟(θ_ι^∙ ) = E_1k θ_ι^∙ - E_2 θ_ι Just as we determined gdim(H^k(T_2,k)) = q^-2/(1-q^2)^2, gdim(H^0(T_2,k)) = q^2k-21/(1-q^2)^2 = q^2k-2 + 2q^2k + 3q^2k+2 + 4q^2k+4 + ⋯ The relation between dimensions of weight spaces are the same as in cohomology degree 0, so we also have a highest weight vector in each weight space. However, the highest weight is -2k+2 which is negative as long as k≥ 2. Thus every highest weight vector generates a Verma module and there are no extensions between them. Fix k ≥ 2 and consider the (2,k)-torus link T_2,k. As an 𝔰𝔩_2-module and forgetting the q-grading H^k(T_2,k) ≅ M^*(2) ⊕ M^*(0) ⊕⊕_r=1^∞ M(-2r) H^k-j(T_2,k) ≅ 0 for all odd j ≠ 1 H^k-j(T_2,k) ≅ M(-2j+2) for all even 0 < j < k H^0(T_2,k) ≅⊕_r=k-1^∞ M(-2r) if k is even When defining the 𝔰𝔩_2 action we needed to fix two parameters t_1 and t_2 from our base field. The homology of all (2,k)-torus links is independent of these parameters, but it is unclear if this is the case in general.
http://arxiv.org/abs/2307.02031v1
20230705052838
Improving Automatic Parallel Training via Balanced Memory Workload Optimization
[ "Yujie Wang", "Youhe Jiang", "Xupeng Miao", "Fangcheng Fu", "Xiaonan Nie", "Bin Cui" ]
cs.LG
[ "cs.LG", "cs.DB", "cs.DC" ]
Improving Automatic Parallel Training via Balanced Memory Workload Optimization Yujie Wang*, Youhe Jiang*, Xupeng Miao, Fangcheng Fu, Xiaonan Nie, Bin Cui Yujie Wang, Fangcheng Fu, Xiaonan Nie and Youhe Jiang are with the Key Lab of High Confidence Software Technologies (MOE), School of CS, Peking University, Beijing 100871, China. E-mail: {alfredwang, ccchengff, xiaonan.nie}@pku.edu.cn, [email protected] Xupeng Miao is with the Computer Science Department of Carnegie Mellon University. E-mail: [email protected] Bin Cui is with the Key Lab of High Confidence Software Technologies (MOE), School of CS, Peking University, Beijing 100871, and Institute of Computational Social Science, Peking University (Qingdao), China. E-mail: [email protected]. August 1, 2023 ================================================================================ [1]Equal contribution. Transformer models have emerged as the leading approach for achieving state-of-the-art performance across various application domains, serving as the foundation for advanced large-scale deep learning (DL) models. However, efficiently training these models across multiple GPUs remains a complex challenge due to the abundance of parallelism options. Existing DL systems either require manual efforts to design distributed training plans or limit parallelism combinations to a constrained search space.
In this paper, we present Galvatron-BMW, a novel system framework that integrates multiple prevalent parallelism dimensions and automatically identifies the most efficient hybrid parallelism strategy. To effectively navigate this vast search space, we employ a decision tree approach for decomposition and pruning based on intuitive insights. We further utilize a dynamic programming search algorithm to derive the optimal plan. Moreover, to improve resource utilization and enhance system efficiency, we propose a bi-objective optimization workflow that focuses on workload balance. Our evaluations on different Transformer models demonstrate the capabilities of Galvatron-BMW in automating distributed training under varying GPU memory constraints. Across all tested scenarios, Galvatron-BMW consistently achieves superior system throughput, surpassing previous approaches that rely on limited parallelism strategies. Transformers, Distributed Learning, Automatic Parallelism § INTRODUCTION Transformer models have achieved great success in a wide range of deep learning (DL) applications in recent years, such as computer vision (CV) <cit.>, natural language processing (NLP) <cit.>, graph learning <cit.> and recommendation systems <cit.>. For example, many Transformer variants (e.g., BERT <cit.>, GPT-2 <cit.>, T5 <cit.>) are leading the state-of-the-art performance in various NLP tasks such as machine translation and question answering. Transformers are also applicable to image recognition (e.g, ViT <cit.>, Swin Transformer <cit.>) and multimodal tasks (e.g, CLIP <cit.>, DALL-E <cit.>). Due to their superior performance, Transformers are becoming increasingly important in modern artificial intelligence industries. Empirical evidence indicates that scaling model parameters is an effective path towards model performance improvements <cit.>. For instance, the original Transformer only has millions of model parameters while GPT-2 has 1.5 billion with superior performance <cit.>. Such large amounts of parameters also incur high computational and memory costs even for emerging accelerators like GPUs. With the increasing model scales, building and designing Transformers demand more system optimizations, and how to perform efficient Transformers training is becoming more challenging. Distributed DL systems adopt data and model parallelism to improve the training efficiency by utilizing multiple GPU devices. Data parallelism divides the large volume of input data into multiple parts and each device is only responsible for partial data <cit.>. It requires each device to store a whole model replica, suffering from large model scales. Model parallelism is a more promising direction that partitions the model from different parallelism dimensions and makes each device store a subset of model parameters, such as tensor parallel <cit.> and pipeline parallel <cit.>. Various choices of the parallelism strategies lead to distinct memory consumption, communication overheads, and execution efficiency. However, directly applying these techniques to scaling Transformers is facing crucial challenges in both system efficiency and usability. Some recent advanced methods have been proposed to automatically find the parallelism strategies through the fine-grained combination of data and model parallelism for individual operators in the model. For example, OptCNN <cit.>, FlexFlow <cit.>, Tofu <cit.>, and TensorOpt <cit.> consider both data and tensor parallelism and use different search algorithms to optimize the execution plans. 
PipeDream <cit.> and DAPPLE <cit.> combine pipeline parallelism with data parallelism to improve the efficiency. Unfortunately, existing approaches only support limited parallelism dimensions (i.e., data parallelism and rare model parallelism dimensions) or rely on strong model and hardware configurations (i.e., expert-designed parallelism strategy) and result in sub-optimal performance in practice. To the best of our knowledge, there are few prior works considering the automatic parallelism for large-scale Transformers with a complex search space including multiple parallelism dimensions. In this approach, we propose , a novel automatic parallel training system for Transformer models over multiple GPUs. Our target is to integrate data parallelism with a variety of model parallelism dimensions, provide a rarely larger search space (compared with previous approaches), and find the optimal hybrid parallelism strategies in an efficient manner. However, such an integration brings an explosive growth of the search space and cannot be directly explored as usual. Therefore, we are interested in the following question: How can we exploit as many parallelism dimensions as possible and efficiently explore the search space in the meanwhile? We study five parallelism paradigms, four of which are popular parallelism paradigms in the distributed training of Transformer models, including data parallelism (DP), sharded data parallelism (SDP) <cit.>, tensor parallelism (TP), and pipeline parallelism (PP). Besides, we also take into account activation checkpointing (CKPT) as a special parallelism dimension, which distributes the training memory workload to the backward computation through checkpoints. These parallelism paradigms have distinct memory consumption and communication overheads and no single paradigm could beat the others on both sides. The search space of automatic parallelism should include the arbitrary combinations of them. Inspired by some key intuitions from our observations and analysis, we first propose a decision-tree structure to decompose the search space and perform pruning to remove the inefficient combinations. To determine the final distributed execution plan, we then propose a dynamic programming search algorithm to utilize the optimal substructure property of this problem. Based on these, we provide , which not only targets automatic parallelism for Transformer model training, but also considers the Balancing trade-off between Memory and computation Workloads across devices. During the search process, provides the required computation and communication costs and memory consumption through a cost estimator. It is worth mentioning that the cost estimation in considers the GPU performance slowdown from computation and communication overlapping, which has been ignored for a long time in previous approaches. We provide an implementation of over PyTorch. Unlike existing toolbox-like systems (e.g., DeepSpeed <cit.>, Megatron <cit.>) relying on users' expertise and significant tuning efforts, 's automatic parallelism only requires a few lines' modifications on the original training script. Our evaluation selects four representative Transformers, including both NLP (i.e., BERT and T5) and CV (i.e., ViT, Swin Transformer). The experiments show that could significantly outperform the four pure parallelisms and existing automatic parallelisms with limited dimensions (i.e., DP+TP and DP+PP) under various device memory budgets. 
We summarize our contributions as follows: 1) We enlarge the explored dimension of automatic parallelism for Transformer training to five parallelism dimensions, and introduce a novel decision-tree abstraction to decompose the large search space. 2) We design a novel parallelism optimization method to automatically find the most efficient hybrid parallelism strategy based on the estimated costs. 3) We consider both memory consumption and computation workload through a bi-objective optimization framework to maximize the hardware utilization during training. 4) We build system that supports larger models' training and achieves up to 530% and 242% throughput speedups compared to state-of-the-art pure and hybrid parallelism methods respectively. Figure <ref> shows the system overview of , which takes the model and hardware environment as inputs, and comprises three primary modules: search space construction (Section <ref>), parallelism optimization framework (Section <ref>), and cost estimator (Section <ref>). In Section <ref>, we introduce the search space construction of with decision-tree-based decomposition. In Section <ref>, we propose our parallelism optimization framework, which leverages dynamic programming search and workload balance adjustment techniques to iteratively refine the optimal parallelism strategy. In Section <ref>, we provide a cost estimator to estimate the execution cost and memory consumption efficiently and accurately. We also provide implementation details in Section <ref> and comprehensive experimental results in Section <ref>. § PRELIMINARY §.§ Transformer Models Transformers are first proposed to solve sequence modeling and transduction problems such as language modeling and machine translation <cit.>. The self-attention and point-wise feed-forward modules are the basic components in each Transformer layer. Most operations are dense algebras like matrix multiplications, resulting in huge computation costs and memory consumption. Transformers in NLP. Different manners of using Transformer layers in NLP incur three mainly Transformer architectures, including encoder-only (for text classification, e.g., BERT and RoBERTa <cit.>), decoder-only (for text generation, e.g., GPT-2 and Transformer-XL <cit.>), and encoder-decoder (for sequence-to-sequence tasks, e.g., T5 and BART <cit.>). They have similar basic model components and some slight differences on the structures. For example, the decoder has an additional self-attention layer compared to the encoder. What's more, the encoder-decoder architecture combines encoders and decoders symmetrically (i.e., the same number of layers) together. These differences bring some distinct system workload characteristics in both computation and memory. Transformers in CV. Transformers are also becoming increasingly attractive in computer vision areas. Vision Transformer (ViT) first replaces the tokens in languages with patches in images and the patches are fed into the encoder for the image classification task. Standard ViTs have a fixed number of patches and the same hidden dimension across different layers. Swin Transformer proposes a multi-stage hierarchical architecture with a shifted window-based attention to encode multi-scale patches. However, such multi-scale architectures also cause uneven computation and memory across layers. §.§ Parallelism in Distributed Training Data parallelism. Data parallelism approaches are widely used to scale up the distributed training for large input datasets. 
It refers to distributing the data samples across multiple workers to compute and synchronize the model updates (e.g., gradients). Each worker should maintain a replica of the model which implies that the model should be fit into the device memory. To alleviate the redundant memory consumption, DeepSpeed ZeRO <cit.> (also named FSDP in FairScale <cit.>) has been proposed to partition the model states instead of replicating them. It is similar to model parallelism but still follows the data parallelism computation process except involving additional communications to share the model states. Model parallelism. Model parallelism divides the model into multiple parts and each worker is only responsible for the computation of the partial model. Due to the complexity of DL model architectures, a variety of model parallelism approaches have been proposed with different model partition techniques. There are mainly two kinds of paradigms commonly used for large-scale Transformers training, including distributed tensor parallelism (TP) and layer-wise pipeline parallelism (PP). For example, Megatron-LM <cit.> uses TP, which partitions the feed-forward and self-attention modules in Transformers to multiple devices and inserts communication operations (e.g., All-Reduce) to guarantee consistent results. GPipe <cit.> first proposes PP, treats each model as a sequence of layers and partitions the model into multiple composite layers across the devices. The workers are organized as a pipeline and transfer intermediate results at the partition boundaries between neighboring partitions. It further splits the mini-batch into smaller micro-batches to reduce the bubbles (i.e., idle time). PipeDream <cit.> and 1F1B-Flush <cit.> (also referred to as PipeDream-Flush) are also popular schedules for PP execution. In , we default to 1F1B-Flush for its advantages of synchronous weight updates and demonstrating the same theoretical bubble rate as GPipe, while being much more memory-efficient. However, 1F1B-Flush causes distinct memory cost across different PP stages, where shallower stages consumes more memory. Such memory workload imbalance can potentially impede system performance, necessitating careful optimization of PP workload balance. Activation checkpointing. Activation checkpointing (CKPT) is a commonly used technique in large-scale model training, which trades off computation for memory overhead. In the standard training procedure, all intermediate activations computed in forward propagation must be stored for gradient computation, which can be quite memory-intensive for large models. To alleviate the memory overheads, activation checkpointing divides the model into segments and only stores the input activations of each segment during forward pass, discarding the other intermediate results. As a trade-off, these discarded results needs to be recomputed during backward propagation. Compared to other parallelisms that distribute the memory burden across physical devices, activation checkpointing distributes the memory burden across time dimension by delaying the memory consumption of most layers' intermediate activations to the backward pass with lower peak memory pressure. Therefore, we treat activation checkpointing as a special parallelism dimension that can be combined with other parallelisms naturally. Automatic parallelism. Recent approaches propose to integrate both data and model parallelism and search for better distributed training strategies. 
For example, FlexFlow, OptCNN, Tofu and TensorOpt consider both tensor parallelism and data parallelism. PipeDream and DAPPLE extend pipeline parallelism and enable data parallelism to replicate each pipeline stage. However, these approaches only explore the combination of data parallelism and at most one single model parallelism dimension. Such limited decision spaces cannot generate efficient enough parallelization plan for many workloads. In fact, industrial companies have taken great efforts to explore better parallelism combinations when training large Transformers on their clusters, such as Turing-NLG <cit.> from Microsoft and GPT-3 <cit.> from OpenAI. These evidences suggest that it is necessary to design an automatic parallelization system covering as many parallelism decisions as possible, without relying on strong system tuning experience from human experts. §.§ Performance of Pipeline Parallelism Compared to other parallelisms such as DP, SDP, and TP, PP has much less communication overhead, as it only needs to transfer the boundary tensors. The system performance of PP depends primarily on pipeline bubbles (i.e., pipeline idle time). Specifically, the main factor impacting pipeline bubbles is the pipeline scheduling plan, which also affects the memory efficiency of pipeline parallelism. Besides, the workload balance of the pipeline also affects its system efficiency, where the model stage with the highest execution cost or the highest memory consumption usually becomes the bottleneck. GPipe, Pipedream, and 1F1B-flush are popular pipeline scheduling plans. To reduce the pipeline bubbles, GPipe divides the minibatch into multiple micro-batches, allowing each device to calculate different micro-batches in parallel. Gpipe periodically flushes the pipeline to ensure synchronous weight updates over all devices, so that all workers use the same version of model parameters. Contrary to the strict optimizer semantic of GPipe, Pipedream allows asynchronous parameter updates, and uses the 1F1B scheduling to reduce the pipeline's bubbles. Besides, Pipedream introduces weight stashing mechanisms to deal with the inconsistency of the weight version, which brings higher memory overhead. 1F1B-flush combines the periodic pipeline flush in GPipe and the 1F1B pipeline scheduling in Pipedream. The former retains the synchronous weight update, and guarantees model convergence performance, while the latter reduces memory consumption. To ensure the model convergence performance, we focus on synchronous pipeline parallelisms, namely GPipe and 1F1B-flush. Given an evenly partitioned pipeline, the theoretical bubble rate of GPipe and 1F1B-flush is P-1/P+m-1, where P is the pipeline depth, i.e., the number of pipeline stages, and m is the number of micro-batches. As for the memory consumption, compared to GPipe, 1F1B-flush typically has a lower memory consumption for its forward activations, which equals to min(P-id+1/m,1) times that of GPipe, where id is the stage ID (1 ≤ id ≤ P). Note that 1F1B scheduling results in distinct memory consumption of activations for different pipeline stages, where deeper stages have lower consumption. For example, for a 4-way 1F1B-flush pipeline with m=8, the first stage stores 4/8 of the activations, while the last stage only need to store 1/8, causing memory workload imbalance. We use 1F1B-flush as our default pipeline in , as it has the same bubble rate as GPipe but is more memory efficient. 
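These two closed-form expressions are easy to tabulate; the following short sketch (for illustration only) reproduces the 4-way, m = 8 example above.

def bubble_rate(P, m):
    # Theoretical bubble rate shared by GPipe and 1F1B-flush for an even partition.
    return (P - 1) / (P + m - 1)

def activation_fraction_1f1b(P, m, stage_id):
    # Fraction of GPipe's forward-activation memory that stage `stage_id`
    # (1 <= stage_id <= P) must keep under 1F1B-flush scheduling.
    return min((P - stage_id + 1) / m, 1.0)

P, m = 4, 8
print(bubble_rate(P, m))                                        # 3/11, about 0.27
print([activation_fraction_1f1b(P, m, s) for s in range(1, P + 1)])
# [0.5, 0.375, 0.25, 0.125]: the first stage keeps 4/8 of its activations while
# the last keeps only 1/8, which is exactly the imbalance discussed next.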
However, the memory workload imbalance caused by 1F1B scheduling may hinder the system performance, providing us with an opportunity to optimize the workload balance. § SEARCH SPACE CONSTRUCTION The goal of is to automatically search within the composite parallelism space and generate the optimal parallelization plan for the given Transformer model and the distributed environment. The key challenge comes from the large search space when considering multiple parallelism strategies and making fine-grained decisions for the model parameters. In this section, we first introduce the search space of and propose our decision-tree-based decomposition to explore the search space more efficiently. §.§ Search Space Analysis We first analyze the overhead of training Transformer models, and take an example environment with two GPUs to better illustrate the large search space, optimization target, and necessary constraints. Then we extend the problem to multi-GPU cases. §.§.§ Overhead Analysis A Transformer model can be treated as a sequence of L layers, and each layer L_i contains a set of model parameters 𝐰_i, along with their corresponding parameter gradients and optimizer states, and the combination of these three is referred to as the model states 𝐦𝐬_i. Due to the back propagation, the forward computation results (i.e., activations) 𝐟_i should be kept inside the device memory. Specifically, the forward activations 𝐟_i include boundary activations 𝐛𝐧𝐝_i, which are the input tensors of each layer, and the intermediate activations 𝐢𝐧𝐭_i, which is the intermediate results inside of each layer, and it depends on the parallelism strategy that which part of 𝐟_i requires stashing. Besides 𝐟_i, the calculation of gradients during back propagation may require extra backward activations 𝐛_i. The problem is to select the optimal parallelism strategy for each layer individually from a large search space, which is a composition of DP, SDP, PP, TP and CKPT. As illustrated in Figure <ref>, all these parallelism strategies could split the computation workloads into multiple physical devices or distribute memory burden across time dimension, but they have distinct memory consumption and communication overheads, finally leading to different system efficiency. The overall computation overhead and communication overhead of the Transformer model are the summation of the layer overhead, while the overall memory consumption E_all is calculated by formula <ref>, where c(·) is the memory cost function. E_all=max_i=1^L{Σ_k=1^i c(𝐟_k) + c(𝐛_i) +Σ_k=1^L c(𝐦𝐬_i)} Specifically, when conducting back propagation of layer i, the peak memory cost is the summation cost of activation 𝐟_1, ..., 𝐟_i and 𝐛_i, as well as model states 𝐦𝐬_1, ..., 𝐦𝐬_L. And E_all takes the maximum of the peak memory cost of each layer. §.§.§ Two-GPU Example. Considering for a single layer L_i in the model, we analyze its costs under different parallelism strategies on two GPUs as follows. Data parallelism. In DP, each GPU has a model replica and half of the input data samples. Since the size of activations is proportional to the number of data samples, each GPU only needs to store half of the forward activations. After the backward computation, the GPUs should synchronize their gradients (i.e., ) before updating the model, which has the same size as model parameters. Sharded Data parallelism. In SDP, each GPU has half of model parameters and half of the input data samples. 
However, it requires two all-gather operations to collect the sharded model parameters for the forward and backward computation and one reduce-scatter to update the gradients. Since an all-reduce operation is equivalent to the combination of one all-gather and one reduce-scatter, the communication cost of SDP is 1.5× that of DP. Pipeline parallelism. In PP, the layer L_i could be placed on either GPU 0 or GPU 1, resulting in two possible memory costs: (1, 0) and (0, 1). The communication cost is mainly determined by whether the neighboring layers are on the same device. In practice, each device may have a sub-sequence of layers and only the activations from the boundary layers should be transferred. The efficiency of PP is also affected by the pipeline bubbles (i.e., idle time), which can be reduced by splitting micro-batches. Besides, the workload balance also affects its system efficiency, where the model stage with the highest execution cost or the highest memory consumption usually becomes the bottleneck. Tensor parallelism. In TP, each GPU also has half of the model parameters. Unlike SDP, TP allows each device to perform the forward computation (e.g., matrix multiplications and self-attention) with half of the model. It requires all-reduce operations to synchronize the activations in both the forward and backward computation. Due to the intermediate synchronization, TP has some additional replications of the activations. Activation checkpointing. Unlike other parallelisms that split computation and memory workload over physical devices, CKPT distributes the memory overhead across the time dimension, and brings no communication cost but extra computation cost. It enables the release of intermediate results 𝐢𝐧𝐭_i during the forward pass and only stores the boundary activations 𝐛𝐧𝐝_i, which cuts down the forward memory consumption, but requires additional forward computation to recompute 𝐢𝐧𝐭_i during backward propagation, and brings backward memory consumption as well. Specifically, for layers applying CKPT, c(𝐟_i)=c(𝐛𝐧𝐝_i) and c(𝐛_i) = c(𝐢𝐧𝐭_i), while for layers not applying CKPT, c(𝐟_i)=c(𝐛𝐧𝐝_i)+c(𝐢𝐧𝐭_i) and c(𝐛_i) = 0. §.§.§ Multi-GPU Extension. When extending to multiple GPUs, the problem becomes more complicated since even a single layer could have a variety of hybrid parallelism strategies integrating multiple parallelism paradigms. For example, for two nodes with 4 GPUs in total, it is easy to integrate 2-way TP within a node and 2-way PP across nodes. Alternatively, using 2-way PP within a node and 2-way DP across nodes is also possible. Besides, CKPT can also be combined with other parallelisms, and may bring additional communication overhead, as it requires an extra forward pass before calculating gradients. For example, when combined with TP, CKPT requires extra all-reduce operations during recomputation. In the 4-GPU example, it is possible to use 2-way TP and 2-way PP without CKPT, and also valid to use 2-way TP and 2-way PP with CKPT, where CKPT trades off memory consumption against time cost. Therefore, CKPT further doubles the size of the search space. Moreover, when scaling to 8 GPUs or even more, there exist hundreds of candidate strategies for a single layer. For a given model, the entire search space is much larger and grows exponentially with the number of layers. §.§ Decision-tree-based Search Space Decomposition Given such a large search space, it is impossible to brute-force search all the combinations of these parallelism paradigms within a feasible time budget.
Therefore, to explore the search space more efficiently, we introduce the following key intuitions from empirical observations or theoretical analysis. Takeaway #1. PP prefers to be applied across device “islands”. Each island is a set of devices with higher-bandwidth interconnects (e.g., NVLink, PCIe) and should be in charge of a stage in the pipeline. Compared to other parallelisms, PP has much less communication overheads especially for large models. Because each stage typically has multiple layers but only requires to communicate the activations from the boundary layers. It is sensible to perform PP partition first across slower inter-island links (e.g., QPI, Ethernet). Takeaway #2. Suppose the devices are homogeneous, these parallelism strategies prefer to divide the devices into groups with equal size. For example, a 2-way DP on 4 GPUs means two 2-GPU groups, rather than a single GPU and one 3-GPU group. Consequently, the optimal hybrid parallelism strategy on one group should be also consistent with those of the other groups. Note that, it could fail for PP since the model partitions may have different computation operations, resulting in different optimal parallelism strategies. Based on the above important intuitions, we design a decision-tree to decompose the search space and represent the candidate hybrid parallelism strategies. We next present the details of constructing the decision-tree. Insights Underpinning Decision-tree. We find that most existing automatic parallelism approaches only involve two parallelism dimensions (e.g., OptCNN and FlexFlow), which is easily to enumerate all possible parallelism configurations for a single layer. After involving pipeline parallelism (e.g., PipeDream), they often partition the model into different stages first and each stage is then assigned to a subset of devices. Such kind of observation suggests us to explore the hierarchical search space by utilizing a decision-tree. Another motivation is that we need the tree structure to capture the orders when applying parallelism even inside a stage. Due to the device topology and hierarchical bandwidth, it is necessary to consider the permutations of hybrid strategies since they may have different communication efficiencies. Decision-tree construction. Given a Transformer model, first applies PP to partition the model into multiple stages. In the meanwhile, the devices are also divided into multiple groups with the same size. As suggested by Takeaway #1, it prefers grouping between devices with higher bandwidth. For an 8-GPU scenario, will attempt 1/2/4/8-way PP respectively. Suppose the model is partitioned evenly by PP, based on Takeaway #2, the size of the corresponding device group should be 8/4/2/1 respectively after applying PP, which directly determines the number of leaf nodes in our decision-trees. As shown in Figure <ref>, given the number of leaf nodes, there might exist multiple possible tree structures. We define the decision-tree construction rules as follows: * Each decision-tree denotes a sub-search-space and its height represents the number of available parallelism paradigms including DP, TP, PP and SDP. * Any one of DP, TP, PP and SDP cannot be applied repeatedly in different levels of a decision-tree. * The degree of non-leaf nodes should be selected from {2,4,8,⋯}. * Each decision-tree can be decided to apply CKPT (S_i^') or not to apply CKPT (S_i). 
With the above rules, the constructed trees could represent the arbitrary combinations of these parallelisms in a non-overlap manner. The guidance from Takeaway #1 and #2 significantly helps to avoid the unnecessary and inefficient parallelism combinations. For a single layer with 8 GPUs, it produces 68 different hybrid parallelism strategies , which reduces the original combinational search space including hundreds of strategies by one order of magnitude. It could be further optimized as follows: Takeaway #3. Using SDP is always better than integrating DP and SDP. We make a comparison with N-way DP, N-way SDP, and the combination of N_1-way DP and N_2-way SDP (N_1× N_2=N). First, SDP always has fewer model parameters than DP+SDP since N_2≤ N. Second, integrating DP and SDP will lead to two rounds of communication including 2(N_1-1)/N_1 for N_1-way DP and 3(N_2-1)/N_2 for N_2-way SDP. Given N_1× N_2=N, we can prove that the minimum value of its cost is still larger than that of pure SDP. Therefore, we exclude such combinations from our search space. After applying Takeaway #3, we could further reduce the number of candidate strategies to 44 for a single layer with 8-GPUs. § PARALLELISM OPTIMIZATION FRAMEWORK The target of is to generate the optimal hybrid parallelism strategy for the input DL model with the given devices. Problem Formulation. We define the optimization problem in as follows. Given model M (with L layers) and N devices (with memory capacity of E), the object is to find the largest throughput Tpt and return the corresponding parallelism strategy, which is made up of the fine-grained layer-level parallelism strategies. According to the decision-tree-based decomposition proposed in section <ref>, PP is first applied to partition the model into multiple stages, and to divide the devices into multiple device groups. Subsequently, the partitioned model stages are assigned to the corresponding device groups, and optimization of other parallelism dimensions is conducted for each model stage. It is evident that pipeline partition plans influence the workload on each device group, thereby affecting the optimization outcome. Therefore, we address the optimization problem in two folds: Question 1: Given an ideally balanced pipeline partition plan, how to conduct parallelism optimization for each model stage and find the optimal hybrid parallelism strategies?(Section <ref>, -Base) Question 2: How to find an optimal pipeline partition plan, which balances the pipeline workload (including both memory and computation) and maximizes the system throughput? (Section <ref>, ) §.§ Basic Parallelism Optimization §.§.§ Basic Optimization Workflow We first propose -Base to solve Question 1 and the optimization algorithm workflow is illustrated in Algorithm <ref>. Basically, the system throughput equals to the ratio between the batch size and the iteration time (i.e., per-batch execution time). Tuning the batch size could lead to distinct memory consumption, computation costs and communication overheads. Scaling the model training with hybrid parallelism strategies could reduce the memory consumption and enlarge the batch size. But it could also bring significant communication overheads. In other words, the highest training throughput does not have to come with the largest batch size. Therefore, in -Base, we gradually increase the explored batch size (line 2) and keep tracking the maximum system throughput until exceeding the device memory for all possible parallelism strategies (lines 11-15). 
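The counts given in the previous subsection (68 hybrid strategies per layer on 8 GPUs, reduced to 44 by Takeaway #3) can be reproduced with a short enumeration that follows the decision-tree construction rules. This is our own illustrative reconstruction rather than code from the system, and it abstracts away concrete device placement.

from itertools import permutations

def factorizations(n):
    # Ordered factorizations of n into power-of-two factors >= 2 (the node degrees of one tree).
    if n == 1:
        yield ()
        return
    f = 2
    while f <= n:
        if n % f == 0:
            for tail in factorizations(n // f):
                yield (f,) + tail
        f *= 2

def stage_strategies(group_size, apply_takeaway_3):
    # One tree level per parallelism paradigm; DP/TP/SDP are each used at most once,
    # and every tree is counted once with and once without activation checkpointing.
    out = set()
    for degrees in factorizations(group_size):
        for dims in permutations(("dp", "tp", "sdp"), len(degrees)):
            if apply_takeaway_3 and {"dp", "sdp"} <= set(dims):
                continue  # Takeaway #3: never mix DP and SDP in one strategy
            for ckpt in (False, True):
                out.add((tuple(zip(dims, degrees)), ckpt))
    return out

# 8 GPUs: PP degrees 1/2/4/8 leave device groups of size 8/4/2/1 for a single layer.
for rule3 in (False, True):
    total = sum(len(stage_strategies(g, rule3)) for g in (8, 4, 2, 1))
    print("with Takeaway #3:" if rule3 else "without Takeaway #3:", total)
# prints 68 without Takeaway #3 and 44 with it, matching the counts above.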
Given a candidate batch size B, -Base then utilizes Takeaway #1 to apply PP first. We assume the total number of devices N is a power of two (e.g., 4, 8, 16), which is common in dedicated GPU training clusters, so we only explore power-of-two PP degrees (line 4). By default, for each unique PP degree P, we assume its pipeline partition plan is given (line 5) and represented as an array 𝐩, where each item 𝐩_i denotes the number of model layers for the i-th pipeline stage. For example, 𝐩=[12,12] indicates that a 24-layer model is partitioned into two stages with 12 layers each. Note that the devices are evenly divided into P groups and the model is also partitioned into P stages (line 6), guided by several load balancing factors (e.g., the number of layers/parameters, the maximum memory usage, and the execution time). We defer further discussion to Section <ref>. Then we conduct the function to optimize parallel strategies (line 7). In (line 17), we first initialize the micro-batch number m (line 18) and calculate the micro-batch size B_m. For each model stage, we construct the decision tree that represents the candidate hybrid parallelism strategies composed of DP, SDP, TP, and CKPT. After obtaining the strategy set S, we further determine the parallelization plan for each layer in M_i under the limited device memory budget E with the dynamic programming search algorithm (Section <ref>). The search results are used to calculate the whole pipeline execution time (line 27) based on the cost model in Section <ref>. §.§.§ Dynamic Programming Search For a given model stage including L layers, we let the function C(L, E) represent the total execution time of these L layers under the device memory budget E. When applying any parallelism strategy S_j∈ S, we define c(L, S_j) to denote the execution time, O_f(L, S_j) and O_b(L, S_j) to represent the memory consumption of forward activations 𝐟_L and backward activations 𝐛_L, and O_ms(L, S_j) to denote the memory consumption of model states 𝐦𝐬_L. The overall memory consumption of L layers E_all(L) is calculated by Eq. <ref>. We also calculate the total forward memory consumption of L layers E_f(L) by Eq. <ref>, which includes the consumption of forward activations and model states. It is obvious that E_f(L) ≤ E_all(L). E_all(L) = max_i=1^L{ Σ_k=1^i O_f(k, S_j_k) + O_b(i, S_j_i) + Σ_k=1^L O_ms(k, S_j_k) } E_f(L) = Σ_i=1^L{ O_f(i, S_j_i) + O_ms(i, S_j_i) } We aim to optimize C(L, E) given the memory constraint E_all(L) ≤ E using dynamic programming search. However, we find that due to the maximum operation in Eq. <ref>, two memory states need to be stored during state transition, which leads to a quadratic complexity in terms of the memory constraint E (refer to Appendix <ref> for details). This is unacceptable in practice. To ensure linear complexity with respect to memory, we decouple the forward memory E_f(L) from E_all(L), and first optimize C(L, E_fwd) with the forward memory constraint E_f(L) ≤ E_fwd, where E_fwd≤ E, and finally check the validity of the overall memory, i.e., E_all(L)≤ E. Firstly, we discuss how to optimize C(L, E_fwd) with constraint E_f(L) ≤ E_fwd using dynamic programming.
Since the problem follows the optimal substructure property (refer to Appendix <ref> for a detailed proof), we start with C(0, ·)=0 and C(·, 0)=∞ and derive the following state transition formula: C(L, E_fwd) = min_S_j∈ S{C(L-1, E_fwd-O_f(L, S_j)-O_ms(L, S_j)) + c(L, S_j) + R(L, S_i, S_j) } where R(L, S_i, S_j) is the transformation cost between the L-th layer applying S_j and its former layer applying S_i. If two neighboring layers have different parallelism strategies, the former layer's output should be transformed to the required data layout to facilitate the next layer's parallelism. For example, if the former layer uses the combination of 2-way DP and 2-way TP and the current layer attempts to use 4-way DP, a transformation step is necessary to prepare the full model replica and the 1/4 forward activation at each device for the current layer. During the state transition process, if the forward memory usage exceeds the budget E_fwd, the cost function C should return infinity. Secondly, based on the state transition formula in Eq. <ref>, we introduce our overall dynamic programming algorithm (the detailed algorithm is given in Appendix <ref>), which ensures E_all(L) ≤ E. Given a device memory budget E, we use E_fwd (E_fwd≤ E) as the forward memory budget, and the remaining E-E_fwd is reserved for the backward peak memory E_b(L)=E_all(L)-E_f(L). To maximize memory utilization, we gradually increase and traverse E_fwd to optimize C(L, E_fwd) using Eq. <ref>, and then check the validity of the overall memory with the searched strategies, i.e., E_all(L)≤ E. Finally, we find the largest forward memory budget E_fwd^opt whose searched strategies satisfy E_all(L)≤ E, and the optimized execution time C(L, E_fwd^opt) is the final output. §.§.§ Complexity Analysis The proposed dynamic programming search formula in Eq. <ref> has a computation complexity of 𝒪(LE|S|). As we can see, the size of a single layer's decision space is crucial to the overall complexity, and our proposed decision tree significantly reduces this space and makes the search feasible. The number of layers L and the memory budget E also affect the complexity. For extreme cases with thousands of layers or huge memory capacity, we can further reduce the complexity by taking coarse-grained explorations, e.g., fusing multiple layers or using a larger memory granularity. §.§ Bi-objective Optimization of Workload Balance We assume an ideally balanced pipeline partition in -Base, which may not hold in practice. In fact, perfect workload balance can be difficult to achieve for several reasons. For example, the 1F1B-flush pipeline scheduling brings distinct memory consumption for different model stages (Section <ref>). Besides, some model structures (e.g., encoder-decoder) are naturally heterogeneous and can hardly be evenly partitioned based on any single empirical guideline. For instance, the decoder usually has a much shorter sequence length than the encoder, leading to workload imbalance in both memory and execution time. Therefore, to address Question 2, we further propose -BMW that considers the Balancing trade-off between Memory Workload and computation workload.
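A minimal sketch of the layer-wise dynamic program described in the previous subsection is given below. It implements the state transition with memory discretized into fixed-size units, omits the inter-layer transformation cost R for brevity, and ends with the overall-memory check E_all(L) ≤ E; all names and data layouts are placeholders of our own, not the system's actual code.

import math

def dp_min_time(layers, fwd_budget, unit):
    # layers[i]: dict strategy -> (time, O_f, O_b, O_ms); memory values are multiples of `unit`.
    # Returns (C(L, E_fwd), chosen strategies) under the forward-memory constraint E_f(L) <= E_fwd.
    B = fwd_budget // unit
    cost = [0.0] * (B + 1)                       # C(0, .) = 0
    choice = [[None] * (B + 1) for _ in layers]
    for l, opts in enumerate(layers):
        new = [math.inf] * (B + 1)
        for e in range(B + 1):
            for s, (t, o_f, _o_b, o_ms) in opts.items():
                need = (o_f + o_ms) // unit
                if need <= e and cost[e - need] + t < new[e]:
                    new[e], choice[l][e] = cost[e - need] + t, s
        cost = new
    strategies, e = [], B                        # recover one strategy per layer
    for l in range(len(layers) - 1, -1, -1):
        s = choice[l][e]
        if s is None:
            return math.inf, None                # no feasible plan within fwd_budget
        strategies.append(s)
        e -= (layers[l][s][1] + layers[l][s][3]) // unit
    return cost[B], strategies[::-1]

def e_all(layers, strategies):
    # Overall peak memory E_all(L): forward activations accumulate, the backward activation
    # of the current layer is added, and all model states are always resident.
    ms_total = sum(layers[l][s][3] for l, s in enumerate(strategies))
    peak = run_f = 0
    for l, s in enumerate(strategies):
        _t, o_f, o_b, _ms = layers[l][s]
        run_f += o_f
        peak = max(peak, run_f + o_b + ms_total)
    return peak

An outer loop can then traverse increasing values of fwd_budget and keep the largest one whose returned plan passes the e_all check against the full budget E, mirroring the procedure described above and detailed in the appendix.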
§.§.§ Pipeline Workload Balance When training a model M in PP (whether using GPipe or 1F1B-flush) with mini-batch size B and memory budget E for each worker, the overall time cost C(M,B) and peak memory cost O(M,B) can be represented as follows: C(M,B) = (m-1) · max_i=1^P C(M_i,B_m) + ∑_i=1^P C(M_i,B_m), O(M,B) = max_i=1^P O(M_i,B_m) ≤ E where C(M_i,B_m) and O(M_i,B_m) denote the execution time and the memory consumption of the i^th model stage M_i under micro-batch size B_m (B_m = B / m). As can be seen, the overall pipeline execution time largely depends on the slowest worker, and the memory bottleneck lies in the worker with the heaviest memory workload. To quantify the workload imbalance, we define the following time and memory balance degrees: α_t = 1 - max_i=1^P C(M_i,B_m) / ∑_i=1^P C(M_i,B_m), α_m = 1 - max_i=1^P O(M_i,B_m) / ∑_i=1^P O(M_i,B_m), which satisfy 0 ≤α_t, α_m ≤ 1-1/P. It is easy to observe that a larger balance degree implies a lower total execution time or peak memory usage. In particular, α_t=1-1/P or α_m=1-1/P indicates a perfect workload balance of time or memory. Figure <ref> shows two examples of 4-way 1F1B-flush PP, implementing a memory-balanced and a time-balanced partition. The time-balanced pipeline exhibits similar execution costs across all stages, resulting in a quite balanced α_t, which is close to 1-1/4=0.75 for both models. Nonetheless, its memory consumption is seriously imbalanced due to the influence of 1F1B scheduling, leading to a relatively low α_m, with deeper stages consuming considerably less memory than shallower ones. In contrast, the memory-balanced pipeline demonstrates nearly uniform memory consumption across all stages, resulting in a substantial α_m. But it allocates more layers to deeper stages for memory balance, increasing their execution costs and reducing α_t. Regarding system efficiency, a greater α_t corresponds to increased throughput for time-balanced pipelines. However, under a limited memory budget (e.g., 16GB), some stages of time-balanced pipelines are prone to memory exhaustion, making them infeasible. Conversely, while the memory-balanced pipelines yield lower system throughput, their higher α_m ensures balanced memory budget utilization, thereby facilitating the training process. Hence, our objective is to devise a pipeline partition plan that optimizes both α_m and α_t, to ensure workload balance and enhance system efficiency. Based on these two balance degrees, we can further define 𝐩_t and 𝐩_m as the extreme time-balanced and memory-balanced partition plans, satisfying α_t(𝐩_t)≥α_t(𝐩) and α_m(𝐩_m)≥α_m(𝐩) for any partition plan 𝐩. We can infer that the optimal partition 𝐩^* should guarantee: α_t(𝐩_m) ≤ α_t(𝐩^*) ≤α_t(𝐩_t), α_m(𝐩_t) ≤ α_m(𝐩^*) ≤α_m(𝐩_m). This is because, for any partition 𝐩, if α_t(𝐩) < α_t(𝐩_m), we would rather choose the memory-balanced partition 𝐩_m as it has both better memory and better time balance, and if α_m(𝐩) < α_m(𝐩_t), we prefer 𝐩_t for the same reason. Taking the optimal partition in Figure <ref> as an example, it effectively mitigates the memory exhaustion observed in time-balanced pipelines (i.e., with intermediate values of α_t and α_m) and simultaneously boosts the throughput of memory-balanced pipelines by as much as 19.2%. However, 𝐩^* is unlikely to be found by optimizing these two correlated balance degrees separately. §.§.§ Bi-objective Optimization Workflow To achieve balanced workloads across pipeline stages and maximize the system performance, we propose the bi-objective optimization workflow of in Algorithm <ref>.
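The quantities defined above are straightforward to compute for a given partition. The short sketch below, with names of our own choosing, evaluates the pipeline time and memory costs and the two balance degrees from per-stage estimates.

def pipeline_costs(stage_times, stage_mems, m):
    # C(M,B) = (m-1) * max_i C(M_i, B_m) + sum_i C(M_i, B_m);  O(M,B) = max_i O(M_i, B_m).
    return (m - 1) * max(stage_times) + sum(stage_times), max(stage_mems)

def balance_degrees(stage_times, stage_mems):
    # alpha_t = 1 - max_i C(M_i) / sum_i C(M_i); alpha_m is defined analogously for memory.
    return (1 - max(stage_times) / sum(stage_times),
            1 - max(stage_mems) / sum(stage_mems))

# A 4-stage pipeline that is perfectly time-balanced but memory-imbalanced (cf. 1F1B-flush):
print(balance_degrees([10, 10, 10, 10], [8, 6, 4, 2]))   # (0.75, 0.6)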
It basically follows the workflow of Algorithm <ref> and reuses the function. The key difference is that we adjust the pipeline partition iteratively to optimize the workload balance simultaneously. For each iteration, we start from a pipeline partition 𝐩 (line 10) that is initialized (line 7) by the memory-balanced partition 𝐩_m. We then conduct to optimize the parallel strategies 𝒮 based on 𝐩. Since the update of 𝒮 might change the pipeline workload balance, we need to adjust the pipeline partition 𝐩 correspondingly. Here we provide a heuristic partition adjustment method that greedily cuts down the workload of the slowest pipeline stage and adjusts the pipeline from memory-balanced towards time-balanced. In particular, given an input partition plan 𝐩, we first find the model stage with the maximum time cost C_max. Then we move a boundary layer of that stage to an adjacent stage and obtain a new partition 𝐩'. The validation function (line 14) prevents the opportunistic adjustment from being harmful to the overall system efficiency by adding limitations on 𝐩', including 1) the time costs of the model stages should be no more than the previous maximum cost C_max, 2) the memory costs of the model stages should not exceed the memory budget, and 3) the memory costs of the model stages should not surpass the maximum stage memory cost under partition 𝐩_t. If all criteria are met, it can be demonstrated that 𝐩' satisfies condition <ref> and α_t(𝐩') ≥α_t(𝐩), indicating that 𝐩' is superior to the prior partition 𝐩 in time balance (refer to Appendix <ref> for more details). Then partition 𝐩' is considered a feasible intermediate partition and is pushed to queue Q for subsequent search iterations. § COST ESTIMATOR provides a cost estimator to estimate the computation and communication costs and memory consumption during the optimization process. Current methods primarily use profiling or simulation for estimation. In , we take advantage of both and design a cost model to make the estimations cheap, efficient and accurate. Specifically, for the memory consumption, we use the shape of a tensor and its data type to calculate its memory. For the computation time, we estimate it as the product of the batch size and the per-sample computation time; the latter can be measured by profiling the real layer execution time on a single device. Note that Transformers are mainly composed of matrix multiplication operations, so the backward computation usually takes about twice the time of the forward computation. For the communication time, we can approximate it as the size of the tensor to be transferred divided by the inter-device connection's bandwidth. With the above computation and communication cost estimations, c(l,s) (i.e., the cost of a given layer l using a specific parallelism strategy s∈ S) can be calculated by simulating the execution process. It consists of two steps, i.e., forward and backward computation. The simulation of the forward computation is simple and directly sums up the computation and communication costs (i.e., all-gather in SDP and all-reduce in TP). However, during the backward process, DP and SDP enable the overlapping of computation and communication, which may bring estimation errors. A typical choice is to take the maximum of the computation and communication costs (e.g., PipeDream <cit.>). Existing automatic parallelism approaches barely notice that modern GPUs executing compute kernels and communication primitives (e.g., NCCL <cit.>) simultaneously suffer a slowdown on both sides.
The performance degradation mainly comes from the resource contention of thread warps in GPU streaming multiprocessors. We find that such contention can slow down the computation and communication by 1.3× in our evaluations, which is consistent with some recent observations <cit.>. By considering the overlapping slowdown, makes more accurate estimations and better optimizations. Besides, for layers applying CKPT, we add an extra forward propagation cost to c(l, s) to simulate the recomputation overhead. By summing up the cost c(l,s) over the layers, we can calculate the cost of pipeline model stage M_i, C(M_i,B_m), where B_m is the micro-batch size. We then estimate the overall time cost of the model as follows: C(M,B,m) = (m-1) · max_i=1^P (C(M_i,B_m)-C(M_i,0)) + ∑_i=1^P C(M_i,B_m) The workload balance is considered in Eq. <ref>, and we also account for the different execution times of the last micro-batch and the others. Due to gradient accumulation, data parallelism requires gradient synchronization only during back propagation of the last micro-batch, causing a longer execution time for the last micro-batch than for the previous ones. Therefore, we use C(M_i,B_m) to simulate the cost of the last micro-batch, which contains the gradient communication cost, and use C(M_i,B_m)-C(M_i,0) to simulate that of the previous ones, where C(M_i,0) represents the pure cost of gradient communication and is removed from the total cost. In conclusion, our cost model is both efficient and accurate, which allows us to optimize the pipeline efficiency and promote the system performance. § IMPLEMENTATION is an automatic parallel training framework especially for Transformer models (open sourced at <cit.>), as a part of a novel distributed DL system Hetu <cit.>. We provide a simple and efficient interface to users by making a few lines of modifications to the PyTorch training programs <cit.>. Communication group. We implement all communication primitives with PyTorch NCCL functions. As supports complex hybrid parallelism strategies, there could exist many communication groups among the GPUs in the generated parallelization plan. To avoid the expensive NCCL group construction overheads, maintains a global communication group pool which is created in advance and contains all groups that might be used. Transformation optimization. We propose an efficient Slice-Gather step to perform the transformations automatically between two neighboring layers with different parallelism strategies. Given the previous layer with strategy A and the current layer with strategy B, the main idea of Slice-Gather is to ensure the input activations for the current layer are placed on the devices according to the requirements of strategy B, which has been extensively studied <cit.>. There exist some special cases in which the Slice-Gather step brings no communication cost (e.g., strategy A is 4-way TP and strategy B is 4-way DP). will automatically recognize such cases and finish the transformation without any overheads. § EXPERIMENTS §.§ Experimental Setups In this section, we compare with 4 pure distributed parallelism strategies implemented by state-of-the-art systems, including PyTorch DDP <cit.> for DP, Megatron <cit.> for TP, PyTorch GPipe <cit.> for PP, and FairScale FSDP <cit.> (similar to DeepSpeed ZeRO Stage-3 <cit.>) for SDP. To help better understand the benefits of each technique in , we also provide several auxiliary baselines for further comparisons. Apart from -Base, we use to represent a variant of -Base that disables CKPT.
Based on that, we further create (DP+TP) and (DP+PP) to verify the training efficiency of previous automatic parallelism approaches with limited parallelism dimensions (i.e., DP+TP and DP+PP). (1F1B+Bi-obj) is based on but enables bi-objective optimization (in other words, it can be treated as but disables CKPT). Specially, DeepSpeed 3D is an expert-designed baseline <cit.> integrating DP, TP, and PP globally. We select NLP models BERT, T5 as well as CV models ViT, Swin Transformer as our experimental models. The statistics of models are listed in Table <ref>. We select the Adam optimizer and use the English Wikipedia and ImageNet-1K as input datasets for them respectively. To further verify the performance of , we also select an imbalanced model for some experiments, T5-512/4, a variant of T5 model for question answering task SQuAD, where the sequence length is 512 for encoders and 4 for decoders. Due to the shorter sequence length, the decoders have much less memory consumption than the encoder, leading to the memory imbalance. Most experiments are evaluated on a single node equipped with 8 Nvidia RTX TITAN 24 GB GPUs using PCIe 3.0. For PP, we manually tune the number of micro-batches to minimize the bubbles and estimate its costs. All results are averaged over 100 iterations. §.§ End-to-End Comparison <ref> shows the overall system throughput results of different models under different strategies with different memory constraints, along with the corresponding batch size. As we can see, under different model scales and memory budgets, always outperforms all baselines in multiple regards. For instance, on ViT, promotes the overall system throughput by up to 493% compared with pure parallelism strategies, and achieves a maximum of 173% acceleration compared with hybrid strategies, including automatic parallelism with limited dimensions and expert-designed 3D parallelism. Similarly, on the other three models, achieves a maximum of 407%-530% and 150%-242% compared with single and hybrid strategies respectively. Then, we look carefully into the effectiveness of each technique in . In comparison to , we observe that by integrating CKPT into search space, -Base amplifies the overall system throughput by up to an impressive 109%. Such enhancement is attributed to CKPT's memory efficiency, which facilitates -Base to achieve larger training batch size (e.g., a batch size of 160 for BERT-Huge-32 under 20 GB memory constraint), thereby optimizing throughput. Furthermore, (1F1B+Bi-obj), which enables bi-objective optimization and strikes a balance between memory and computational workload, bolsters the system throughput by up to 44% compared to . Benefiting from these two techniques, manages to achieve state-of-the-art performance across all models. We can also find that different models may have different preferences on the parallelism strategies. For example, under different memory budgets, BERT almost always prefers DP+PP among all baselines. Similar observations could be also found on some cases of T5. For ViT and Swin, the preferences change to SDP when increasing the memory budgets. The reason mainly comes from that NLP models have larger activation while CV models have larger model parameters, thus the latter could benefit more from sharding the model parameters across the GPUs. Here DeepSpeed 3D uses an officially suggested strategy <cit.> combining 2-way DP/TP/PP together. Such a fixed strategy outperforms three pure parallelisms but fails to beat SDP in most cases. 
Another interesting finding is that the hybrid parallelisms like DP+TP and DP+PP may perform worse than pure SDP (e.g., ViT-Huge-32 with 8G, Swin-Huge-32 with 16G). It further indicates that existing automatic parallelism approaches focusing on limited model parallelism dimensions are suffering from these limitations. §.§ Estimation Performance <ref> demonstrates the cost estimation errors with and without considering the overlapping slowdown. It can be observed that our estimation results are very close to the real execution costs for all experimental models. The average prediction error is less than 5%. However, when ignoring the slowdown, the estimations become obviously lower, resulting in an average prediction error of more than 15%, which compromises the efficiency of the generated execution strategy. §.§ Optimization Efficiency The efficiency of our dynamic programming search algorithm varies according to different number of model layers, overall strategies and memory constraints. As shown in <ref> (a), when the number of model layers and memory limit increase linearly, the search time of our algorithm increases linearly as excepted, only hundreds of seconds are required to generate the optimal execution plan, which is acceptable and negligible relative to the extremely long model training time. <ref> (b) demonstrates the impact of total parallelism dimensions on the search time, both DP+TP and DP+PP have a total of 4 alternate strategies on 8 GPUs, while and has 44 and 22 overall candidates. In this case, the search time of DP+TP and DP+PP is consistent and much less than that of and . §.§ Scalability Study We conduct further comparisons on large clusters. We first extend our experiments to 16 Nvidia RTX TITAN GPUs over two servers connected by 100 Gb InfiniBand network (referred to as low-performance cluster), as well as 16 Nvidia A100 GPUs with NVLink over two servers connected by 100Gb InfiniBand network (referred to as high-performance cluster). Table <ref> illustrates the results on BERT, ViT and T5-512/4 models. Not surprisingly, achieves the best performance with different memory budgets on both two clusters. On low-performance cluster, compared with the results on 8 GPUs, and the hybrid parallelism methods could obtain more than 2× speedups for many cases. For example, enlarges the batch size from 520 to 960 for ViT-Huge-32 under 16 GPUs with 16 GB memory, and the throughput increases from 89.22 to 179.89 samples per second. The 2.02× speedup comes from the flexible fine-grained layer-level parallelism strategy, which helps to reduce the communication costs and improve the training efficiency. As can be seen, outperforms all baselines across both two clusters as well as different models (including imbalanced model T5-512/4). This highlights 's adaptive capability when handling different clusters and diverse model workloads. We then extend to an industrial GPU cluster including 64 Nvidia A100 GPUs, where each server has 8 GPUs equipped with NVLink and the servers are connected by 100 Gb InfiniBand network. Since the environment scale is significantly larger than before, we also increase the model sizes to 10 billion parameters (i.e., BERT-xHuge and ViT-xHuge, details are in Table <ref>). As we can see in Table <ref>, even on such a large GPUs cluster, still outperforms these baseline methods. 
Besides, based on our observations, the search time costs do not exponentially grow (i.e., 2.2× and 9.2× for 16 GPUs and 64 GPUs respectively compared with 8 GPUs), which is still tolerable. §.§ Ablation Study To delve deeper into the performance of bi-objective optimization of workload balance, we conduct experiments on different pipeline partitions. The results of BERT and T5-512/4 on high-performance cluster (16 A100) are shown in <ref>. The memory-balanced partition can evidently support the training of larger batch sizes for the provided models, enabling a batch size of 128 for T5-512/4-32 under 8GB memory, in contrast to the limit of 48 in time-balanced partition. On the other hand, the time-balanced partition encourages a more uniform workload distribution across varying pipeline stages. As an example, a balanced partition of [16,16] is accomplished for the BERT-Huge-32, which has homogeneous layers. Indeed, bi-objective optimization harnesses the advantages of both memory- and time-balanced partition, striking an optimal balance between memory and time cost to yield superior performance. In the training of BERT-Huge-48 and T5-512/4-48 under a memory constraint of 16G, the pipeline partitions are tuned to intermediary configurations of [22, 26] and [11, 37], while the training batch sizes are optimized as 144 and 192. These strategic adaptations culminate in marked improvements in system throughput, with surges of up to 19% and 12%, thus underscoring the effectiveness of bi-objective optimization. §.§ Optimal Parallelism Plan We list some examples of the optimal parallelism plans suggested by in <ref>. For comparison, we choose different models (BERT-Huge-32, Swin-Huge-32, and T5-512/4-32), different GPU numbers (8 and 16), and different GPU clusters. In case A for BERT, provides an optimal strategy containing S_1^A, PP+DP+CKPT, and S_2^A, PP+SDP+CKPT, incorporating SDP and CKPT to reduce memory costs and enlarge the batch size as well as the throughput. In case B for Swin-Huge-32, the optimal plans given by is rather complex, as it has four different layers which have different strategy preference. In Swin Transformer, shallower layers have larger activation size and smaller parameter size. To reduce memory consumption and communication overhead, shallower layers prefer data parallel which splits input activations and communicates parameter gradients, while deeper layers prefer tensor parallel which splits model parameters and communicates activations. Therefore, by mixing PP+DP, PP+DP+CKPT and PP+TP+DP, maximizes the memory utilization as well as the system efficiency. To further analyze , in case C, we test T5-512/4-32 on 16 low-performance GPUs and 16 high-performance GPUs under the same memory budget. In T5-512/4, as the decoder has shorter sequence length than the encoder, the activation size of the decoder is much less than the encoder, while the parameter size of the decoder is larger than the encoder due to the extra attention block. Therefore, similar to Swin, on low-performance GPUs, prefers TP for the decoder to reduce communication volume, where S_5^C, PP+DP+TP, is recommended. However, on high-performance GPUs, the advantage of low communication volume is not significant due to the high communication bandwidth, and mixes PP+SDP+CKPT, PP+SDP and PP+DP, preferring using SDP to reduce the memory consumption. §.§ Optimal Parallelism Plan We list some examples of the optimal parallelism plans suggested by and . 
For comparison, we choose different models (BERT-Huge-32, Swin-Huge-32, and T5-512/4-32), different memory constraints (8 GB and 12 GB), different GPU numbers (8 and 16), and different GPU clusters. In case A, for BERT-Huge-32 with 8 GB memory, provides an optimal plan containing two strategies S_1^A, a combination of PP, TP and DP, and S_2^A, a combination of PP, TP, DP. Under the memory limitation of 12 GB in case B, gives a mixture of S_1^B, TP+DP, and S_2^B, TP+SDP. In case E, For the same model with 8 GB memory, provides an optimal strategy containing S_1^E, PP+DP+CKPT, and S_2^E, PP+SDP+CKPT. As we can see, both and incorporates SDP and thus reduces memory costs and enlarges the batch size as well as the throughput. Benefiting from the memory efficiency of CKPT and workload balance optimization, further enlarges the training batch size to 184 and promotes the throughput by 134% under 8 GB compared to . In case C, D and F, for Swin-Huge-32, the optimal plans given by and is rather complex, as it has four different layers which have different strategy preference. In Swin Transformer, shallower layers have larger activation size and smaller parameter size. To reduce memory consumption and communication overhead, shallower layers prefer data parallel which splits input activations and communicates parameter gradients, while deeper layers prefer tensor parallel which splits model parameters and communicates activations. Comparing case F with case C under 8 GB, we find that by mixing PP+DP+CKPT and PP+SDP+CKPT, enlarges the batch size to 360 and increases the throughput by 109%. To further analyze , in case G, we test T5-512/4-32 on 16 low-performance GPUs and 16 high-performance GPUs under the same memory budget. In T5-512/4, as the decoder has shorter sequence length than the encoder, the activation size of the decoder is much less than the encoder, while the parameter size of the decoder is larger than the encoder due to the extra attention block. Therefore, similar to Swin, on low-performance GPUs, prefers TP for the decoder to reduce communication volume, where S_5^G, PP+DP+TP, is recommended. However, on high-performance GPUs, the advantage of low communication volume is not significant due to the high communication bandwidth, and mixes PP+SDP+CKPT, PP+SDP and PP+DP, preferring using SDP to reduce the memory consumption. § CONCLUSION Large-scale Transformer training is becoming increasingly important due to its expensive training costs. Existing data and model parallelism approaches are suffering from the system efficiency problem. To address the problem, we presented , a novel automatic parallel Transformer training system over multiple GPUs. Through the carefully designed search space decomposition and exploration algorithm, significantly outperforms the state-of-the-art baselines on the training throughput. We hope the open source release of will facilitate the future research directions on more challenging scenarios, e.g., heterogeneous environments and large DL models with complex and dynamic structures. § ACKNOWLEDGMENTS This work is supported by the National Natural Science Foundation of China (No. 61832001 and U22B2037) and PKU-Tencent joint research Lab. Bin Cui is the corresponding author. 
§.§ Dynamic Programming §.§.§ State Transition of the Memory Consumption To optimize the total execution time C(L, E) given the memory constraint E_all(L) ≤ E, the overall memory consumption E_all(L) needs to be transitioned from E_all(L-1) as follows: E_all(L) = max{E_all(L-1), E_f(L-1)+ O_f(L, S_j)+O_b(L, S_j)+O_ms(L, S_j)}. We find that due to the maximum operation in Eq. <ref>, the forward memory consumption E_f(L-1) also needs to be stored for the state transition, therefore the states of the dynamic programming need to be designed as C(L, E_all, E_f). Such states lead to a quadratic complexity in terms of the memory constraint E, which is unacceptable in practice. Therefore, we first optimize C(L, E_fwd) with the forward memory constraint E_f(L) ≤ E_fwd, where E_fwd≤ E, and finally check the validity of the overall memory, i.e., E_all(L)≤ E. The state transition of the forward memory consumption is quite simple: E_f(L)=E_f(L-1)+O_f(L, S_j)+O_ms(L, S_j). This only requires one memory state to be stored for the state transition, resulting in a linear complexity in terms of the memory constraint. §.§.§ Proof of the Optimal Substructure Property Before applying the dynamic programming in Section <ref>, we first prove that the problem follows the optimal substructure property. To obtain the minimum execution time C(L, E_fwd), we claim that the solution must contain the sub-problem solution C(L^', E_fwd^'), which represents the minimum execution time for the sub-model, i.e., the first L^' layers (L^'≤ L), within a smaller forward memory budget E_fwd^' (E_fwd^'≤ E_fwd). This claim holds because if the optimal solution C(L, E_fwd) does not contain a specific C(L^', E_fwd^'), we can always reduce the total execution time by replacing the sub-problem solution with C(L^', E_fwd^'). Due to the linear sequence model structure, the parallelization plan of the first L^' layers does not affect the remaining L-L^' layers given the same memory budget E_fwd-E_fwd^'. Therefore, the problem satisfies the optimal substructure property for dynamic programming. §.§.§ Dynamic Programming Algorithm Based on the state transition formula in Eq. <ref>, we illustrate our overall dynamic programming algorithm in Algorithm <ref>. Given a device memory budget E, we use E_fwd (E_fwd≤ E) as the forward memory budget, and the remaining E-E_fwd is reserved for the backward peak memory E_b(L)=E_all(L)-E_f(L). To maximize memory utilization, we gradually increase and traverse E_fwd (line 11) to optimize C(L, E_fwd) using the state transition formula in Eq. <ref> (line 14). Then, we calculate the overall memory consumption E_all(L) with the searched strategies 𝒮 (line 18), and check the validity of the overall memory (line 19), i.e., E_all(L)≤ E. If the overall memory does not exceed E, we continue the optimization with a larger E_fwd. Finally, we find the largest forward memory budget E_fwd^opt with its searched strategies 𝒮^opt satisfying E_all(L)≤ E, and the optimized execution time C(L, E_fwd^opt) as well as the optimal strategy 𝒮^opt are the final output (line 26). Besides, to avoid unnecessary checks, we calculate an upper bound of the backward peak memory E_b(L), b_up = max_l=1^L max_S_j∈ S O_b(l,S_j), and only verify the validity when E-b_up < E_fwd≤ E, as it can easily be shown that E_all(L) ≤ E whenever E_fwd≤ E-b_up. Note that, given the searched strategies, the calculation of E_all(L) with Eq. <ref> has 𝒪(L) complexity, while the state transition for all layers (lines 13-15) requires 𝒪(L|S|) complexity.
Therefore, the validity check does not contribute additional time complexity, and the complexity of the dynamic programming algorithm <ref> is 𝒪(LE|S|). §.§ Pipeline Partition Adjustment We aim to adjust the pipeline partition 𝐩 from memory-balanced to time-balanced. Given the memory-balanced partition 𝐩_m and the time-balanced partition 𝐩_t, we initialize 𝐩 as 𝐩_m. In each iteration, when adjusting 𝐩 to obtain a new partition 𝐩', the three limitations listed in Section <ref> should be satisfied. Based on the definitions of α_t and α_m in Eq. <ref>, limitation 1 ensures α_t(𝐩') ≥α_t(𝐩), as the maximum stage time cost of 𝐩' is no higher than that of 𝐩. Furthermore, limitations 2 and 3 ensure α_m(𝐩') ≥α_m(𝐩_t), as the maximum stage memory cost of 𝐩' is no higher than that of 𝐩_t. Therefore, we can demonstrate that: α_t(𝐩_m) ≤α_t (𝐩) ≤α_t(𝐩') ≤α_t(𝐩_t), α_m(𝐩_t) ≤ α_m(𝐩') ≤α_m(𝐩_m), which indicates that 𝐩' is a potentially optimal partition as it satisfies condition <ref>, and that 𝐩' is superior to the prior partition 𝐩 in terms of time balance, which is the target of our adjustment. Then, 𝐩' is pushed to the queue for subsequent search iterations. § PERFORMANCE FINE-TUNING FOR PIPEDREAM-FLUSH § MODELS AND DATASETS We introduce more details about the model configurations in Table <ref>, and also some statistics of the datasets in Table <ref>.
http://arxiv.org/abs/2307.02415v1
20230705163732
Density-Sensitive Algorithms for $(Δ+ 1)$-Edge Coloring
[ "Sayan Bhattacharya", "Martín Costa", "Nadav Panski", "Shay Solomon" ]
cs.DS
[ "cs.DS" ]
Geometric control of tilt transition dynamics in single-clamped thermalized elastic sheets Mark J. Bowick Received ; accepted =========================================================================================== Vizing's theorem asserts the existence of a (Δ+1)-edge coloring for any graph G, where Δ = Δ(G) denotes the maximum degree of G. Several polynomial time (Δ+1)-edge coloring algorithms are known, and the state-of-the-art running time (up to polylogarithmic factors) is Õ(min{m ·√(n), m ·Δ}),[Here and throughout the Õ notation suppresses (n) factors.] by Gabow et al. from 1985, where n and m denote the number of vertices and edges in the graph, respectively. Recently, Sinnamon shaved off a (n) factor from the time bound of Gabow et al. The arboricity α = α(G) of a graph G is the minimum number of edge-disjoint forests into which its edge set can be partitioned, and it is a measure of the graph's “uniform density”. While α≤Δ in any graph, many natural and real-world graphs exhibit a significant separation between α and Δ. In this work we design a (Δ+1)-edge coloring algorithm with a running time of Õ(min{m ·√(n), m ·Δ})·α/Δ, thus improving the longstanding time barrier by a factor of α/Δ. In particular, we achieve a near-linear runtime for bounded arboricity graphs (i.e., α = Õ(1)) as well as when α = Õ(Δ/√(n)). Our algorithm builds on Sinnamon's algorithm, and can be viewed as a density-sensitive refinement of it. § INTRODUCTION A (proper) k(-edge) coloring in a graph G is a coloring of edges, where each edge is assigned a color from the set [k] := {1,…,k}, such that no two adjacent edges have the same color. Clearly, k must be at least as large as the maximum degree Δ = Δ(G) of the graph G, and Vizing's theorem states that Δ+1 colors always suffice; in some graphs Δ+1 colors are necessary. Several polynomial time (Δ+1)-coloring algorithms are known, including a simple O(m n)-time algorithm by Misra and Gries <cit.>, which is a simplification of an earlier algorithm by Bollobás <cit.>. In 1985 Gabow et al. <cit.> presented a (Δ+1)-coloring algorithm with a running time of O(min{m ·√(n ·log n), m ·Δ·log n}). A recent work by Sinnamon <cit.> shaves off some (n) factors; specifically, Sinnamon removed the √(log n) factor from the term m √(n ·log n), to achieve a clean runtime bound of O(m √(n)). Nonetheless, up to the (n) factors, no improvement on the runtime of the algorithm of <cit.> was reported to date. We summarize this state-of-the-art result in the following theorem: For any n-vertex m-edge graph of maximum degree Δ, a (Δ+1)-edge coloring can be computed within time Õ(min{m ·√(n), m Δ}). Note that the runtime bound provided by Theorem <ref> is near-linear in bounded degree graphs. However, in most graphs of interest, the maximum degree Δ is large. The question of whether or not one can significantly improve this runtime bound in graphs of large maximum degree Δ has remained open. Bounded Arboricity. Sparse graphs, or graphs of “low density”, are of importance in both theory and practice. A key definition that captures the property of low density in a “uniform manner” is bounded arboricity, which constrain the average degree of any subgraph. Graph G has arboricity α = α(G) if m_s/n_s-1≤α, for every S⊆ V, where m_s and n_s are the number of edges and vertices in the graph induced by S, respectively. Equivalently (by the Nash-Williams theorem <cit.>), the edges of a graph of arboricity α can be decomposed into α edge-disjoint forests. 
While α≤Δ holds in any graph G, there might be a large separation between α and Δ; e.g., for the n-star graph we have α = 1, Δ = n-1. A large separation between α and Δ is exhibited in many natural and real-world graphs, such as the world wide web graph, social networks and transaction networks, as well as in various random distribution models, such as the preferential attachment model. Note also that the family of bounded arboricity graphs, even for α = O(1), includes all graphs that exclude a fixed minor, which, in turn include all bounded treewidth and bounded genus graphs, and in particular all planar graphs. In this work we present a near-linear time (Δ+1)-coloring algorithm in graphs of arboricity α = Õ(1). Further, building on our new algorithm, we present an algorithm that improves over the longstanding time barrier (provided by Theorem <ref>) by a factor of α/Δ, as summarized in the following theorem. For any n-vertex m-edge graph of maximum degree Δ and arboricity α, there is a randomized algorithm that computes a (Δ+1)-edge coloring within time Õ(min{m √(n)·α/Δ, m α}) = Õ(min{m √(n), m Δ}) ·α/Δ. The time bound holds both in expectation and with high probability. Remarks. * The exact runtime bound of our algorithm (including the (n) factors that are suppressed under the Õ-notation in Theorem <ref>) is the following: O(min{m √(n log n), m Δ·log n}) ·α/Δ + O(m ·log n) in expectation and O(min{m √(n)·log^1.5 n, m Δ ·log^2 n}) ·α/Δ + O(m ·log^2 n) with high probability. However, we made no attempt to optimize polylogarithmic factors in this work. * This runtime is near-linear when α = Õ(1) as well as when α = Õ(Δ/√(n)). The aforementioned improvement of Sinnamon <cit.> to the state-of-the-art runtime bound by Gabow et al. <cit.> (i.e., the removal of the √(log n) factor from the term m √(n ·log n)) was achieved via two algorithms: a simple and elegant randomized algorithm and a more intricate deterministic algorithm. Our algorithm follows closely Sinnamon's randomized algorithm, with one key difference: we give precedence to low degree vertices and edges over high degree ones, where the degree of an edge (which we shall refer to as its weight) is the minimum degree of its endpoints. Our algorithm can thus be viewed as a degree-sensitive refinement of Sinnamon's algorithm. The analysis of our algorithm combines several new ideas to achieve the claimed improvement in the running time. § PRELIMINARIES We denote the degree of a vertex v by (v). The weight w(e) of an edge e = (u,v) is defined as the minimum degree of its endpoints, i.e., w(e):=min{(u),(v)}. The weight w(G) of a graph G = (V,E) is defined as the sum of weights over its edges, i.e., w(G) :=∑_e ∈ E w(e). Note that the weight w(G) of any m-edge graph G satisfies w(G) ≥ m. The following claim, due to <cit.>, asserts that the weight of any m-edge graph exceeds m by at most a factor of α. [Lemma 2 in <cit.>] For any m-edge graph G with arboricity α, we have w(G)=O(mα). In what follows, let G=(V,E) be an arbitrary n-vertex m-edge graph, and we let Δ to the maximum degree of G. For any integer k ≥ 1, let [k] denote the set {1,2,...,k}. A (proper) partial k-(edge-)coloring χ of G is a color function χ:E→ [k]⋃{} such that any two distinct colored edges e_1, e_2 that share an endpoint do not receive the same color. An edge e with χ(e)∈ [k] is said to be colored (by χ), whereas an edge e with χ(e)= is said to be uncolored. If all the edges in G are colored by χ, we say that χ is a (proper) k-(edge-)coloring. 
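Since the edge weights w(e) and the graph weight w(G) introduced above drive the analysis that follows, we include a tiny helper, in our own notation, that computes them from an edge list.

from collections import Counter

def edge_weights(edges):
    # w(e) = min(deg(u), deg(v)) for e = (u, v); w(G) is the sum over all edges.
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    w = {(u, v): min(deg[u], deg[v]) for (u, v) in edges}
    return w, sum(w.values())

# The star has arboricity 1 and every edge has weight 1, so w(G) = m,
# consistent with the bound w(G) = O(m * alpha) of the claim above.
star = [(0, i) for i in range(1, 8)]
print(edge_weights(star)[1])   # 7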
Given a partial k-coloring χ for G, we define M(v) as the set of missing colors of v, i.e., the set of colors in the color palette [k] not occupied by any of the incident edges of v. In particular, for a partial (Δ+1)-coloring, M(v) is always nonempty, as v has at most Δ neighbours and there are Δ + 1 colors to choose from. Consider a partial k-coloring χ, where all the edges in G are colored but l. We define (G,χ) = as the set of m - l colored edges of G and (G,χ) = as the set of l uncolored edges of G. In addition, if all edges of G are uncolored, i.e., l=m, we say that χ is an empty coloring. §.§ Data Structures We work in the standard word RAM model of computation, with words of size w := Θ(log n). In particular, we can index any of the 2^O(w) = (n) memory addresses, perform basic arithmetic on w-bit words, and sample w-bit random variables, all in constant time. Next, we briefly describe the data structures used by our algorithms; it is readily verified that the initialization time of the data structures as well as their space usage is linear in the graph size. First, for every (non-isolated) vertex v we maintain a hash table of size O((v)) that contains, for every color c ∉ M(v), a pointer to the edge incident to v colored c. Clearly, O(1) (expected) time suffices for checking whether or not any color c is missing on v and for finding the edge that occupies color c if c∉ M(v), as well as for updating the hash table following a color c ∈ M(v) that leaves M(v) or vice versa. To quickly find a missing color on any (non-isolated) vertex v, we maintain a non-empty list of all the colors in M(v)∩ [(v)+1], accompanied with an array of size (v)+1 of the color palette [(v)+1], with mutual pointers between the two. Clearly, O(1) time suffices for finding an arbitrary missing color on v (via the list) and for updating the list and the corresponding array due to a coloring and uncoloring of any edge incident to v. Next, we argue that a random missing color on any vertex v can be found in expected time O((v)). If (v)>Δ/2, we can create an auxiliary array of all the missing colors of v in O(Δ)=O((v)) time and sample a random color from the auxiliary array. In the complementary case (v)≤Δ/2, we can just sample a random color from the entire color palette [Δ+1] repeatedly, until sampling a color that is missing on v. With probability at least 1/2 we sample a missing color on v, so the expected time of the process is O(1). Finally, we maintain the set of all uncolored edges (and a variable holding its size ||) via an array of size m, where the first || entries of the array hold pointers to the uncolored edges, and we also maintain, for each uncolored edge e, a variable ind(e) holding its respective index in the array. We can determine the number of uncolored edges in O(1) time via the variable ||. When an uncolored edge e gets colored, we first copy to position ind(e) of the array the pointer to the uncolored edge e' corresponding to position || of the array, then update the respective index ind(e') of edge e' to ind(e), and finally decrement the variable ||. When a colored edge e gets uncolored, we first increment the variable ||, then put in position || of the array a pointer to edge e, and finally set the index ind(e) of edge e to be ||. All of this clearly takes O(1) time. Using the array and the variable ||, we can pick a random uncolored edge in O(1) time. §.§ Fans In what follows we let χ be a proper partial (Δ+1)-coloring of G. 
A fan F is a sequence of vertices (v,x_0,...,x_t) such that: * x_0,...,x_t are distinct neighbors of v; * Edge (v,x_0) is uncolored; * For every i=1,...,t: the edge (v,x_i) is colored; * For every i=1,...,t: the color χ(v,x_i) is missing at vertex x_i-1. Vertex v is called the center of F, and x_0,...,x_t are called the leaves of F. The useful property of a fan is that rotating or shifting the colors of the fan preserves the validity of the coloring. Let F=(v,x_0,...,x_t) be a fan and write c_i = χ(v,x_i), for each i=1,...,t. To rotate or shift F from x_j means to set χ(v,x_i-1) = c_i for every i=1,...,j and make (v,x_j) uncolored. After the rotation or shift, the function χ is still a proper partial coloring, but now (v,x_j) is uncolored instead of (v,x_0). Note that M(v) is unaffected by the shift. To extend a partial (Δ+1)-coloring (i.e., increase the number of colored edges), we shall focus on maximal fans (which cannot be further extended), using the following definition. A fan F=(v,x_0,...,x_t) is said to be primed by color c_1 ∈ M(x_t) if one of the following two conditions holds: (1) c_1 ∈ M(v), or (2) c_1 ∈ M(x_j) for some j < t.
Computing a primed fan. Given an uncolored edge (v,x_0), we can compute a primed fan (v,x_0,...,x_t) via a standard greedy algorithm (refer to <cit.>): starting from F = (v,x_0), repeatedly pick an arbitrary missing color c of the last leaf added to F; if c is missing on v or on some earlier leaf of F, terminate (F is primed by c), and otherwise c is occupied at v by a colored edge (v,y) with y not yet in F, so append y to F as a new leaf and continue. The algorithm returns a primed fan with center v in O(deg(v)) time. By the description of the algorithm, it is immediate that it returns a primed fan. The number of iterations of the while loop is bounded by deg(v), since every iteration in which the loop does not terminate adds a new neighbor of v as a leaf of F. To complete the proof, we argue that each iteration can be implemented in constant time. We can store the at most deg(v) leaves that are added to F in a hash table, so that checking whether the chosen color is missing on v or on a previously added leaf can be implemented in constant time. The remaining part of an iteration can be carried out in constant time in the obvious way using the aforementioned data structures.
§.§ Alternating Paths
We say that a path P is a (c_0,c_1)-alternating path, for a pair c_0, c_1 of distinct colors, if P consists of edges with colors c_0 and c_1 only. We say that a (c_0,c_1)-alternating path P is maximal if P=(v_0,v_1,…,v_|P|) and each of v_0 and v_|P| is incident to only one edge colored by c_0 or c_1 (hence P cannot be extended further). Although a maximal alternating path may form a cycle, we shall focus on simple paths. The useful property of a maximal alternating path is that flipping the colors of the path edges preserves the validity of the coloring. That is, let P = e_1 ∘ … ∘ e_|P| be a maximal (c_0,c_1)-alternating path such that for every i=1,…,|P|: if i ≡ 1 (mod 2) then χ(e_i)=c_0 and if i ≡ 0 (mod 2) then χ(e_i)=c_1. Flipping P means to modify the function χ so that for every i=1,...,|P|: if i ≡ 1 (mod 2) then χ(e_i)=c_1 and if i ≡ 0 (mod 2) then χ(e_i)=c_0; the resulting function χ after the flip operation is a proper partial edge-coloring. Flipping a maximal (c_0,c_1)-alternating path that starts at a vertex v is a useful operation, as it "frees" for v the color c_0 that is occupied by it, replacing it with the missing color c_1 on v.
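The two operations just described, shifting a fan and flipping a maximal alternating path, are the only ways the partial coloring is ever modified. A short Python sketch, assuming the coloring is stored as a dictionary chi mapping each colored edge (a frozenset of its two endpoints) to its color, together with the per-vertex tables occupied[v] from the data-structures subsection (again our own illustrative representation):

def shift_fan(chi, occupied, fan, j):
    """Shift the fan F = (v, x_0, ..., x_t) from leaf x_j: for i = 1..j move the color of
    (v, x_i) onto (v, x_{i-1}); edge (v, x_j) ends up uncolored.  Each move is legal because
    chi(v, x_i) is missing at x_{i-1}, and M(v) is unaffected."""
    v, x = fan[0], list(fan[1:])
    for i in range(1, j + 1):
        e_prev, e_cur = frozenset((v, x[i - 1])), frozenset((v, x[i]))
        c = chi.pop(e_cur)
        del occupied[x[i]][c]            # x_i no longer uses c
        chi[e_prev] = c
        occupied[x[i - 1]][c] = e_prev   # c was missing at x_{i-1}, so this stays proper
        occupied[v][c] = e_prev          # v still uses c, now on the previous fan edge

def flip_path(chi, occupied, path_edges, c0, c1):
    """Flip a maximal (c0, c1)-alternating path, given as its list of edges: every edge
    colored c0 becomes c1 and vice versa.  Maximality guarantees properness afterwards."""
    for e in path_edges:                 # first pass: recolor and clear the old table entries
        old = chi[e]
        chi[e] = c1 if old == c0 else c0
        for u in e:
            del occupied[u][old]
    for e in path_edges:                 # second pass: install the new table entries
        for u in e:
            occupied[u][chi[e]] = e

The two-pass structure of flip_path avoids transiently recording two incident edges under the same color at an interior vertex of the path.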
§.§ Naively Extending a Coloring by One Edge
Given an uncolored edge e, there is a standard way to color it and thereby extend the coloring, using a primed fan F rooted at one of the endpoints of e, say v, as well as a maximal alternating path P starting at v, by flipping the path and then rotating a suitable prefix of the fan, as described next (refer to <cit.>). We refer to this procedure as the extension procedure. The extension procedure, when given as input a proper partial (Δ+1)-coloring χ, the first uncolored edge (v,x_0) of a fan F primed by a color c_1, and a maximal (c_0,c_1)-alternating path P, where c_0 is free on v, extends χ into a proper partial (Δ+1)-coloring in which (v,x_0), as well as every edge colored by the original χ, is colored. Moreover, this procedure takes O(|F|+|P|) = O(deg(v)+|P|) time. This is the standard way to extend a coloring by one edge. (We omit the correctness proof for brevity; refer to <cit.> for the proof.) As for the running time, there are three different cases to consider (here x_j denotes the leaf of F satisfying c_1 ∈ M(x_j), and w denotes the last vertex of P): (1) c_1 ∈ M(v), (2) c_1 ∉ M(v) and w ≠ x_j-1, (3) c_1 ∉ M(v) and w = x_j-1. Using our data structures (described in Section <ref>), one can identify the case and find the leaf x_j and the endpoint w (if needed) in time O(|F|+|P|). Beyond that, in each case the algorithm performs at most one path flip, one fan shift, and one edge coloring, which also takes time O(|F|+|P|). As F contains at most deg(v) leaves, the total running time is O(|F|+|P|) = O(deg(v)+|P|).
§ A (Δ+1)-COLORING ALGORITHM WITH RUNTIME Õ(mα)
§.§ Internal Edges of Maximal Alternating Paths
The paths that we shall consider are maximal alternating paths that start and finish at different vertices (i.e., we do not consider cycles). A vertex v in a path P is called internal if it is not one of the two endpoints of P. An edge e = (u,v) of path P is called internal if both u and v are internal vertices of P. For a path P, denote by I(P) the set of internal edges of P. We shall use the following immediate observation later. For any path P, |P| ≤ |I(P)|+2. Any edge may serve as an internal edge in possibly many different maximal alternating paths. The following key lemma bounds the number of such paths by the edge's weight. Any colored edge e can be an internal edge of at most w(e) maximal alternating paths. Let e=(u,v) be a colored edge with color c_e. For any color c_2 ≠ c_e, we note that e can be internal in a maximal (c_e,c_2)-alternating path P only if each of u and v is incident on an edge with color c_2, thus the number of such colors c_2 is bounded by min{deg(u),deg(v)} = w(e). To complete the proof, we note that there is at most one maximal (c_e,c_2)-alternating path that contains e, for any color c_2. For a graph G with a given coloring χ, denote by MP = MP(G,χ) the set of maximal alternating paths in G induced by χ. ∑_P∈ MP |I(P)| ≤ ∑_e∈𝒞 w(e). Indeed, ∑_P∈ MP |I(P)| = ∑_P∈ MP ∑_e∈ I(P) 1 = ∑_e∈ E ∑_P∈ MP: e∈ I(P) 1 = ∑_e∈𝒞 ∑_P∈ MP: e∈ I(P) 1 ≤ ∑_e∈𝒞 w(e), where the third equality holds since only colored edges can serve as internal edges of alternating paths, and the last inequality follows from Lemma <ref>, as ∑_P∈ MP: e∈ I(P) 1 is just the number of maximal alternating paths in which the colored edge e is an internal edge.
§.§ Algorithm Color-One-Edge: An Algorithm for Coloring a Single Edge
The following algorithm, Color-One-Edge, is based on the analogous algorithm of <cit.>, but with one crucial tweak: given the chosen random edge, we focus on the endpoint of the edge of lower degree. Let G be a graph of maximum degree Δ and let χ be a partial (Δ+1)-coloring in which all but l of the edges of G are colored. Then the expected runtime of Color-One-Edge on G and χ is O(w(G)/l) = O(mα/l).
Denote by T the runtime of Algorithm Color-One-Edge on input (G,χ). Let e_r = (u_r,v_r) be the random uncolored edge chosen in line 1 of the algorithm, with deg(u_r) ≤ deg(v_r), let c_r be the random missing color chosen in line 4 of the algorithm, and let P_r be the maximal alternating path starting at u_r obtained in line 5 of the algorithm. By definition, both |P_r| and deg(u_r) are random variables. We will prove the lemma as a corollary of the following three claims: (i) E[T] = O(E[|P_r|] + E[deg(u_r)]); (ii) E[|P_r|] = O(1 + (1/l)∑_e∈𝒞 w(e)); (iii) E[deg(u_r)] = O((1/l)∑_e∈𝒰 w(e)).
As described in Section <ref>, we can pick a random uncolored edge in constant time. In addition, we can pick a random missing color on a vertex in (expected) time linear in its degree. Also, we can compute the path P_r in O(|P_r|) time by repeatedly adding edges to it while this is possible. Consequently, by Lemmas <ref> and <ref>, which bound the running times of the primed-fan construction and of the extension procedure by O(|P_r|) + O(deg(u_r)) in total, we conclude that E[T] = O(E[|P_r|] + E[deg(u_r)]), which proves (i).
By Observation <ref>, E[|P_r|] ≤ E[|I(P_r)|] + O(1). We next prove that E[|I(P_r)|] = O((1/l)∑_e∈𝒞 w(e)). For every maximal alternating path P in MP, let u_0(P) and u_|P|(P) be the first and last endpoints of P, respectively, and let c_0(P) and c_|P|(P) be the missing colors of u_0(P) and u_|P|(P) from the two colors of P, respectively. For a vertex v, denote by l(v) the number of uncolored edges incident on v. Note that for every P in MP,
Pr[P_r = P] ≤ Pr[c_r = c_0(P) | u_r = u_0(P)] · Pr[u_r = u_0(P)] + Pr[c_r = c_|P|(P) | u_r = u_|P|(P)] · Pr[u_r = u_|P|(P)] ≤ (1/|M(u_0(P))|)·(l(u_0(P))/l) + (1/|M(u_|P|(P))|)·(l(u_|P|(P))/l) ≤ (1/|M(u_0(P))|)·(|M(u_0(P))|/l) + (1/|M(u_|P|(P))|)·(|M(u_|P|(P))|/l) = 2/l.
It follows that E[|I(P_r)|] = ∑_P∈ MP Pr[P_r = P]·|I(P)| ≤ ∑_P∈ MP (2/l)·|I(P)| ≤ ∑_e∈𝒞 (2/l)·w(e) = O((1/l)∑_e∈𝒞 w(e)), where the second inequality holds by Corollary <ref>. This proves (ii).
Define D_m(u) to be the set of uncolored edges e = (u,v) such that deg(u) ≤ deg(v). Now, Pr[u_r = u] ≤ Pr[e_r ∈ D_m(u)] = |D_m(u)|/l. Consequently, E[deg(u_r)] = ∑_u∈ V Pr[u_r = u]·deg(u) ≤ ∑_u∈ V (|D_m(u)|/l)·deg(u) = (1/l)∑_u∈ V ∑_e∈ D_m(u) deg(u) = (1/l)∑_e∈𝒰 ∑_x: e∈ D_m(x) deg(x) ≤ (1/l)∑_e=(u,v)∈𝒰 2·min{deg(u),deg(v)} = O((1/l)∑_e∈𝒰 w(e)), which proves (iii).
Now we are ready to complete the proof of Lemma <ref>. Using claims (i), (ii) and (iii) we get E[T] = O(E[|P_r|] + E[deg(u_r)]) = O(1 + (1/l)∑_e∈𝒞 w(e) + (1/l)∑_e∈𝒰 w(e)) = O(w(G)/l). Recalling that w(G) = O(mα) holds by Claim <ref> completes the proof.
§.§ Algorithm Color-Edges: Coloring a Single Edge Iteratively
The following algorithm, Color-Edges, is given as input a graph G and a partial (Δ+1)-coloring χ, and it returns as output a proper (Δ+1)-coloring for G. This algorithm is identical to the analogous one by <cit.>; however, here we invoke our modified Algorithm Color-One-Edge in line 2 rather than the analogous subroutine of <cit.>. The expected runtime of Algorithm Color-Edges on a graph G with an empty partial (Δ+1)-coloring χ is O(w(G) log n) = O(mα log n). As described in Section <ref>, we can check whether 𝒰 = ∅ in constant time. Since each call to Algorithm Color-One-Edge colors a single uncolored edge, the while loop consists of m iterations. Also, the runtime of each iteration is that of the respective call to Color-One-Edge. At the beginning of the ith iteration, the number of uncolored edges l is m-i+1, so by Lemma <ref>, the expected runtime E[T_i] of the ith iteration is O(w(G)/(m-i+1)). It follows that the expected runtime of Color-Edges is ∑_i=1^m E[T_i] = O(∑_i=1^m w(G)/(m-i+1)) = O(w(G) log n). Recalling that w(G) = O(mα) holds by Claim <ref> completes the proof.
§ OUR (Δ+1)-COLORING ALGORITHM: RECURSIVE-COLOR-EDGES
In this section we present our (Δ+1)-edge-coloring algorithm, Recursive-Color-Edges, which proves Theorem <ref>.
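Before turning to the recursive algorithm, the following Python sketch summarizes Algorithms Color-One-Edge and Color-Edges. The pseudocode listings themselves are not reproduced above, so this is our reconstruction from the surrounding analysis (random uncolored edge, lower-degree endpoint, random missing color, maximal alternating path, primed fan, extension); the helpers make_primed_fan, maximal_alternating_path and extend_coloring are assumed to implement the procedures of Section <ref>, and the quoted line numbers refer to the references in the proofs above.

import random

def color_one_edge(G, chi, rng=random):
    """Color one uncolored edge.  The degree-sensitive tweak: all the work is rooted at the
    lower-degree endpoint of the randomly chosen uncolored edge."""
    e = G.uncolored.sample(rng)                       # "line 1": uniform uncolored edge, O(1)
    u, v = sorted(e, key=lambda x: G.deg[x])          # u is the endpoint of lower degree
    fan, c1 = make_primed_fan(G, chi, center=u, first_leaf=v)   # O(deg(u))
    if c1 in G.missing(u):
        extend_coloring(G, chi, fan, c1, path=None)   # easy case (1) of the extension procedure
        return
    c0 = random_missing_color(u, G.deg, G.occupied, G.Delta, rng)        # "line 4"
    path = maximal_alternating_path(G, chi, start=u, colors=(c0, c1))    # "line 5", O(|path|)
    extend_coloring(G, chi, fan, c1, path)            # flip the path, shift a fan prefix, color

def color_edges(G, chi, rng=random):
    """Complete a partial (Delta+1)-coloring by repeatedly calling Color-One-Edge."""
    while len(G.uncolored) > 0:
        color_one_edge(G, chi, rng)                   # "line 2" of Color-Edges
    return chi

With an empty initial coloring, the while loop runs m times, matching the O(w(G) log n) expected bound proved above.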
Our algorithm is similar to that of <cit.>, except for a few small yet crucial tweaks, one of which is that we employ our algorithm from Section <ref> as a subroutine rather than the analogous subroutine from <cit.>. We first describe the approach taken by <cit.>, and then present our algorithm and emphasize the specific modifications needed for achieving the improvement in the running time.
§.§ The Approach of Sinnamon <cit.>
The algorithm of <cit.> employs a natural divide-and-conquer approach. It partitions the input graph into two edge-disjoint subgraphs of maximum degree roughly Δ/2, then it recursively computes a coloring with at most Δ/2 + 2 colors for each subgraph separately, and then it stitches together the two colorings into a single coloring. Naively stitching the two colorings into one would result in up to Δ+4 colors, so the idea is to prune excessive colors and then deal with the remaining uncolored edges via a separate coloring algorithm. In more detail, the algorithm consists of four phases: Partition, Recurse, Prune, and Repair.
* Partition. The algorithm partitions the edges of the graph into two edge-disjoint subgraphs, such that the edges incident on each vertex are divided between the two subgraphs almost uniformly. This in particular implies that the maximum degree in each subgraph is roughly Δ/2. Such a partition can be achieved by a standard procedure, the Euler partition, which was used also by <cit.>. For completeness, in Section <ref> we describe this procedure and prove some basic properties that will be used later.
* Recurse. The algorithm recursively computes a coloring with at most Δ/2+2 colors for each subgraph separately, where the two colorings use disjoint palettes of colors. Then, we combine the two colorings into one by simply taking their union, which results in a proper coloring with at most Δ+4 colors.
* Prune. At this point, the number of colors used in the coloring is Δ', for Δ' ≤ Δ+4, which exceeds the required bound of Δ+1. To prune the up to three extra colors, the algorithm groups the edges into color classes, chooses the Δ' - (Δ+1) color classes of smallest size, and then uncolors all edges in those color classes. In our algorithm, we instead choose the Δ' - (Δ+1) color classes of smallest weight, where the weight of a color class is the sum of weights of the edges with that color.
* Repair. To complete the partial coloring into a proper (Δ+1)-coloring, each of the uncolored edges resulting from the Prune phase has to be recolored; this is done by a separate coloring algorithm. In our algorithm, this separate coloring algorithm is Algorithm Color-Edges from Section <ref>.
§.§ The Euler Partition Procedure
An Euler partition is a partition of the edges of a graph into a set of edge-disjoint tours, such that every odd-degree vertex is the endpoint of exactly one tour, and no even-degree vertex is an endpoint of any tour (some tours may be cycles). Such a partition can be computed greedily in linear time, by simply removing maximal tours of the graph until no edges remain. The edges of the graph can then be split between the two subgraphs by traversing each tour and alternately assigning the edges to the two subgraphs. Internal edges of the tours split evenly between the two subgraphs, so only two cases may cause an unbalanced partition:
* Case 1: The two endpoints of any tour. The only edge incident on any tour endpoint is assigned to one of the two subgraphs, which causes at most 1 extra edge per vertex in one of the two subgraphs.
* Case 2: A single vertex in any tour that is an odd-length cycle.
The cycle edges can be split evenly among all vertices of the cycle but one, the starting vertex, for which there are 2 extra edges in one of the two subgraphs. Note that the algorithm is free to choose (1) any of the cycle vertices as the starting vertex, and (2) the subgraph among the two to which the 2 extra edges would belong; the algorithm will carry out these choices so as to minimize the discrepancy in degrees of any vertex. The next observation follows directly from the description of the procedure (and by handling the two aforementioned cases that may cause an unbalanced partition in the obvious way). Let G be a graph and let G_1,G_2 be the subgraphs of G obtained by the procedure. Then for every vertex v∈ G, the degrees of v in G_1 and G_2, denoted by _G_1(v) and _G_2(v) respectively, satisfy 1/2_G(v)-1  ≤ _G_1(v),_G_2(v)  ≤ 1/2_G(v)+1. §.§.§ Analysis of the procedure Let G be an n-vertex m-edge graph, and consider any algorithm A that applies the procedure recursively. That is, upon recieving the graph G as input, Algorithm A splits G into two subgraphs G_1 and G_2 using the procedure, and then recursively applies Algorithm A on G_1 and G_2. Consider the binary recursion tree of algorithm A: The first level L_0 consists of the root node that corresponds to the graph G. The next level L_1 consists of two nodes that correspond to the two subgraphs of G obtained by applying the procedure on G. In general, the ith level L_i consists of 2^i nodes that correspond to the 2^i subgraphs of G obtained by applying the procedure on the 2^i-1 subgraphs of G at level L_i-1. In what follows we prove several basic properties of the recursion tree of Algorithm A, which will be useful later on. For any level L_i, any subgraph H of G at level L_i and any vertex v ∈ G, the degree _H(v) of v in H satisfies 2^-i·_G(v)-2  ≤ _H(v)  ≤  2^-i·_G(v)+2. We will prove the following stronger claim by induction on i: 2^-i·_G(v)-∑_k=1^i2^1-k ≤ _H(v)  ≤  2^-i·_G(v)+∑_k=1^i2^1-k. For the induction basis i=0, we have H = G and 2^-i·_G(v)-∑_k=1^i2^1-k = _G(v)  = _H(v)  =  2^-i·_G(v)+∑_k=1^i2^1-k. For the induction step, we assume that the claim holds for every subgraph at level L_i-1, and prove that it holds for every subgraph H at level L_i. Let H_i-1 be the subgraph at level L_i-1, such that H is one of the two subgraphs of H_i-1 at level L_i. By Observation <ref>, 1/2_H_i-1(v)-1≤_H(v)≤1/2_H_i-1(v)+1. By the induction hypothesis, 2^-i+1·_G(v)-∑_k=1^i-12^1-k ≤ _H_i-1(v)  ≤  2^-i+1·_G(v)+∑_k=1^i-12^1-k. It follows that 1/2(2^-i+1·_G(v)-∑_k=1^i-12^1-k)-1  ≤ _H(v)  ≤ 1/2(2^-i+1·_G(v)+∑_k=1^i-12^1-k)+1, and we get that 2^-i·_G(v)-∑_k=1^i2^1-k ≤ _H(v)  ≤  2^-i·_G(v)+∑_k=1^i2^1-k. Recall that Δ = Δ(G) denotes the maximum degree of G and let W = w(G) =∑_e ∈ E w(e) denote the weight of G. We also define: Δ_i:=2^-iΔ and W_i:=2^-iW. For any level L_i and any subgraph H of G at level L_i, the maximum degree Δ_H of H satisfies Δ_i-2  ≤ Δ_H ≤ Δ_i+2. Let v be a vertex with maximum degree in H. By Claim <ref>, Δ_H = _H(v)  ≤  2^-i·_G(v)+2  ≤  2^-i·Δ+2  = Δ_i+2. Now let u be a vertex with maximum degree in G. Applying Claim <ref> again, we obtain Δ_H ≥ _H(u)  ≥  2^-i·_G(u)-2=2^-i·Δ-2  = Δ_i-2. It follows that Δ_i-2≤Δ_H≤Δ_i+2. For a subgraph H of G, let w_H be the weight function of H, i.e., w_H(e) = min{_H(u),_H(v)} is the minimum degree in H of the two endpoints of any edge e = (u,v) in H, and let w(H) =∑_e ∈ H w_H(e) denote the weight of H. For every level L_i: ∑_H∈ L_iw(H)≤ W_i+2m. 
∑_H∈ L_iw(H) = ∑_H∈ L_i∑_e∈ Hw_H(e)  = ∑_H∈ L_i∑_(u,v)∈ Hmin{_H(u),_H(v)}    ∑_H∈ L_i∑_(u,v)∈ H(2^-imin{_G(u),_G(v)}+2), Noting that every edge e in G appears in exactly one subgraph of G at level L_i, it follows that ∑_H∈ L_iw(H) ≤ ∑_H∈ L_i∑_(u,v)∈ H(2^-imin{_G(u),_G(v)}+2)  = ∑_(u,v)∈ G(2^-imin{_G(u),_G(v)}+2) = ∑_e∈ G(2^-iw_G(e)+2)  = 2^-i∑_e∈ Gw_G(e)+2m  =  W_i+2m. §.§ The Algorithm In this section we present our (Δ + 1)-coloring algorithm. As mentioned, our algorithm follows closely that of <cit.>, but we introduce the following changes: (1) In the Prune phase, we uncolor edges from the color class of minimum weight (rather than minimum size); (2) in the Repair phase, we invoke our modified algorithm rather than the analogous one by <cit.>; (3) our recursion terminates when Δ≤ 2√(n/log n), whereas in the algorithm of <cit.> the termination condition is Δ≤ 1. §.§ Analysis of The Algorithm We consider the binary recursion tree of Algorithm , and note that this algorithm can assume the role of Algorithm A in Section <ref>; in particular, we follow the notation of Section <ref>. For any subgraph H=(V_H,E_H) of G at level L_i with m_H edges, maximum degree Δ_H and weight W_H, the expected runtime spent on H due to the call to on G is bounded by O(m_H+W_H/Δ_H·(n/Δ_H+log n)). Consider first the case Δ_H≤ 2√(n/log n). The time of the creation of the empty coloring (in line 2 of the code) is O(m_H). By Lemma <ref>, the expected time spent on H while running (in line 3) is bounded by O(W_Hlog n)  = O(m_H+W_Hn/Δ_H^2)  =  O(m_H+W_H/Δ_H·n/Δ_H) = O(m_H+W_H/Δ_H·(n/Δ_H+log n)). We may henceforth assume that Δ_H>2√(n/log n). Next, we analyze the time required by every phase of the algorithm. We first note that the first three phases of the algorithm can be implemented in O(m_H) time: * Partition: The procedure takes O(m_H) time, as explained in Section <ref>. * Recurse: We only consider the time spent at the Recurse phase on H itself, i.e., the time needed to create the two empty colorings χ_1 and χ_2 and the time needed to merge them into χ, each of which takes time O(m_H). * Prune: In O(m_H) time we can scan all edges and group them into color classes, compute the weight of each color class, and then find the up to three color classes of lowest weight. The same amount of time suffices for uncoloring all edges in those three color classes, thereby removing those colors from χ. It remains to bound the time required for the Repair phase, denoted by T(Repair H). We will prove that [T(Repair H)] = O(W_H/Δ_H·(n/Δ_H+log n)), and conclude that total expected time of the algorithm is O(m_H+W_H/Δ_H·(n/Δ_H+log n)). We shall bound the expected time for coloring the uncolored edges via Algorithm , where the uncolored edges are the ones that belong to the three color classes of minimum weight (We may assume w.l.o.g. that exactly three colors have been uncolored in H, out of a total of Δ_H+4 colors, by simply adding dummy color classes of weight 0). By an averaging argument, the total weight of the uncolored edges is bounded by 3/Δ_H+4· W_H=O(W_H/Δ_H). Let PCL(H) be the set of all possible partial (Δ_H+1)-colorings of H, and for every coloring χ∈ PCL(H) let U(χ) denote the number of edges of H that are uncolored by χ. In addition, let PCL(H,l) be the set of all colorings χ∈ PCL(H) with U(χ)=l. Note that the partial coloring obtained at the beginning of the Repair phase is not deterministic, and let χ_0 denote this random partial coloring. Thus, U=U(χ_0) is a random variable. Fix an arbitrary integer l ≥ 0. 
Under the condition U=l, Algorithm consists of l iterations that color uncolored edges via Algorithm . For every iteration i=1,...,l, let χ_i∈ PCL(H,l+1-i) be the random partial coloring at the beginning of the iteration, let e_i=(u_i,v_i) be the random uncolored edge chosen in line 1 of Algorithm , with (u_i)≤(v_i), let c_i be the random missing color of u_i chosen in line 4 of that algorithm, and let P_i be the maximal alternating path starting at u_i obtained in line 5 of that algorithm. Each uncolored edge e_i is colored via a call to Algorithm , whose expected time is dominated (by Claim <ref>) by O([|P_i|] + [(u_i)]). So the total expected time for coloring all the l uncolored edges under the condition U=l, namely [T(Repair H) | U=l], satisfies [T(Repair H) | U=l]  = ∑_i=1^l O([|P_i| | U=l]) + ∑_i=1^l O([(u_i) | U=l]). We will prove the following two claims, from which Eq. <ref> will follow. For any integer l ≥ 0, ∑_i=1^l[(u_i) | U=l] = O(W_H/Δ_H). For any integer l ≥ 0, ∑_i=1^l[|P_i| U=l] = O(W_H/Δ_H·(n/Δ_H+log n)). For edge e_i=(u_i,v_i), u_i is chosen as it satisfies (u_i) ≤(v_i), so (u_i)=w(e_i). To complete the proof, we note that Eq. <ref> implies that the total weight of the uncolored edges is bounded (deterministically) by O(W_H/Δ_H). We say that a vertex v has high degree if (v) ≥Δ_H/2, and it has low degree otherwise. Similarly, we say that edge e has high weight if w(e) ≥Δ_H/2 (i.e. both its endpoints have high degree), and it has low weight otherwise. We need to bound [∑_i=1^l|P_i| U=l]. We will bound the expected sum of the lengths of paths of high weight edges (i.e., paths P_i such that the corresponding edge e_i has high weight) separately from the expected sum of the lengths of paths of low weight edges (i.e., paths P_i such that e_i has low weight); in fact, for high weight edges, the upper bound will hold deterministically. High weight edges. Eq. <ref> implies that the total weight of the uncolored edges is bounded by O(W_H/Δ_H), hence the number of uncolored edges of high weight is bounded by O(W_H/Δ_H·1/Δ_H/2) =  O(W_H/Δ_H^2). Since each chosen alternating path is simple, and as such has at most n-1 edges, we can bound the sum of the lengths of the paths of high weight edges (deterministically) by O(n ·W_H/Δ_H^2)  =  O(W_H/Δ_H·n/Δ_H)  =  O(W_H/Δ_H·(n/Δ_H+log n)). Low weight edges. Let S_H be the random variable given by the sum of lengths of the paths of the low weight edges. We have to bound [S_H | U=l]. For every 1 ≤ i ≤ l, let X_i be the random indicator variable that gets value 1 iff the edge colored at the ith iteration has low weight, and define the random variable Y_i :=|I(P_i)|· X_i. Observe that [S_H | U=l] = [∑_1≤ i≤ l: X_i≠ 0|P_i|   | U=l]  = [∑_i=1^l|P_i|· X_i   | U=l] = ∑_i=1^l[|P_i|· X_i | U=l]                     ∑_i=1^l[(|I(P_i)|+O(1))· X_i | U=l] = ∑_i=1^l([|I(P_i)|· X_i | U=l]+O(1))  = ∑_i=1^l([Y_i | U=l]+O(1)). Let us fix an arbitrary iteration i, 1 ≤ i ≤ l. We shall bound [Y_i | U=l] in two steps: First, we fix an arbitrary coloring χ∈ PCL(H,l+1-i) and bound [Y_i | χ_i=χ ∩  U = l] = [Y_i | χ_i=χ]; second, we bound [Y_i | U=l]. Given the coloring χ, for every path P in MP(H,χ), let u_0(P) and u_|P|(P) be the two endpoints of P, and let c_0(P) and c_|P|(P) be the missing colors on u_0(P) and u_|P|(P) from the two colors of P, respectively. We have [Y_i | χ_i=χ] = ∑_P∈ MP(H,χ)(0·( | χ_i=χ) + |I(P )|·( | χ_i=χ)) = ∑_P∈ MP(H,χ)|I(P )|·( | χ_i=χ). 
Now, for every vertex v let low(χ,v) be the number of edges of H uncolored by χ with low weight, incident on v. Observe that for every path P ∈ MP(H,χ) : ( | χ_i=χ) ≤  ()·(u_i=u_0(P) | χ_i=χ) + ()·(u_i=u_|P|(P) | χ_i=χ) ≤ 1/Δ_H/2+1·low(χ,u_0(P))/U(χ)+ 1/Δ_H/2+1·low(χ,u_|P|(P))/U(χ) ≤ 1/Δ_H/2+1·3/l+1-i+ 1/Δ_H/2+1·3/l+1-i = O(1/(l+1-i) ·Δ_H), where the second inequality holds as the degrees of low degree vertices are smaller than Δ_H/2 by definition, hence any low degree vertex has at least Δ_H+1-Δ_H/2=Δ_H/2+1 missing colors to randomly choose from; the third inequality holds as U(χ) = l+1 -i by definition and as the uncolored edges of any possible χ_i were previously colored by (at most) three colors, hence by the validity of the coloring every vertex may have at most three uncolored edges incident on it. By plugging Eq. <ref> into Eq. <ref>, we obtain [Y_i | χ_i=χ] = ∑_P∈ MP(H,χ)|I(P)|·( | χ_i=χ)  =  O(1/(l+1-i)·Δ_H∑_P∈ MP(H,χ)|I(P)|)       O(1/(l+1-i) ·Δ_H∑_e∈(H,χ)w(e))  =  O(1/(l+1-i) ·Δ_H∑_e∈ E_Hw(e)) = O(W_H/(l+1-i) ·Δ_H). And we get [Y_i | U=l] = ∑_χ∈ PCL(H,l+1-i)[Y_i | χ_i=χ ∩  U=l]·(χ_i=χ | U=l) = ∑_χ∈ PCL(H,l+1-i)[Y_i | χ_i=χ]·(χ_i=χ | U=l) = ∑_χ∈ PCL(H,l+1-i)O(W_H/(l+1-i) ·Δ_H)·(χ_i=χ | U=l) = O(W_H/(l+1-i) ·Δ_H)·∑_χ∈ PCL(H,l+1-i)(χ_i=χ | U=l) = O(W_H/(l+1-i) ·Δ_H) Note that the number l of uncolored edges is bounded by the total weight of the uncolored edges, which is O(W_H/Δ_H) by Eq. <ref>. Now, plugging Eq. <ref> into Eq. <ref> yields [S_H | U=l] = ∑_i=1^l([Y_i | U=l]+O(1))  =  O(∑_i=1^l(W_H/(l+1-i)Δ_H+1) ) = O(W_H/Δ_H∑_i=1^l1/l+1-i)+O(l)  =  O(W_H/Δ_H·log n)+O(W_H/Δ_H) = O(W_H/Δ_H·(n/Δ_H+log n)). This completes the proof of Claim <ref>. Now we are ready to complete the proof of Lemma <ref>. Using Eq. <ref> and Claims <ref> and <ref> we conclude that [T(Repair H)] = ∑_l=0^m_H[T(Repair H) | U=l]·(U=l) = ∑_l=0^m_H(∑_i=1^l O([|P_i| | U=l]) + ∑_i=1^l O([(u_i) | U=l]))·(U=l) = ∑_l=0^m_HO(W_H/Δ_H·(n/Δ_H+log n)+W_H/Δ_H)·(U=l) = ∑_l=0^m_H O(W_H/Δ_H·(n/Δ_H+log n))·(U=l) = O(W_H/Δ_H·(n/Δ_H+log n)) ·∑_l=0^m_H(U=l)  = O(W_H/Δ_H·(n/Δ_H+log n)). For any level L_i, the expected total time spent on all the subgraphs in L_i due to calls to Algorithm while running on G is bounded by O(m+W_i+m/Δ_i·(n/Δ_i+log n)). By Claim <ref>, Claim <ref> and Lemma <ref>, the expected total time spent on all subgraphs in L_i due to calls to Algorithm is bounded by ∑_H∈ L_iO(m_H+W_H/Δ_H·(n/Δ_H+log n)) = O(∑_H∈ L_i(m_H+W_H/Δ_i·(n/Δ_i+log n))) = O(∑_H∈ L_im_H)+O(1/Δ_i·(n/Δ_i+log n)∑_H∈ L_iW_H) = O(m+W_i+m/Δ_i·(n/Δ_i+log n)). The expected runtime of Algorithm on G is bounded by O(W·min{log n,√(nlog n)/Δ}+mlog n). Remark. Claim <ref> yields W = O(m α), hence the expected runtime of the algorithm is O(m·α·min{log n,√(nlog n)/Δ}+mlog n), or equivalently, O(min{m Δ·log n,m√(nlog n)}·α/Δ+mlog n). If Δ≤ 2√(n/log n), the runtime is dominated by that of Algorithm , which by Lemma <ref> is O(Wlog n)  =  O(W·min{log n,√(nlog n)/Δ}+mlog n). In the complementary case Δ > 2√(n/log n), the recursion continues until the maximum degree is at most 2√(n/log n). By Claim <ref>, the maximum degree of every subgraph of L_i, for any i, lies in the range [2^-iΔ-2,2^-iΔ+ 2], hence the number R of recursion levels satisfies R=logΔ√(log n)/√(n)± O(1). 
Thus, the expected runtime of Algorithm Recursive-Color-Edges is obtained by summing the expected time spent over all recursion levels, which is at most
∑_i=0^R-1 O(m + ((W_i+m)/Δ_i)·(n/Δ_i + log n))
= ∑_i=0^R-1 O(m) + ∑_i=0^R-1 O((W_i/Δ_i)·(n/Δ_i)) + ∑_i=0^R-1 O((W_i/Δ_i)·log n) + ∑_i=0^R-1 O((m/Δ_i)·(n/Δ_i)) + ∑_i=0^R-1 O((m/Δ_i)·log n)
= O(Rm) + ∑_i=0^R-1 O((W/Δ)·(n/(2^-iΔ))) + ∑_i=0^R-1 O((W/Δ)·log n) + ∑_i=0^R-1 O((m/(2^-iΔ))·(n/(2^-iΔ))) + ∑_i=0^R-1 O((m/(2^-iΔ))·log n)
= O(Rm) + O((W·n/Δ^2)·∑_i=0^R-1 2^i) + O(W·R·log n/Δ) + O((mn/Δ^2)·∑_i=0^R-1 4^i) + O((m·log n/Δ)·∑_i=0^R-1 2^i)
= O(Rm) + O((W·n/Δ^2)·2^R) + O(W·R·log n/Δ) + O((mn/Δ^2)·4^R) + O((m·log n/Δ)·2^R)
= O(Rm) + O((W·n/Δ^2)·(Δ√(log n)/√(n))) + O(W·R·log n/Δ) + O((mn/Δ^2)·(Δ^2 log n/n)) + O((m·log n/Δ)·(Δ√(log n)/√(n)))
= O(m log n) + O(W√(n log n)/Δ) + O(W·log^2 n/Δ) + O(m log n) + O(m log n)
= O(m log n) + O(W√(n log n)/Δ) = O(W·min{log n, √(n log n)/Δ} + m log n),
where the last step uses Δ > 2√(n/log n). It is straightforward to achieve the same bound on the running time (up to a logarithmic factor) with high probability. (As mentioned, we do not attempt to optimize polylogarithmic factors in this work.) Thus we derive the following corollary, which concludes the proof of Theorem <ref>. One can compute a (Δ+1)-edge-coloring in any n-vertex m-edge graph of arboricity α and maximum degree Δ within a high probability runtime bound of O(min{mΔ·log^2 n, m√(n)·log^1.5 n}·α/Δ + m log^2 n).
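To summarize the section, here is a compact Python sketch of the full Partition/Recurse/Prune/Repair scheme described above, including the weight-based pruning and the termination condition Δ ≤ 2√(n/log n). It is our reconstruction of the algorithm's structure, not the authors' code; euler_partition_split, merge_with_disjoint_palettes, empty_coloring and color_edges are assumed helpers implementing the corresponding steps, and the bookkeeping of the per-vertex color tables is elided.

import math

def recursive_color_edges(G):
    """(Delta+1)-edge-color G.  Mirrors the four phases analyzed above."""
    n, Delta = G.num_vertices(), G.max_degree()
    if Delta <= 2 * math.sqrt(n / max(math.log(n), 1.0)):
        return color_edges(G, empty_coloring(G))          # base case: the Section-3 algorithm
    # Partition: Euler partition splits every vertex's edges almost evenly (degrees ~ Delta/2).
    G1, G2 = euler_partition_split(G)
    # Recurse: color the halves with disjoint palettes, at most Delta/2 + 2 colors each.
    chi = merge_with_disjoint_palettes(recursive_color_edges(G1), recursive_color_edges(G2))
    # Prune: uncolor the (at most three) color classes of smallest *weight*.
    classes = {}                                           # color -> list of edges
    for e, c in chi.items():
        classes.setdefault(c, []).append(e)
    weight = {c: sum(min(G.deg[u], G.deg[v]) for (u, v) in map(tuple, edges))
              for c, edges in classes.items()}
    excess = max(len(classes) - (Delta + 1), 0)
    for c in sorted(classes, key=lambda col: weight[col])[:excess]:
        for e in classes[c]:
            del chi[e]                                     # uncolor the whole class
            G.uncolored.mark_uncolored(e)
    # Repair: recolor the pruned edges with Color-Edges.
    return color_edges(G, chi)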
http://arxiv.org/abs/2307.01418v1
20230704005933
Sympathetic cooling and slowing of molecules with Rydberg atoms
[ "Chi Zhang", "Seth T. Rittenhouse", "Timur V. Tscherbul", "H. R. Sadeghpour", "Nicholas R. Hutzler" ]
physics.atom-ph
[ "physics.atom-ph" ]
[][email protected] Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125, USA Department of Physics, the United States Naval Academy, Annapolis, Maryland 21402, USA Department of Physics, University of Nevada, Reno, Nevada 89557, USA ITAMP, Center for Astrophysics | Harvard & Smithsonian Cambridge, Massachusetts 02138, USA Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125, USA We propose to sympathetically slow and cool polar molecules in a cold, low-density beam using laser-cooled Rydberg atoms. The elastic collision cross sections between molecules and Rydberg atoms are large enough to efficiently thermalize the molecules even in a low density environment. Molecules traveling at 100 m/s can be stopped in under 30 collisions with little inelastic loss. Our method does not require photon scattering from the molecules and can be generically applied to complex species for applications in precision measurement, quantum information science, and controlled chemistry. Sympathetic cooling and slowing of molecules with Rydberg atoms Nicholas R. Hutzler August 1, 2023 =============================================================== Cold and trapped molecules are unique quantum systems for a multitude of applications ranging from table-top search of new physics beyond the Standard Model <cit.> to quantum information processing <cit.>, which benefit from rich internal molecular structures. Recent measurements of the electron electric dipole moment (EDM) <cit.>, which rely on the strong internal electric field and the high-level control of molecule orientation, have constrained charge-parity violating new physics to ≳ 50 TeV energy scales. In addition, the molecular rotational structure provides tunable long-range dipole-dipole interaction in ground electronic states <cit.>, and may accommodate one error-corrected qubit in each molecule <cit.>, thereby significantly reducing the number of physical qubits in a quantum information processor. Laser cooling of molecules <cit.> and assembling ultracold atoms <cit.> are two of the main pathways to trapping molecules in the quantum regime, key to the next generations of new physics searches <cit.>, long-lived qubits and high-fidelity quantum gates. However, both methods require specific molecular structures, which limit the choice of molecules. Furthermore, laser cooling needs 10^4 - 10^5 photon scattering events and thus demands high precision spectroscopy, which is challenging and time-consuming for many heavy-atom containing <cit.> or large polyatomic molecules <cit.>. Molecules can also be cooled by collisions or interactions with another species. For example, in a cryogenic buffer gas beam (CBGB) <cit.>, cold helium gas thermalizes any molecular species to a few Kelvin temperature. Once in a trap, the molecules can be further cooled by opto-electrical Sisyphus cooling <cit.> or via sympathetic cooling with laser-cooled atoms  <cit.>. It would be advantageous to use laser-cooled species to sympathetically cool species in lower-density environments, such as in beams, since loading complex, reactive species into traps with sufficient atom and molecular density for efficient sympathetic cooling is often a challenge. However, due to the substantially lower density, collisions between molecules and ground state atoms are too rare for efficient cooling in a beam. 
Here, we propose a method to slow and cool polar molecules from a CBGB, load them into a trap, and cool to ultracold temperatures, without photon scattering from the molecule. The method relies on enhancing elastic collision cross sections between molecules and atoms via atomic Rydberg excitation, and thermalizing the molecules to laser cooled atoms. The sequence begins with a CBGB of both the molecules and atoms. Next, laser cool and slow the atoms while exciting a fraction of them to Rydberg states. The Rydberg atoms, because of their large size, collide with the molecules and bring them into approximate equilibrium with the atom velocity distribution. Once the molecules and atoms are both stopped, a position-dependent Rydberg excitation in an applied external field gradient provides collisions and compresses the molecule density for loading into a trap. Highly excited Rydberg atoms are orders of magnitude larger than ground state atoms in size, and this can compensate for the lower density and increase the collision rate. Early studies of molecule-Rydberg atom collisions at high collision energies (compared to the molecule-Rydberg atom interaction energies) measured vanishing cross sections at room temperature <cit.>. At cold temperatures, however, the collision energy becomes smaller or comparable to the molecule-Rydberg atom interaction energy. This leads to cross sections as large as the size of the Rydberg wavefunction, which is around six orders of magnitude larger than the ground states for a typical Rydberg state, and thus will enhance the collision rate for efficient sympathetic cooling. Our method has two requirements, which we will show to be generic for polar molecules. First, we require a mean free path which is short compared to the size of the atomic cloud, so that the molecules are thermalized before diffusing out of the cloud. This requires a deep atom-molecule interaction potential that is comparable to or larger than the collision energy in a CBGB (∼ 10 m/s in the moving frame, which corresponds to ∼ h × 10 GHz for heavy molecules of ∼ 200 atomic mass units, where h is Planck's constant). Second, we require that the loss probability during collisions is not too high. We will show that fewer than ∼ 30 collisions are needed to bring a molecule from a few Kelvin and ∼ 100 m/s forward velocity to ultracold temperature in a trap. This requirement is much less stringent than those imposed by the previous proposals of sympathetic cooling <cit.>, and we shall show that the loss probability per collision is sufficiently low in the general case. When a molecule is outside of a Rydberg atom, the dominant interaction is the dipole-dipole interaction <cit.>. This interaction has been used to detect and manipulate the state of the molecule <cit.>, and has been proposed for cooling trapped molecules in ∼mK temperature regime <cit.> and for entangling molecules in optical tweezers <cit.>. However, the dipole-dipole interaction is ≪ h× 100 MHz≈ k_B × 5 mK, where k_B is Boltzmann's constant. This is negligible compared to the collision energy in the ∼K temperature regime, which is of interest in this work. The picture changes drastically when the molecule is near the edge or inside the Rydberg wavefunction. As illustrated in Fig. <ref>(a), the separation between the molecule and the Rydberg core R is comparable to or smaller than the electron-core distance r, and the electron can be very close to the molecule. 
As a result, the dominant perturbation of the molecule on the Rydberg atom comes from the charge-dipole interaction with the Rydberg electron, H_I = ed/|R-r|^2 with d the molecule frame dipole moment. This interaction has been intensively studied in systems of ultralong range Rydberg molecules <cit.>. It couples and hence shifts the atom-molecule pair states strongly. The coupling strength is proportional to d and the electron's probability density which scales as (n^*)^-3, with n^* the effective principal quantum number. In Fig. <ref>(b), we show the interaction potentials between the CH molecule (d≈ 1.46 D) and the Li Rydberg atom as an example. We choose this atom-molecule pair since the small dipole moment of the molecule enables accurate computations; however, this method generalizes to more complex species for which exact calculations might be impractical. For the near-degenerate manifold of high angular momentum states, the pair states are well hybridized and the potentials are ∼ 10 GHz deep. The potentials for the S Rydberg states are weaker but still exceed GHz inside the Rydberg radius. For molecules with larger dipole moment, especially for d≳ 1.6 D (Fermi-Teller critical dipole <cit.>), the non-degenerate pair states with very large energy differences are also well hybridized and the Rydberg electron becomes highly localized around the molecule. The potential energy curves become complex, however; on average, interaction potentials as deep as the quantum defects (≳ 20 GHz) can be expected. More details about the interaction between large-dipole molecules and Rydberg atoms can be found in the Supplemental Material. The range of the ≳GHz potential is similar to the radius of the Rydberg wave function R_n ≈ 2(n^*)^2 a_0, where a_0 is the Bohr radius. The relative collision velocity (in the center of mass frame) in a CBGB is ≲ 10 m/s, which roughly corresponds to ≲ h×0.5 GHz collision energy for the CH molecule and the Li atom. This is much less than the potential depth, therefore the cross section is expected to be nearly the size of the potential, ∼π R_n^2. Even for heavier molecules with up to ∼ 200 atomic mass units, the ∼ h×10 GHz deep potentials are also sufficient for large cross sections. As an example, the scattering cross section of CH and Li is calculated and plotted in Fig. <ref>(c). For n^* ≈ 30, the cross section is around four orders of magnitude larger than that of the helium atom, the most common buffer gas. This large cross section enables a short mean free path even in the dilute beam outside of the buffer gas cell. As we will show later, molecules at ∼ 100 m/s initial velocity can be stopped and trapped by fewer than 30 collisions with decelerating Rydberg atoms of the same initial velocity. This enhancement of collision cross-section is general, and can be used for a variety of experimental goals. We now discuss a specific approach to implementing this method to slow, stop, trap, and cool molecules from a CBGB without laser cooling them. In a cloud of atoms, we consider the laser coupling scheme shown in Fig. <ref>(a) for Rydberg excitation and for Zeeman slowing of the atom. The excited state |e⟩ and Rydberg state |r⟩ are coupled by a blue detuned Rydberg laser with coupling strength Ω_2 and detuning Δ, the dressed eigenstate with dominant Rydberg character is |r̃⟩=cosθ|r⟩ - sinθ|e⟩, with the mixing angle θ given by tan2 θ = -Ω_2 /Δ. 
We consider Δ≫̸Ω_2 > Γ_e, as the Rydberg state linewidths (≲ 100 kHz) are much less than the low-lying excited state linewidth, therefore the linewidth of the dressed Rydberg state |r̃⟩ is dominated by the contribution from |e⟩, and it primarily decays to the ground state |g⟩. For most alkali (Γ_e ≈ 2π× 5 MHz) and alkaline earth (Γ_e ≈ 2π× 30 MHz) atoms, the |r̃⟩ state linewidth Γ can be tuned to around 2π× 1 MHz by mixing a few percent or less |e⟩ population. The effective coupling |g⟩↔|r̃⟩ can be tuned to around Ω≈Ω_1 Ω_2 / 2Δ = 2π× 1 MHz, yielding a value for the saturation parameter s =Ω^2/2/Δ^2 + Γ^2/4≈ 1. These parameters are suitable for most alkali and alkaline earth atoms. The time scale of an oscillation between |g⟩↔|r̃⟩ is much longer than the collision time (≪ 0.1 μ s, the time during which the molecule is inside the Rydberg wavefunction). In the Zeeman slower, the two lasers Ω_1 and Ω_2 are counter-propagating and their Doppler shifts on |g⟩↔|e⟩ and |e⟩↔|r⟩ transitions are thus opposite. A position-dependent magnetic field shifts |e⟩ (typically a P state) differently from |g⟩ and |r⟩ (S states) and thus can compensate the opposite Doppler shifts in |g⟩↔|e⟩ and |e⟩↔|r⟩ simultaneously. As a result, the Rydberg excitation scheme is not affected by the Zeeman slower for the target velocity class, and we can slow the atomic beam while maintaining a fixed Rydberg excitation fraction. In steady state with these couplings, the mean free path, l≈(σρ)^-1 where σ is the elastic collision cross section and ρ is the Rydberg atom density, is calculated and plotted in Fig. <ref>(b) as a function of total atom density (in both ground and Rydberg states) and principal quantum number. For low Rydberg states the dressed Rydberg population can reach 25% (for s=1), while for high Rydberg states it is limited by the Rydberg blockade effect <cit.>. The blue dashed line marks the parameters where Rydberg blockade starts to limit the density of atoms in Rydberg states. The collision cross section scales geometrically as (n^*)^4, until the interaction potential is too weak, but the probability of Rydberg excitation can be suppressed by Rydberg blockade, which scales as (n^*)^7. As a result, the optimal density and principal quantum number are near the blockade regime. The density of atoms made inside the buffer gas cell is typically 10^13 or 10^14 cm^-3 <cit.>. After exiting the cell, the density decreases rapidly because of expansion and collision with the buffer gas. We propose to first laser cool the atoms in all three dimensions to stop the expansion and keep the density high. The cooling needs to be in the moving frame to maintain the overlap between cooled atoms and uncooled molecules before subsequent Rydberg excitation (for collisions). At around 100 mm outside the cell downstream the atom density is 10^10 or 10^11 cm^-3 and the buffer gas collision rate is low enough for laser cooling. Normal Doppler laser cooling would not work since the atomic cloud is optically thick; instead, we consider polarization gradient cooling <cit.>. Cooling in the moving frame can be achieved by detuning the longitudinal cooling laser frequencies. Another option is to use the magneto-optical force to cool and compress the density of atoms in the two transverse directions and apply polarization gradient cooling (in the moving frame) in the forward direction. Typical polarization gradient cooling can cool the atoms to T_A < 100 μ K, after about 1 cm <cit.>, and then the atom density will stay at 10^10 - 10^11 cm^-3. 
Subsequently, the cloud of ultracold atoms and uncooled molecules enters the Zeeman slower, where the atoms are excited to |r̃⟩ for both slowing and collisions. The mean acceleration on the atoms is a≈s/2/1+sh Γ/m_A λ, where m_A is the mass of the atom and λ is the wavelength of the transition. During the slowing process, molecules are also slowed by collisions with Rydberg atoms. In the moving frame of the ultracold atoms, the molecules have initial kinetic energies of ∼ 1 K and are constantly accelerated by a, relative to the atoms until the cloud is stopped. The optimal mean free path l can reach 2 to 3 mm (see Fig. <ref>[b]). For atoms and molecules with similar masses, the expected molecule kinetic energy after one collision is T_M,f≈ (T_M,i + T_A)/2, where T_M,i is the kinetic energy before the collision. We assume the atoms are not heated up by the collisions since there are typically orders of magnitude more atoms than molecules <cit.>. In steady state, the kinetic energy loss of the molecules in the collision T_M,i-T_M,f≈ T_M,f is similar to the kinetic energy gained between two successive collisions by acceleration m_M a l, with m_M the molecule mass (and m_M ≈ m_A). Therefore, the collision energy T_M,i≈ 2 m_M a l ≈s/1+sl/λ h Γ. The collision energy is independent of the mass, and for typical parameters we choose (s≈ 1, Γ≈ 2π× 1 MHz) it is around h× 2 GHz, which is low enough for the interaction potential between most polar molecules and Rydberg states, though it can be further reduced by using smaller s or Γ. The laser slowing distance of the entire cloud is v^2/2a, which is proportional to the atom mass, is 0.5 m for heavy atoms with ∼ 200 atomic mass unit and an initial velocity of 100 m/s. During the slowing process, on average a molecule collides N ≈ 25 times with Rydberg atoms. This leads to a diffusion distance of l √(N)≈ 1 cm (shown in Fig. <ref>[b]), which is less than the typical size of the ultracold atomic cloud (∼ 3 cm). On average, half of the molecules in the ≈ 1 cm outer shell of the cloud may diffuse out. This results in a ≲ 30% loss assuming uniform initial distribution. Although we assume the initial forward velocity is ∼ 100 m/s, we note that slow beams with <50 m/s from CBGB have been experimentally demonstrated <cit.>, this allows for a shorter Zeeman slower and thus less diffusion loss. After Zeeman slowing, the molecules can be loaded into a magnetic trap <cit.>, or any other trap which is sufficiently deep. If further cooling is needed, the molecules could be directly laser-cooled if they can scatter a sufficient number of photons, though now with less stringent requirements on vibrational closure due to the fact that they are now already stopped. Alternatively we propose a method to capture and cool both the atoms and molecules with magneto-optical trap (MOT) but without scattering photons from the molecules. Atoms are excited to |r̃⟩ (see Fig. <ref>) on the edge of the MOT, which provides a position-dependent force to confine the atoms. Molecules can collide with the shell of Rydberg atoms (|r̃⟩) so that they are confined and further thermalized. In the meantime, as we slowly vary the frequencies of the lasers and shrink the Rydberg shell, the molecular density is compressed. After N more collisions, the molecules have temperature T_M(N) ≈ T_A + T_M(0) e^-N/2, where T_M(0) is the molecule temperature after Zeeman slowing. Within seven collisions, the molecules are thermalized to ≲ 1 mK. 
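A quick numerical check of the estimates above (a back-of-the-envelope Python script; the values s ≈ 1, Γ ≈ 1 MHz, l ≈ 2.5 mm and an optical wavelength λ ≈ 700 nm are representative numbers assumed for illustration, not additional data from this work):

import math

h, kB, a0 = 6.626e-34, 1.381e-23, 5.29e-11   # Planck constant (J s), Boltzmann (J/K), Bohr radius (m)

# Geometric cross section of an n* ~ 30 Rydberg state: sigma ~ pi * (2 n*^2 a0)^2
n_star = 30
sigma = math.pi * (2 * n_star**2 * a0) ** 2
print(f"Rydberg geometric cross section ~ {sigma * 1e4:.1e} cm^2")

# Collision energy in the decelerating frame: T ~ (s/(1+s)) * (l/lambda) * h * Gamma
s, Gamma = 1.0, 1.0e6          # saturation parameter, dressed-state linewidth (Hz)
l, lam = 2.5e-3, 700e-9        # mean free path (m), assumed transition wavelength (m)
T_coll = (s / (1 + s)) * (l / lam) * h * Gamma
print(f"collision energy ~ h x {T_coll / h / 1e9:.1f} GHz ~ {T_coll / kB * 1e3:.0f} mK")

# Random-walk diffusion of a molecule during slowing, after N ~ 25 collisions
N = 25
print(f"diffusion distance ~ l*sqrt(N) = {l * math.sqrt(N) * 1e3:.0f} mm")

These reproduce the quoted scales: a cross section of order 10^-10 cm^2, a collision energy of roughly h x 2 GHz, and a diffusion distance of about 1 cm.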
The shell thickness is determined by the linewidth of |r̃⟩ and the magnetic field gradient. We need a large MOT <cit.> with a thick shell of Rydberg atoms, for instance d ≈ 1 cm. The probability of molecules inside the shell diffusing out during the compression is 1-e^-√(N) l/d/2 ≈ 20%. After the compression, the molecules can be loaded into an optical dipole, magnetic, or other trap. By leveraging the large elastic collision rate between molecules and Rydberg atoms, we have shown that we can slow, trap, and cool molecules to ultracold temperatures without scattering photons from the molecule. However, there can be unwanted side-effects of inelastic collisions, including rotational state changes and charge transfer. We argue that neither of these are likely to be a major limitation in the general case. First, we consider rotational state changes. Like elastic collisions with helium <cit.>, we expect rotational states to thermalize upon collisions with non-negligible probability, resulting in a thermal rotational distribution, similar to the initial, thermalized, buffer gas cell rotational distribution. The collisions are unlikely to excite high rotational states (N≳ 5) because of their high energies, and the interaction potential and the cross section are highly independent of rotational state for low N, therefore all rotational states should be cooled. However, the rotational population may have an equivalent temperature of a few K, though that can be compressed by “rotational cooling” optical pumping methods <cit.> with a few photon scatters and negligible heating, or simply used as-is. Second, there could be charge transfer collisions <cit.>, whereby the Rydberg electron migrates from the atom to the molecule. These collisions have been studied both theoretically and experimentally <cit.>, and have been observed only for molecules with d >2.5 D dipole moments. Furthermore, the cross section depends on the principal quantum number and normally has a peak. At several principal quantum numbers away from the peak, the charge transfer loss cross section is typically ≪ 10^-12cm^2, more than two orders of magnitude smaller than the elastic cross section. As a result, the inelastic collisions should have negligible loss effects, and more than half of the molecules from the 100 m/s distribution should survive in an ultracold trap, making Rydberg atom sympathetic cooling more efficient than laser cooling with only a 10^-5 photon-scattering leakage. In summary, we have shown that polar molecules can be slowed and cooled by the collisions with laser cooled Rydberg atoms. Our method does not require scattering photons from the molecule, the interaction with Rydberg atom arises from the molecular dipole moment, and as a result, the method works generically for polar molecules. Our method provides a pathway into traps at ultracold temperatures for many molecules that do not have laser cycling transitions or that are hard to cool. This will significantly boost a host of applications using ultracold molecules, such as quantum information and table-top searches for new physics. We thank Yi Zeng, Ashay Patel, Weibin Li, Lan Cheng, Tim Steimle, Zack Lasner, Mike Tarbutt, Rosario Gonzalez-Ferez and Gerard Higgins for helpful discussions. The work at the California Institute of Technology has been supported by Gordon and Betty Moore Foundation Award GBMF7947 and Alfred P. Sloan Foundation Award G-2019-12502. C.Z. acknowledges support from the David and Ellen Lee Postdoctoral Fellowship at Caltech. T.V.T. 
gratefully acknowledges support from the NSF CAREER award No. PHY-2045681. H.R.S. acknowledges support from the NSF through a grant for ITAMP at Harvard University. Correspondence and requests for materials should be addressed to C.Z. (email: [email protected]). 73 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Safronova et al.(2018)Safronova, Budker, DeMille, Kimball, Derevianko, and Clark]Safronova2018 author author M. S. Safronova, author D. Budker, author D. DeMille, author D. F. J. Kimball, author A. Derevianko, and author C. W. Clark, https://doi.org/10.1103/RevModPhys.90.025008 journal journal Rev. Mod. Phys. volume 90, pages 025008 (year 2018)NoStop [Hudson et al.(2011)Hudson, Kara, Smallman, Sauer, Tarbutt, and Hinds]Hudson2011 author author J. J. Hudson, author D. M. Kara, author I. J. Smallman, author B. E. Sauer, author M. R. Tarbutt, and author E. A. Hinds, @noop journal journal Nature volume 473, pages 493 (year 2011)NoStop [Baron et al.(2014)Baron, Campbell, DeMille, Doyle, Gabrielse, Gurevich, Hess, Hutzler, Kirilov, Kozyryev, O'Leary, Panda, Parsons, Petrik, Spaun, Vutha, and West]Baron2014 author author J. Baron, author W. C. Campbell, author D. DeMille, author J. M. Doyle, author G. Gabrielse, author Y. V. Gurevich, author P. W. Hess, author N. R. Hutzler, author E. Kirilov, author I. Kozyryev, author B. R. O'Leary, author C. D. Panda, author M. F. Parsons, author E. S. Petrik, author B. Spaun, author A. C. Vutha, and author A. D. West, @noop journal journal Science volume 343, pages 269 (year 2014)NoStop [Andreev et al.(2018)Andreev, Ang, DeMille, Doyle, Gabrielse, Haefner, Hutzler, Lasner, Meisenhelder, O’Leary, Panda, West, West, and Wu]Andreev2018 author author V. Andreev, author D. G. Ang, author D. DeMille, author J. M. Doyle, author G. Gabrielse, author J. Haefner, author N. R. Hutzler, author Z. Lasner, author C. Meisenhelder, author B. R. O’Leary, author C. D. Panda, author A. D. West, author E. P. West, and author X. Wu, https://doi.org/10.1038/s41586-018-0599-8 journal journal Nature volume 562, pages 355 (year 2018)NoStop [Roussy et al.(2022)Roussy, Caldwell, Wright, Cairncross, Shagam, Ng, Schlossberger, Park, Wang, Ye, and Cornell]Roussy2023 author author T. S. Roussy, author L. Caldwell, author T. Wright, author W. B. Cairncross, author Y. Shagam, author K. B. Ng, author N. Schlossberger, author S. Y. Park, author A. Wang, author J. Ye, and author E. A. Cornell, https://doi.org/10.48550/ARXIV.2212.11841 title A new bound on the electron's electric dipole moment (year 2022), https://arxiv.org/abs/2212.11841 arXiv:2212.11841 NoStop [Truppe et al.(2013)Truppe, Hendricks, Tokunaga, Lewandowski, Kozlov, Henkel, Hinds, and Tarbutt]Truppe2013 author author S. Truppe, author R. J. Hendricks, author S. K. Tokunaga, author H. J. Lewandowski, author M. G. Kozlov, author C. Henkel, author E. A. Hinds, and author M. R. Tarbutt, https://doi.org/10.1038/ncomms3600 journal journal Nat. Commun. volume 4, pages 2600 (year 2013)NoStop [Leung et al.(2023)Leung, Iritani, Tiberi, Majewska, Borkowski, Moszynski, and Zelevinsky]Leung2023 author author K. H. Leung, author B. Iritani, author E. Tiberi, author I. Majewska, author M. Borkowski, author R. Moszynski, and author T. Zelevinsky, https://doi.org/10.1103/PhysRevX.13.011047 journal journal Phys. Rev. 
Supplemental Materials: Sympathetic cooling and slowing of molecules with Rydberg atoms

In the limit where a diatomic molecule can be treated as a point-like dipole, the Rydberg atom-molecule interaction potential is dominated by the charge-dipole interaction V̂ = -d⃗·E⃗^ele, where d⃗ is the dipole moment operator of the molecule and E⃗^ele is the electric field of the Rydberg electron at the location of the molecule. In the case of a Λ-doublet molecule such as CH, the dipole moment operator is simply a 2×2 matrix that represents the coupling, by an external field, of two energetically nearby states of opposite parity; such systems have been studied extensively in previous work <cit.>. In the case of a rigid-rotor-like molecule (such as YbOH), the dipole moment operator is expanded in spherical harmonics in the molecular coordinate. The Rydberg electron electric field is expanded in the basis of Rydberg states <cit.> with the quantization axis along the inter-molecular axis, giving
The Rydberg electron electric field is expanded in the Basis of Rydberg states <cit.> with a quantization axis along the along the inter-molecular axis giving E^ele_z,n'l',nl = e∑_l”=|l-l'|^l+l'√((2l+1)(2l'+1))[ l' l” l; 0 0 0 ]^2 ×(∫_0^R r^l”/R^l”+2ψ^*_n'l'(r)ψ_nl(r)dr+∫_R^∞R^l”-1/r^l”+2ψ^*_n'l'(r)ψ_nl(r)dr) where · · ·· · · is a 3j-symbol and ψ_nl is a radial Rydberg wavefunction for principle quantum number n and angular momentum l. This form of the matrix element is found through a multipole expansion of the electron electric field. In using this expansion, it is assumed that the molecule is a perfect dipole, with no short-range structure. The contribution of the short-range electron-molecule interaction can be approximated using a contact interaction with a scattering length set by the electron affinity of the molecule <cit.>. As the dipole moment of the molecule increases, an increasing number of Rydberg states are admixed into the electronic state of the system allowing for the Rydberg electron to become more localized. The larger the dipole moment, the the larger then number of Rydberg states that are mixed, further localizing the electron and deepening the Born-Oppenheimer potentials. As the dipole moment approaches the Fermi-Teller critical dipole, d∼ 1.6 Debye <cit.>, the number of Rydberg states needed to converge the potentials sharply increases (as seen in <cit.>). When the dipole moment exceeds the critical value the electron-dipole binding energy diverges, and a short-range repulsive cutoff is necessary to cut-off this unphysical behavior. The inclusion of such a cut-off is beyond the scope of this work. However, we can speculate that the potential depths in such a system would be limited by the depth of the most weekly bound, long-range, electronic state of the dipole-bound molecular anion producing potentials with depth at least in the 10-100 GHz range, ample to produce the large elastic scattering cross section needed for the cooling method described in this work. We also note that the method in ref. <cit.> may be used to estimate the order of magnitude of the interaction potential depth by modeling the molecule as a perturber that dresses the Rydberg electron and acquires an effective charge. This method may be useful for molecules with supercritical dipole moments. 8 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Rittenhouse and Sadeghpour(2010)]Rittenhouse2010 author author S. T. Rittenhouse and author H. Sadeghpour, @noop journal journal Physical Review Letters volume 104, pages 243002 (year 2010)NoStop [Rittenhouse et al.(2011)Rittenhouse, Mayle, Schmelcher, and Sadeghpour]Rittenhouse2011 author author S. T. Rittenhouse, author M. Mayle, author P. Schmelcher, and author H. Sadeghpour, @noop journal journal J. Phys. B volume 44, pages 184005 (year 2011)NoStop [Mayle et al.(2012)Mayle, Rittenhouse, Schmelcher, and Sadeghpour]Mayle2012 author author M. Mayle, author S. Rittenhouse, author P. Schmelcher, and author H. Sadeghpour, @noop journal journal Physical Review A volume 85, pages 052511 (year 2012)NoStop [González-Férez et al.(2015)González-Férez, Sadeghpour, and Schmelcher]GonzalezFerez2015 author author R. González-Férez, author H. Sadeghpour, and author P. 
References (Supplemental Material)

[1] S. T. Rittenhouse and H. Sadeghpour, Phys. Rev. Lett. 104, 243002 (2010).
[2] S. T. Rittenhouse, M. Mayle, P. Schmelcher, and H. Sadeghpour, J. Phys. B 44, 184005 (2011).
[3] M. Mayle, S. Rittenhouse, P. Schmelcher, and H. Sadeghpour, Phys. Rev. A 85, 052511 (2012).
[4] R. González-Férez, H. Sadeghpour, and P. Schmelcher, New J. Phys. 17, 013021 (2015).
[5] E. Kuznetsova, S. T. Rittenhouse, H. R. Sadeghpour, and S. F. Yelin, Phys. Rev. A 94, 032325 (2016).
[6] R. González-Férez, J. Shertzer, and H. Sadeghpour, Phys. Rev. Lett. 126, 043401 (2021).
[7] J. E. Turner, Am. J. Phys. 45, 758 (1977).
[8] P. Giannakeas, M. T. Eiles, F. Robicheaux, and J. M. Rost, Phys. Rev. Lett. 125, 123401 (2020).
http://arxiv.org/abs/2307.00992v2
20230703131800
New Method for Measuring the Ratio $μ_p G_E/G_M$ Based on the Polarization Transfer from the Initial Proton to the Final Electron in the $e \vec p \to \vec e p$ Process
[ "M. V. Galynskii", "Yu. M. Bystritskiy", "V. M. Galynsky" ]
hep-ph
[ "hep-ph", "hep-ex" ]
1. ∋[email protected] Institute for Power and Nuclear Research – Sosny, National Academy of Sciences of Belarus, 220109 Minsk, [email protected] Institute for Nuclear Research, 141980 Dubna, Moscow Region, RussiaBelarusian State University, Minsk 220030, Belarus In this letter, we propose a new method for measuring the Sachs form factors ratio (R =μ_p G_E/G_M) based on the transfer of polarization from the initial proton to the final electron in the elastic e p⃗→e⃗ p process, in the case when the axes of quantization of spins of the target proton at rest and of the scattered electron are parallel, i.e., when an electron is scattered in the direction of the spin quantization axis of the proton target. To do this, in the kinematics of the SANE collaboration experiment (2020) on measuring double spin asymmetry in the e⃗p⃗→ e p process, using Kelly (2004) and Qattan (2015) parametrizations, a numerical analysis was carried out of the dependence of the longitudinal polarization degree of the scattered electron on the square of the momentum transferred to the proton, as well as on the scattering angles of the electron and proton. It is established that the difference in the longitudinal polarization degree of the final electron in the case of conservation and violation of scaling of the Sachs form factors can reach 70%. This fact can be used to set up polarization experiments of a new type to measure the ratio R. New Method for Measuring the Ratio μ_p G_E/G_M Based on the Polarization Transfer from the Initial Proton to the Final Electron in the e p⃗→e⃗ p Process V. M. Galynsky August 1, 2023 ========================================================================================================================================================== Introduction.—Experiments on the study of electric (G_E) and magnetic (G_M) proton form factors, the so-called Sachs form factors (SFF), have been conducted since the mid-1950s <cit.> in the process of elastic scattering of unpolarized electrons off a proton. At the same time, all experimental data on the behavior of SFF were obtained using the Rosenbluth technique (RT) based on the use of the Rosenbluth cross section (in the approximation of the one-photon exchange) for the ep → ep process in the rest frame of the initial proton <cit.>dσ/dΩ_e= α^2E_2cos^2(θ_e/2)/4E_1^3sin^4(θ_e/2)1/1+τ_p(G_E^2 +τ_p/ε G_M^2). Here τ_p=Q^2/4m^2, Q^2=4E_1 E_2sin^2(θ_e/2) is the square of the 4-momentum transferred to the proton; m is the mass of the proton; E_1 and E_2 are the energies of the initial and final electrons; θ_e is the electron scattering angle; ε=[1+2(1+τ_p)tan^2(θ_e/2)]^-1 is the degree of linear (transverse) polarization of the virtual photon <cit.>; and α=1/137 is the fine structure constant. For large values of Q^2, as follows from formula (<ref>), the main contribution to the cross section of the ep→ ep process is given by the term proportional to G_M^ 2, which is already at Q^2⩾ 2 ^2 leads to significant difficulties in extracting the contribution of G_E^ 2<cit.>. With the help of RT, the dipole dependence of the SFF on Q^2 in the region Q^2 ⩽ 6 ^2 was established <cit.>. As it turned out, G_E and G_M are related by the scaling ratio G_M≈μ_p G_E (μ_p=2.79– the magnetic moment of the proton), and for their ratio R ≡μ_p G_E/G_M, the approximate equality R ≈ 1 is valid. 
In the paper of Akhiezer and Rekalo <cit.>, a method for measuring the ratio of R is proposed based on the phenomenon of polarization transfer from the initial electron to the final proton in the e⃗ p→ ep⃗ process. Precision JLab experiments <cit.>, using this method, found a fairly rapid decrease in the ratio of R with an increase in Q^2, which indicates a violation of the dipole dependence (scaling) of the SFF. In the range 0.4 ^2 ⩽ Q^2 ⩽ 5.6 ^2, as it turned out, this decrease is linear. Next, more accurate measurements of the ratio R carried out in <cit.> in a wide area in Q^2 up to 8.5 ^2 using both the Akhiezer–Rekalo method <cit.> and the RT <cit.>, only confirmed the discrepancy of the results. In the SANE collaboration experiment <cit.> (2020), the values of R were obtained by the third method <cit.> by extracting them from the results of measurements of double spin asymmetry in the e⃗p⃗→ e p process in the case, when the electron beam and the proton target are partially polarized. The extracted values of R in <cit.> are consistent with the experimental results <cit.>. In <cit.>, the 4th method of measuring R is proposed based on the transfer of polarization from the initial proton to the final one in the ep⃗→ ep⃗ process in the case when their spins are parallel, i.e. when the proton is scattered in the direction of the quantization axis of the spin of the resting proton target. In this paper, the 5th method of measuring the ratio of R is proposed based on the transfer of polarization from the initial proton to the final electron in the process ep⃗→e⃗ p in the case when their spins are parallel, i.e. when the electron is scattered in the direction of the spin quantization axes of the resting proton target. The helicity and diagonal spin bases.—The spin 4-vector s=(s_0, s) of the fermion with 4-momentum p (p^2=m^2) satisfying the conditions of orthogonality (sp=0) and normalization (s^2 = - 1), is given by s=(s_0, s), s_0= c p/m, s = c + ( c p) p/m(p_0 +m ) , where c (c^2=1) is the axis of spin quantization. Expressions (<ref>) allow us to determine the spin 4-vector s=(s_0, s) by a given 4-momentum p=(p_0, p) and 3-vector c. On the contrary, if the 4-vector s is known, then the spin quantization axis c is given by c = s - s_0/p_0+m p, i.e. c and s for a given p uniquely define each other. At present, the most popular in high-energy physics is the helicity basis <cit.>, in which the spin quantization axis is directed along the momentum of the particle (c = n = p/| p|), while the spin 4-vector (<ref>) defined as s=(s_0, s) =(| v |, v_0 n), where v_0 and v are the time and space components of the 4-velocity vector v=p/m (v^2=1). For the process under consideration e(p_1)+ p (q_1,s_p_1) → e(p_2,s_e_2)+ p (q_2), where p_1, q_1 (p_2, q_2) are the 4-momenta of the initial (final) electrons and protons with masses m_0 and m, it is possible to project the spins of the initial proton and the final electron in one common direction given by <cit.> a = p_2/p_20 - q_1/q_10. Since the common axis of spin quantization (<ref>) defines the spin basis and is the difference of two three-dimensional vectors, the geometric image of which is the diagonal of the parallelogram, it is natural to call it the diagonal spin basis (DSB). In it, the spin 4-vectors of the initial proton s_p_1 and the final electron s_e_2 are given by s_p_1 = m^2 p_2 - ( q_1 p_2)q_1/m√( ( q_1p_2 )^2 - m^2 m_0^2 ), s_e_2 = ( q_1 p_2) p_2 - m_0^2 q_1/m_0√( ( q_1p_2 )^2 - m^2 m_0^2 ). 
Note that in the papers <cit.> was used the analogous DSB for the initial and final protons (with a common spin quantization axis a = q_2/q_20 - q_1/q_10) <cit.>. In the laboratory frame (LF), where the initial proton rests, q_1=(m, 0), the spin 4-vectors (<ref>), (<ref>) reduces to s_p_1=(0, n_2 ) , s_e_2= (|v_2|, v_20 n_2 ) , where n_2= p_2/| p_2|, v_2=(v_20, v_2)=p_2/m_0. Using the explicit form of the spin 4-vectors s_p_1 and s_e_2 (<ref>) and formulas (<ref>) or (<ref>), it is easy to verify that the quantization axes of the initial proton and the final electron spins in the LF have the same form and coincide with the direction of the final electron momentum a= c = c_p_1 = c_e_2= n_2= p_2/| p_2| . In the ultrarelativistic limit, when the electron mass can be neglected (i.e. at p_10, p_20≫ m_0), the spin 4-vectors (<ref>), (<ref>) reduces to s_p_1 = m^2 p_2 - ( q_1 p_2) q_1/m ( q_1p_2 ), s_e_2 = p_2/ m_0 . Below, in the ultrarelativistic limit, we present the main kinematic relations used in conducting numerical calculations of polarization effects in the ep⃗→e⃗ p process in the LF. Kinematics.—The energies of the final electron E_2 and proton E_2p are connected in the LF with the square of the momentum transferred to the proton Q^2=-q^2, q^2=(q_2-q_1)^2 as follows E_2=E_1-Q^2/2m, E_2p=m+Q^2/2m, E_2=E_1-2mτ_p, E_2p=m(1+2τ_p). The dependence of E_2 and Q^2 on the scattering angle of the electron θ_e in the LF has the form E_2(θ_e) = E_1/1+ (2E_1/m) sin^2 (θ_e/2), Q^2(θ_e) = 4 E_1^ 2sin^2 (θ_e/2)/1+ (2E_1/m) sin^2 (θ_e/2), where cos(θ_e)= p_1 p_2/| p_1| | p_2|. The dependence of E_2p and Q^2 on the scattering angle of the proton θ_p in the LF has the form E_2p(θ_p) = m (E_1+m)^2+E_1^ 2cos^2(θ_p)/(E_1+m)^2-E_1^ 2cos^2(θ_p) , Q^2(θ_p) = 4m^2 E_1^ 2cos^2(θ_p)/(E_1+m)^2-E_1^ 2cos^2(θ_p) , where cos(θ_p)= p_1 q_2/| p_1| | q_2|. The dependence of the scattering angles θ_e and θ_p on E_1 and Q^2 has the form θ_e=arccos(1-Q^2/2 E_1 E_2), θ_p=arccos(E_1+m/E_1√(τ_p/1+τ_p) ). In the elastic ep→ ep process an electron can be scattered by an angle of 0^∘⩽θ_e⩽ 180^∘, while the scattering angle of the proton θ_p varies from 90^∘ to 0^∘<cit.>. Possible values of Q^2 lie in the range 0 ⩽ Q^2 ⩽ Q^2_max, where Q^2_max=4mE_1^ 2/(m+2E_1) . The results of calculations of the dependence of the scattering angles of the electron and proton on the square of the momentum transferred to the proton Q^2 in the e p → e p process at electron beam energies E_1=4.725 and 5.895 in the SANE collaboration experiment <cit.> are presented by the graphs in Figure <ref>. They correspond to lines with labels θ_e4, θ_p4 and θ_e5, θ_p5. Information about the electron and proton scattering angles (in radians) at electron beam energies and values of Q^2 in the experiment <cit.> is presented in Table <ref>. It also contains the corresponding values for Q^2_max (<ref>). Cross section of the ep⃗→e⃗ p process.—In the one-photon exchange approximation, the differential cross section of the process (<ref>), calculated in an arbitrary reference frame in the DSB (<ref>), (<ref>), reads d σ_e p⃗→e⃗ p /d t = πα^2 /2λ_s (1+τ_p) |T|^2/t^2, |T|^2 = I_0+λ_p_1λ_e_2 I_1, I_0 = G_E^ 2 Y_1 + τ_p G_M^ 2 Y_2, I_1 = τ_p ( G_E G_M Y_3 + G_M^ 2 Y_4), where t=q^2, λ_s=4((p_1q_1)^2-m_0^2 m^2), λ_p_1 (λ_e_2) – the degree of polarization of the initial proton (of the final electron). 
Here the functions Y_i (i=1, … 4) defined as Y_1 = (p_+q_+)^2+q_+^2q_-^2, Y_2 = (p_+ q_+)^2-q_+^2(q_-^2+4 m_0^2), -Y_3 = 2 κ m^2 ((p_+q_+)^2+q_+^2(q_-^2-4 m_0^2 )) z^2, Y_4 = 2 (m^2 p_+ q_+ -κ q_+^2)(κ p_+ q_+ - m_0^2 q_+^2) z^2,    z= ( κ ^2 - m^2 m_0^2 )^-1/2 , κ=q_1p_2 . Expression (<ref>) for |T|^2 can be written as |T|^2=I_0+λ_p_1λ_e_2 I_1=I_0 (1+λ_e_2λ_e_2^f). Then the value of λ_e_2^f in (<ref>) is the longitudinal polarization degree transferred from the initial proton to the final electron in the ep⃗→e⃗ p process λ_e_2^f=λ_p_1I_1/I_0=λ_p_1τ_p (G_E G_M Y_3 + G_M^ 2 Y_4)/G_E^ 2 Y_1 + τ_p G_M^ 2 Y_2. Dividing the numerator and denominator in the last expression by Y_1 G_M^ 2 and introducing the experimentally measured ratio R≡μ_p G_E/G_M, we get λ_e_2^f=λ_p_1 μ_p τ_p ( (Y_3/Y_1) R+ μ_p (Y_4/Y_1))/ R^ 2 + μ_p^2 τ_p (Y_2/Y_1) . Inverting relation (<ref>), we obtain a quadratic equation with respect to R: α_0 R^2 -α_1 R + α_0 α_3-α_2=0 with the coefficients: α_0=λ_e_2^f / λ_p_1, α_1=τ_p μ_p Y_3/Y_1, α_2=τ_p μ_p^2 Y_4/Y_1, α_3=τ_p μ_p^2 Y_2/Y_1. Solutions to equation (<ref>) have the form: R=α_1 ±√(α_1^2-4α_0(α_0α_3-α_2))/2α_0. They allow us to extract the ratio R from the results of an experiment to measure the polarization transferred to the electron λ_e_2^f in the ep⃗→e⃗ p process in the case when the scattered electron moves in the direction of the spin quantization axis of the initial resting proton. In the ultrarelativistic limit, when the electron mass can be neglected, expressions (<ref>)–(<ref>) for Y_i (i=1, … 4) in LF are given by Y_1 =8m^2 (2 E_1E_2 - m E_- ), Y_2=8 m^2 (E_1^ 2 + E_2^ 2 + m E_-), Y_3=-(2m/E_2) Y_1 , Y_4=8 m^2 E_+ E_- (m-E_2)/E_2, where E_±=E_1 ± E_2. The formulas (<ref>)–(<ref>) were used to numerically calculate the Q^2-dependence of the longitudinal polarization degree of the scattered electron λ_e_2^f (<ref>) as well as the dependencies on the scattering angles of the electron and proton at electron beam energies (E_1=4.725 and 5.895) and the polarization degree of the proton target (P_t=λ_p_1=0.70) in the SANE collaboration experiment <cit.> as while conserving the scaling of the SFF in the case of a dipole dependence (R=R_d=1), and in case of its violation. In the latter case, the parametrization R=R_j from the paper <cit.> was used R_j = (1+0.1430 Q^2-0.0086 Q^4+0.0072 Q^6)^-1,    and also the parametrization of Kelly from <cit.>, formulas for which (R=R_k) we omit. The calculation results are presented by graphs in Figures <ref>, <ref>. Note that in these figures there are no lines corresponding to the parametrization of Kelly <cit.> since calculations using R_j and R_k give almost identical results. Results of numerical calculations.—Q^2-dependence of the longitudinal polarization degree of the scattered electron λ_e_2^f (<ref>) at the electron beam energies in the experiment <cit.> is presented by graphs in Figure <ref>, on which the lines Pd4, Pd5 (dashed) and Pj4, Pj5 (solid) are constructed for R=R_d and R=R_j (<ref>). At the same time, the red lines Pd4, Pj4 and the blue lines Pd5, Pj5 correspond to the energy of the electron beam E_1=4.725 and 5.895. For all lines in Figure <ref> the degree of polarization of the proton target P_t=0.70. As can be seen from Figure <ref>, the function λ_e_2^f(Q^2) (<ref>) takes negative values for most of the allowed values and has a minimum for some of them. On a smaller part of the allowed values adjacent to Q^2_max and amounting to approximately 9% of Q^2_max, it takes on positive values. 
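A direct numerical sketch of these lab-frame expressions (Python; energies in GeV, Q² in GeV²) shows how curves of this kind are generated for the dipole case R = 1 and for the R_j parametrization quoted above; the proton mass value and the printed Q² points are illustrative choices, not results taken from the figures or tables.

```python
import numpy as np

M, MU_P = 0.9383, 2.79            # proton mass (GeV) and magnetic moment

def R_j(Q2):
    """Parametrization quoted above (Q2 in GeV^2)."""
    return 1.0 / (1.0 + 0.1430 * Q2 - 0.0086 * Q2**2 + 0.0072 * Q2**3)

def lambda_e2f(Q2, E1, R, Pt=0.70):
    """Longitudinal polarization transferred to the electron, lab frame, electron mass neglected."""
    E2 = E1 - Q2 / (2.0 * M)
    tau = Q2 / (4.0 * M**2)
    Em, Ep = E1 - E2, E1 + E2
    Y1 = 8.0 * M**2 * (2.0 * E1 * E2 - M * Em)
    Y2 = 8.0 * M**2 * (E1**2 + E2**2 + M * Em)
    Y3 = -(2.0 * M / E2) * Y1
    Y4 = 8.0 * M**2 * Ep * Em * (M - E2) / E2
    num = MU_P * tau * ((Y3 / Y1) * R + MU_P * (Y4 / Y1))
    den = R**2 + MU_P**2 * tau * (Y2 / Y1)
    return Pt * num / den

for Q2 in (2.06, 5.66):
    print(Q2, lambda_e2f(Q2, 5.895, 1.0), lambda_e2f(Q2, 5.895, R_j(Q2)))
```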
At the boundary of the spectrum at Q^2=Q^2_max, the polarization transferred to the electron is equal to the polarization of the proton target, λ_e_2^f(Q^2_max)=P_t=0.70. Figure <ref> shows the angular dependence of the transferred to the electron polarization λ_e_2^f (<ref>) in the ep⃗→e⃗ p process at electron beam energies 4.725 and 5.895 in the experiment <cit.>. The degree of polarization of the proton target was taken the same for all lines: P_t=0.70. Panels (a) and (b) correspond to the dependence on the scattering angles of the electron (θ_e) and proton (θ_p), expressed in degrees. The parametrizations of Qattan <cit.> and Kelly <cit.> allow us to calculate the relative difference Δ_dj between the polarization effects in the ep⃗→e⃗ p process in the case of conservation and violation of the SFF scaling, as well as in the effects between these parametrizations Δ_jk: Δ_dj=|Pd-Pj/Pd|, Δ_jk=|Pj - Pk/Pj|, where P_d, P_j and P_k are the polarizations calculated by formula (<ref>) for λ_e_2^f when using the corresponding parametrizations R_d, R_j and R_k. The results of calculations of Δ_dj at electron beam energies of 4.725 and 5.895 are shown in Figure <ref>. It follows from the graphs in Figure <ref> that the relative difference between the polarization transferred from the initial proton to the final electron in the ep⃗→e⃗ p process in the case of conserving and violation of the scaling of the SFF can reach 70%, which can be used to set up a polarization experiment by measuring the ratio R. Numerical values of the polarization transferred to the final electron in the ep⃗→e⃗ p process for the three considered parametrizations of the ratio R at E_1 and Q^2 used in the experiment <cit.>, are presented in Table <ref>. In it, the columns of values P_d, P_j and P_k correspond to the dipole dependence R_d, parametrizations R_j (<ref>) and R_k<cit.>; columns Δ_dj, Δ_jk correspond to the relative difference (<ref>) (expressed in percent) at electron beam energies of 4.725 and 5.895 and two values of Q^2 equal to 2.06 and 5.66 ^2. It follows from Table <ref> that the relative difference between Pj5 and Pd5 at Q^2=2.06 ^2 is 4.1% and between Pj4 and Pd4 it is 4.8%. At Q^2 = 5.66 ^2, the difference increases and becomes equal to 14.9 and 21.7%, respectively. Note that the relative difference Δ_jk between P_j and P_k for all E_1 and Q^2 in Table <ref> is less than 1%. Conclusion.—In this paper, we have considered a possible method for measuring the ratio R ≡μ_p G_E/G_M based on the transfer of polarization from the initial proton to the final electron in the ep⃗→e⃗ p process, in the case when their spins are parallel, i.e. when an electron is scattered in the direction of the spin quantization axis of the resting proton target. For this purpose, in the kinematics of the SANE collaboration experiment <cit.>, using the parametrizations of Qattan <cit.> and Kelly <cit.>, a numerical analysis was carried out of the dependence of the degree of polarization of the scattered electron on the square of the momentum transferred to the proton, as well as from the scattering angles of the electron and proton. As it turned out, the parametrizations of Qattan <cit.> and Kelly <cit.> give almost identical results in calculations. It is established that the difference in the degree of longitudinal polarization of the final electron in the case of conservation and violation of the SFF scaling can reach 70 %, which can be used to conduct a new type of polarization experiment to measure the ratio R. 
At present, an experiment to measure the longitudinal polarization degree transferred to an unpolarized electron in the ep⃗→e⃗ p process when it is scattered in the direction of the spin quantization axis of a resting proton seems quite real since a proton target with a high degree of polarization P_t = 70 ± 5% was created in principle and has already been used in the experiment <cit.>. For this reason, it would be most appropriate to conduct the proposed experiment at the setup used in <cit.> at the same P_t=0.70, electron beam energies E_1=4.725 and 5.895. The difference between conducting the proposed experiment and the one in <cit.> consists in the fact that an incident electron beam must be unpolarized, and the detected scattered electron must move strictly along the direction of the spin quantization axis of the proton target. In the proposed experiment, it is necessary to measure only the longitudinal polarization degree of the scattered electron, which is an advantage compared to the method <cit.> used in JLab-experiments. Acknowledgements.—This work was carried out within the framework of scientific cooperation Belarus-JINR and State Program of Scientific Research “Convergence-2025” of the Republic of Belarus under Projects No. 20221590 and No. 20210852. 60Hofstadter1958 R. Hofstadter, F. Bumiller, and M. R. Yearian, https://link.aps.org/doi/10.1103/RevModPhys.30.482Rev. Mod. Phys. 30, 482 (1958). Rosen M. N. Rosenbluth, https://doi.org/10.1103/PhysRev.79.615Phys. Rev. 79, 615 (1950). Dombey N. Dombey, https://doi.org/10.1103/RevModPhys.41.236Rev. Mod. Phys. 41, 236 (1969).Rekalo74 A. I. Akhiezer and M. P. Rekalo, Sov. J. Part. Nucl. 4, 277 (1974) (Phys. Element. Chastits Atom. Yadra. 4, 662 (1973) [in Russian]). AR A. I. Akhiezer and M. P. Rekalo, Electrodynamics of Hadrons (Naukova Dumka, Kiev, 1977) [in Russian]. GL97 M. V. Galynskii and M. I. Levchuk, https://www.researchgate.net/publication/232905696_Polarization_of_a_virtual_photon_in_the_reaction_ep_epg_e_p_—_e_XPhys. At. Nucl. 60, 1855 (1997) (Yad. Fiz. 60, 2028 (1997) [in Russian]). ETG15 S. Pacetti, R. Baldini Ferroli, and E. Tomasi-Gustafsson, https://doi.org/10.1016/j.physrep.2014.09.005Phys. Rept. 550-551, 1 (2015). Punjabi2015 V. Punjabi, C. F. Perdrisat, M. K. Jones, E. J. Brash, and C. E. Carlson, https://doi.org/10.1140/epja/i2015-15079-xEur. Phys. J. A 51, 79 (2015). Jones00 M. K. Jones, K. A. Aniol, F. T. Baker et al., https://dx.doi.org/10.1103/PhysRevLett.84.1398Phys. Rev. Lett. 84, 1398 (2000).Gay01 O. Gayou, K. Wijesooriya, A. Afanasev et al., https://dx.doi.org/10.1103/PhysRevC.64.038202Phys. Rev. C 64, 038202 (2001).Gay02 O. Gayou, K. A. Aniol, T. Averett et al., https://dx.doi.org/10.1103/PhysRevLett.88.092301Phys. Rev. Lett. 88, 092301 (2002).Pun05 V. Punjabi, C. F. Perdrisat, K. A. Aniol et al., https://doi.org/10.1103/PhysRevC.71.055202Phys. Rev. C 71, 055202 (2005). Puckett10 A. J. R. Puckett, E. J. Brash, M. K. Jones et al., https://doi.org/10.1103/PhysRevLett.104.242301Phys. Rev. Lett. 104, 242301 (2010).Puckett12 A. J. R. Puckett, E. J. Brash, O. Gayou et al., https://doi.org/10.1103/PhysRevC.85.045203Phys. Rev. C 85, 045203 (2012).Puckett17 A. J. R. Puckett, E. J. Brash, M. K. Jones et al., https://doi.org/10.1103/PhysRevC.96.055203Phys. Rev. C 96, 055203 (2017). Qattan2005 I. A. Qattan, J. Arrington, R. E. Segel et al., https://doi.org/10.1103/PhysRevLett.94.142301Phys. Rev. Lett. 94, 142301 (2005).Liyanage2020 A. Liyanage, W. Armstrong, H. 
Kang et al., https://doi.org/10.1103/PhysRevC.101.035206Phys. Rev. C 101, 035206 (2020). Donnelly1986 T. W. Donnelly and A. S. Raskin, https://doi.org/10.1016/0003-4916(86)90173-9Ann. Phys. 169, 247 (1986). JETPL2008 M. V. Galynskii, E. A. Kuraev, and Yu. M. Bystritskiy, https://doi.org/10.1134/S0021364008200034JETP Lett. 88, 481 (2008), https://arxiv.org/abs/0805.0233arXiv:0805.0233 [hep-ph]. JETPL18 M. V. Galynskii, https://doi.org/10.1134/S0021364019010089JETP Lett. 109, 1 (2019), https://arxiv.org/abs/1910.05267arXiv: 1910.05267 [hep-ph]. JETPL19 M. V. Galynskii and R. E. Gerasimov, https://doi.org/10.1134/S0021364019220077JETP Lett. 110, 646 (2019), https://arxiv.org/abs/2004.07896arXiv:2004.07896 [hep-ph]. JETPL2021 M. V. Galynskii, https://doi.org/10.1134/S0021364021090083JETP Lett. 113, 555 (2021), https://arxiv.org/abs/2107.08503arXiv:2107.08503 [hep-ph]. PEPAN2022 M. V. Galynskii, https://doi.org/10.1134/S1547477122010058Phys. Part. Nucl. Lett. 19, 26 (2022), https://arxiv.org/abs/2112.12022arXiv:2112.12022 [nucl-ex]. JETPL2022 M. V. Galynskii, https://doi.org/10.1134/S0021364022601804JETP Lett. 116, 420 (2022)https://arxiv.org/abs/2212.13431arXiv:2212.13431 [hep-ph]. Jacob M. Jacob and G. Wick, https://doi.org/10.1016/0003-4916(59)90051-XAnn. Phys. 7, 404 (1959). FIF70 F. I. Fedorov, https://doi.org/10.1007/BF01038044Theor. Math. Phys. 2, 248 (1970). GL F. I. Fedorov, The Lorentz Group (Nauka, Moscow, 1979) [in Russian]. Sik84 S. M. Sikach, Vestsi Akad. Nauk BSSR, Ser. Fiz. Math. Nauk, 2, 84 (1984) [in Russian]. GS98 M. V. Galynskii, S. M. Sikach, https://www.researchgate.net/publication/232769273_The_diagonal_spin_basis_and_calculation_of_processes_involving_polarized_particlesPhys. Part. Nucl. 29, 469 (1998), https://arxiv.org/abs/hep-ph/9910284arXiv:9910284 [hep-ph]. BLP V. B. Berestetskii, E. M. Lifshits, L. P. Pitaevskii, Course of Theoretical Physics, Vol. 4: Quantum Electrodynamics (Nauka, Moscow, 1989; Pergamon, Oxford, 1982). Qattan2015 I. A. Qattan, J. Arrington, and A. Alsaad, https://dx.doi.org/10.1103/PhysRevC.91.065203 Phys. Rev. C 91, 065203 (2015). Kelly2004 J. J. Kelly, https://dx.doi.org/10.1103/PhysRevC.70.068202 Phys. Rev. C 70, 068202 (2004).
http://arxiv.org/abs/2307.01286v3
20230703182850
Theories for correlating impedance computations with beam-based measurements in electron storage rings
[ "Demin Zhou", "Takuya Ishibashi", "Gaku Mitsuka" ]
physics.acc-ph
[ "physics.acc-ph" ]
[email protected] KEK, 1-1 Oho, Tsukuba 305-0801, Japan School of Accelerator Science, The Graduate University for Advanced Studies, SOKENDAI, Shonan Village, Hayama, Kanagawa 240-0193 Japan KEK, 1-1 Oho, Tsukuba 305-0801, Japan School of Accelerator Science, The Graduate University for Advanced Studies, SOKENDAI, Shonan Village, Hayama, Kanagawa 240-0193 Japan KEK, 1-1 Oho, Tsukuba 305-0801, Japan School of Accelerator Science, The Graduate University for Advanced Studies, SOKENDAI, Shonan Village, Hayama, Kanagawa 240-0193 Japan As a stationary solution of the Vlasov-Fokker-Planck equation, the Haissinski equation predicts the equilibrium line density of a bunch circulating in a storage ring for a given wake function. This paper shows that a few equations can be derived from the Haissinski equation in a self-consistent manner. These equations can be applied to electron storage rings to bridge the gap between impedance computations and beam-based measurements. Theories for correlating impedance computations with beam-based measurements in electron storage rings Gaku Mitsuka August 1, 2023 ====================================================================================================== Modern particle accelerators are designed to deliver high-intensity and high-brightness charged beams for experiments in a wide range of fields, from high-energy particle physics to material science. The interactions between charged beams and their surroundings, which are mediated by beam-induced electromagnetic fields and commonly described by the concepts of wakefield and its Fourier transform impedance <cit.>, are crucial factors that limit beam quality. During an accelerator project's design and construction phases, creating impedance budgets to predict impedance-driven beam instabilities reliably is essential. During the commissioning phase, beam-based measurements are typically performed to verify the accuracy of impedance budgets <cit.>. In recent decades, various theories and simulation tools have been developed to translate impedance calculations into beam-based measurements, or vice versa. Here, we show that, from the Haissinski equation <cit.>, a few equations can be derived to assist in connecting impedance computations to beam-based measurements. For electron storage rings, the high intensity circulating beam can be modeled as a continuous distribution ψ, with its evolution governed by the Vlasov-Fokker-Planck (VFP) equation <cit.>. Specifically, when considering the synchrotron motion, the VFP equation is <cit.> ∂ψ/∂ s +d z/d s∂ψ/∂ z +d δ/d s∂ψ/∂δ =2/ct_d∂/∂δ[ δψ + σ_δ^2 ∂ψ/∂δ]. Here, s is the arc length on the beam's closed orbit, z=s-s_0=s-ct is the longitudinal displacement from the synchronous particle at s=s_0=ct, δ=(P-P_0)/P_0 is the momentum deviation, σ_δ is the momentum spread, and t_d is the longitudinal damping time. The equations of motion, including wakefields, are dz/ds=-ηδ, dδ/ds= ω_s^2/η c^2z - F(z,s), with η the slip factor and ω_s the synchrotron frequency. The wakefield term F(z) is calculated from the convolution of charge density and longitudinal wake function F(z,s)=I_n ∫_-∞^∞ W_∥(z-z')λ(z',s) dz', with the scaling factor I_n=Ne^2/(EC_0) and the line density distribution λ(z,s)=∫_-∞^∞ψ(z,δ,s)dδ. Here, N is the bunch population, C_0 is the circumference of the storage ring, and E is the energy of the reference particle. The wake function W_∥(z) has various forms in the literature depending on the conventions chosen. In this paper, we follow the conventions of <cit.>. 
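For orientation, a minimal sketch (Python) of how the collective-force term F(z) is typically evaluated on a grid is given below; the grid layout, the function name, and the unit system are assumptions of this sketch rather than prescriptions from the text.

```python
import numpy as np

def collective_force(z, wake, lam, I_n):
    """F(z) = I_n * integral W(z - z') lam(z') dz' by discrete convolution.

    Assumes the wake function and the line density are tabulated on the same
    uniform, symmetric grid z (odd number of points, centred on z = 0), so that
    numpy's 'same'-mode convolution returns the result on that grid.  Units must
    be consistent with the wake-function convention adopted above.
    """
    dz = z[1] - z[0]
    return I_n * np.convolve(lam, wake, mode="same") * dz
```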
The wake function and the corresponding impedance are the Fourier transforms of each other: W_∥(z)=c/2π∫_-∞^∞ Z_∥(k) e^ikz dk and Z_∥(k)=1/c∫_-∞^∞ W_∥(z) e^-ikz dz. With synchrotron radiation effects but without wakefields, the beam in an electron storage ring is bunched with an equilibrium bunch length σ_z0 given by σ_z0=-cησ_δ/ω_s (note that ω_s<0 when η>0 <cit.>). The VFP equation has an s-independent solution ψ_0(z,δ) in the form of ψ_0(z,δ)=ψ̂_0(δ) λ_0(z) below the microwave instability threshold. The momentum distribution ψ̂_0(δ) is Gaussian and the spatial distribution satisfies dλ_0(z)/dz+ [ z/σ_z0^2-1/ησ_δ^2 F_0(z) ] λ_0(z) =0. Here F_0(z,s) is from Eq. (<ref>) with λ(z,s) replaced by the equilibrium distribution λ_0(z). The solution of Eq. (<ref>) is the so-called Haissinski equation <cit.> λ_0(z)= A e^-z^2/2σ_z0^2-I/σ_z0∫_z^∞dz'∫_-∞^∞ W_∥(z'-z”)λ_0(z”) dz”, with the new scaling parameter I=I_nσ_z0/(ησ_δ^2). With the synchrotron tune defined by ν_s=ω_s/ω_0, there is I=-Ne^2/(2πν_sEσ_δ) <cit.>. The convolution of the wake function and the line density is recognized as the wake potential, that is, 𝒲_∥(z)=∫_-∞^∞ W_∥(z-z')λ_0(z') dz'. The stability of the Hassinski equation is beyond the scope of this paper, and the reader is referred to <cit.> and references therein. Integrating over z on both sides of Eq. (<ref>) and recognizing that the center of mass of the bunch is z_c=∫_-∞^∞ zλ_0(z) dz, we obtain z_c(I)=Iσ_z0κ_∥, with the well-known loss factor κ_∥(I)=∫_-∞^∞ dz λ_0(z)𝒲_∥(z). The rate of centroid shift at I=0 is given by m_1≡ dz_c/dI|_I=0=σ_z0κ_∥(0). In terms of impedance, there is κ_∥=c/π∫_0^∞Re[Z_∥(k)] h(k) dk with spectral power density h(k)=λ̃_0(k)λ̃_0^*(k) where λ̃_0(k) is the Fourier transform of λ_0(z). Consequently, we find m_1=cσ_z0/π∫_0^∞Re[Z_∥(k)] e^-k^2σ_z0^2 dk with the impedance property Z_∥^*(k)=Z_∥(-k) <cit.>. The distribution peak, the relative maximum of λ_0(z), is of interest from a measurement point of view. The beam position monitors (BPMs) in storage rings monitor the beam's electromagnetic fields and output a signal with its voltage roughly proportional to dλ_0(z)/dz (for example, see Ref. <cit.>). Detecting the zero-crossing point of a BPM signal can give information on the peak position of the density distribution. From Eq. (<ref>), taking dλ_0(z)/dz=0 yields the peak position of the bunch profile z_m=Iσ_z0𝒲_∥(z_m). The bunch profile can have single or multiple peaks depending on the impedance properties and bunch current. The rate of shift of the peak at I=0 is given by m_2≡ dz_m/dI|_I=0=σ_z0𝒲_∥(0). In terms of impedance, there is m_2=cσ_z0/π∫_0^∞Re[Z_∥(k)] e^-k^2σ_z0^2/2 dk. Since Re[Z_∥(k)]≥ 0 for any k, there is m_2>m_1, suggesting that the distribution peak is shifting faster than the center of mass. In particular, consider a purely resistive impedance Z_∥(k)=R, there is m_2=√(2)m_1. Given the bunch profile λ_0(z), the rms bunch length can be calculated by σ_z^2=∫_-∞^∞ (z-z_c)^2λ_0(z)dz =∫_-∞^∞ z^2λ_0(z)dz-z_c^2. We show how to calculate σ_z from Eq. (<ref>). Multiplying z on both sides of this equation and performing integration over z, we can obtain three terms. The first term is a constant -1 (consider that λ_0(z) decays exponentially as e^-z^2/(2σ_z0^2) when z→±∞). The second term equals σ_z^2+z_c^2. The third term is an integration that contains the wake function. 
Combining the three terms, we can arrive at an equation that describes the potential-well bunch lengthening x^2-1-cI/2πσ_z0 Z_∥^eff(x)=0, where x=σ_z/σ_z0 is the bunch lengthening factor, and the term Z_∥^eff is formulated by Z_∥^eff = 2π/c∫_-∞^∞ dz (z-z_c) λ_0(z) 𝒲_∥(z). In terms of impedance, it is equivalent to Z_∥^eff= -∫_-∞^∞ dk Z_∥(k) λ̃_0(k) [ id/dkλ̃_0^*(k) + z_c λ̃_0^*(k) ]. Here, we define Z_∥^eff as an effective impedance, which is always real but not complex, to indicate bunch lengthening. As a straightforward corollary of the Haissinski equation, Eq. (<ref>) shows that the term Z_∥^eff is simply a quadratic function of x, while x obviously is a function of the normalized current I (that is, x=x(I)). Equation (<ref>) indicates that when the density distribution is deformed, both the real and imaginary parts of the impedance contribute to the bunch lengthening, although the imaginary part is usually the dominant source. The reader should not be confused with the effective impedance (Z_0^∥/ω)_eff, which measures the shift in the complex mode frequencies, for the instability theory in the storage rings <cit.>. In fact, by expanding the density spectrum λ̃_0(k) into the sum of azimuthal and radial modes <cit.>, our formulation can be connected to the conventional formulation of effective impedance. However, the details will be reserved for a later paper. At I=0, the density distribution is Gaussian with z_c=0 and x(0)=1. The bunch lengthening rate at I=0 is given by m_3≡ dx/dI|_I=0=c/4πσ_z0Z_∥^eff(1). Here, the effective impedance Z_∥^eff(1) is given by Eq. (<ref>) with z_c=0 and λ̃_0(k)=e^-k^2σ_z0^2/2. It only depends on the imaginary part of Z_∥(k), suggesting that the inductive part of the impedance solely determines the lengthening rate of bunch length at zero current. The relation between Z_∥^eff and the normalized current I is complicated. Here, we only give the slope at I=0 as dZ_∥^eff/dI|_I→ 0=2πσ_z0/c[ m_3^2 + x”] with x”=d^2x/dI^2 to be determined. For some well-defined impedances, such as a pure inductance L, a pure resistance R, and a pure capacitance C, the effective impedance can be explicitly formulated as follows. For a purely inductive impedance Z_∥(k)=-ikcL, there are W_∥(z)=-c^2Lδ'(z), z_c=0 <cit.>, and Z_∥^eff=π cL ∫_-∞^∞λ_0^2(z) dz with the Dirac delta function δ(z). For a purely resistive impedance Z_∥(k)=R, there are W_∥(z)=cRδ(z) and Z_∥^eff=2π R ∫_-∞^∞ (z-z_c)λ_0^2(z) dz. For a purely capacitive impedance Z_∥(k)=i/(kcC), there are W_∥(z)=1/CH(-z) and Z_∥^eff=2π/cC∫_-∞^∞ (z-z_c)λ_0(z) Λ_0(z) dz with H(z) the Heaviside step function and Λ_0(z)=∫_z^∞λ_0(z)dz an accumulation function. The Haissinski distributions for these impedances have been well investigated in the literature <cit.>. They can be used as input to calculate the corresponding effective impedance Z_∥^eff numerically. For an absolute impedance, one has to solve the Haissinski equation numerically, obtain the equilibrium distribution λ_0(z), and then calculate the effective impedance. Further calculations can be performed when a Gaussian distribution with rms length σ_z and center of mass z_c is used to approximate the Haissinski distribution. Then, the effective impedance is explicitly written as follows, from Eq. (<ref>): Z_∥^eff=-2σ_z^2 ∫_0^∞ dk k Im[Z_∥(k)] e^-k^2σ_z^2. In this case, only the imaginary part of the raw impedance contributes to the effective impedance. 
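As a minimal numerical check of these relations, the sketch below (Python) solves the Haissinski equation for a purely inductive impedance by damped fixed-point iteration in dimensionless variables and then verifies the bunch-lengthening equation with Z_∥^eff = πcL∫λ_0²dz; the strength parameter, the grid, and the relaxation factor are arbitrary illustrative choices.

```python
import numpy as np

def haissinski_inductive(a, qmax=8.0, nq=4001, n_iter=5000, relax=0.05):
    """Dimensionless Haissinski profile rho(q), q = z/sigma_z0, for Z(k) = -i k c L.

    The wake potential is then local and the equation reduces to
    rho = A * exp(-q**2/2 - a*rho) with a = I c^2 L / sigma_z0^2.
    """
    q = np.linspace(-qmax, qmax, nq)
    dq = q[1] - q[0]
    rho = np.exp(-0.5 * q**2)
    rho /= rho.sum() * dq
    for _ in range(n_iter):
        new = np.exp(-0.5 * q**2 - a * rho)
        new /= new.sum() * dq
        rho = (1.0 - relax) * rho + relax * new
    return q, dq, rho

a = 2.0                                   # illustrative value of I c^2 L / sigma_z0^2
q, dq, rho = haissinski_inductive(a)
x2 = (rho * q**2).sum() * dq              # (sigma_z/sigma_z0)^2; the centroid vanishes here
lhs = x2 - 1.0
rhs = 0.5 * a * (rho**2).sum() * dq       # (c I / 2 pi sigma_z0) * Z_eff with Z_eff = pi c L * int(lambda_0^2 dz)
print(lhs, rhs)                           # the two sides agree to discretization accuracy
```

For a purely inductive impedance the loss factor and the centroid shift vanish, so this check isolates the potential-well lengthening alone.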
For some well-established impedance models in the literature, the explicit formulas of effective impedance and centroid shift are summarized in Table <ref>. They can be applied to Eq. (<ref>) to predict the lengthening of the bunch. For a purely inductive impedance, it gives the popular cubic Zotter's equation: x^3-x-D=0 for potential-well bunch lengthening with D=c^2IL/4√(π)σ_z0^2 =cI_bL/4√(π)ησ_z0σ_δ^2(E/e). Here, we show that the numerical constant in the denominator of D is exactly 4√(π), as suggested by some authors (see Ref. <cit.> and references therein), but not √(2π) as formulated in Ref. <cit.>. Inductive impedance usually dominates total impedance in modern electron storage rings, where the vacuum chambers are usually well-smoothed. Consequently, the cubic scaling law Eq. (<ref>) fairly fits many measurements. The cubic equation has also been widely used to estimate effective inductance L_eff <cit.> when bunch lengths are measured <cit.>. By connecting Eqs. (<ref>) and (<ref>), for an absolute impedance, the effective inductance at zero bunch current can be calculated by L_eff=2σ_z0/√(π)cZ_∥^eff(1). This formulation is equivalent to Eq. (5) of Ref. <cit.>, and Eq. (5) of Ref. <cit.> where h(ω) should be replaced by h_1(ω)=(ωσ_z0/c)^2e^-ω^2σ_z0^2/c^2 as suggested in <cit.>. It is important to remind the reader that Eq. (<ref>) implicitly assumes that the lengthening of the bunch in a storage ring is fully attributed to pure inductance. In general, this is not the case and may pose a challenge when comparing the experimental results with direct bottom-up computations of impedances and impedance effects. An improvement to the cubic equation Eq. (<ref>) is to rewrite D as D(I)=IG(I). The function G of I is a measure of the impedance characteristics of a storage ring. Especially when the inductive impedance dominates the lengthening of the bunch, G(I) should show a weak dependence on the normalized current I (for example, see the case of SuperKEKB LER in Fig. <ref>). Given a bunch profile obtained from simulations or beam-based measurements, one may be interested in extracting the impedance from Eq. (<ref>). This leads to the inverse problem of the Haissinski equation. From Eq. (<ref>), the wake potential can be calculated from the line density as 𝒲_∥(z)=σ_z0/I[ dlnλ_0(z)/dz + z/σ_z0^2]. Taking Fourier transform to both sides of the above equation, one can calculate the impedance as <cit.> Z_∥(k)= σ_z0/Ic^2λ̃_0(k)∫_-∞^∞[ dlnλ_0(z)/dz+z/σ_z0^2] e^-ikz dz. The above equation is equivalent to Z_∥(k)= ikσ_z0/Ic^2λ̃_0(k)∫_-∞^∞[ lnλ_0(z)+z^2/2σ_z0^2] e^-ikz dz. Calculating the wake potential from the deformed line density is relatively straightforward. However, extracting the impedance from the wake potential is known to be a deconvolution problem. Because of the nature of all deconvolution problems, the inverse problem of the Haissinski equation is mathematically well-defined but ill-posed. The problem is sensitive to errors that are always present in the simulated or measured data for λ_0(z). Practically, it can be challenging to retrieve accurate impedance data at frequencies k≫ 1/σ_z0 using Eq. (<ref>) or (<ref>). However, if the bunch profile is reliable, it is possible to obtain impedance data with good accuracy at frequencies k≲ 1/σ_z0. As a specific example, we consider the low-energy ring (LER) of SuperKEKB. 
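Before turning to that example, a minimal sketch (Python) of the extraction step itself is given below, assuming a strictly positive profile λ_0(z) sampled on a uniform grid and a known normalized current; the overall constant in the impedance depends on the Fourier conventions adopted, and, as discussed above, the result is only trustworthy for k ≲ 1/σ_z0.

```python
import numpy as np

def wake_potential_from_profile(z, lam, sigma_z0, Inorm):
    """W(z) = (sigma_z0/I) * (d ln lam/dz + z/sigma_z0^2); requires lam > 0 on the grid."""
    dln = np.gradient(np.log(lam), z)
    return sigma_z0 / Inorm * (dln + z / sigma_z0**2)

def impedance_from_profile(z, lam, sigma_z0, Inorm, c=299792458.0):
    """Band-limited impedance by deconvolving the wake potential with the profile."""
    dz = z[1] - z[0]
    wp = wake_potential_from_profile(z, lam, sigma_z0, Inorm)
    k = 2.0 * np.pi * np.fft.fftfreq(len(z), d=dz)
    lam_k = np.fft.fft(lam) * dz
    wp_k = np.fft.fft(wp) * dz
    Zk = wp_k / (c * lam_k)      # convolution theorem; in practice taper or mask where |lam_k| is tiny
    return np.fft.fftshift(k), np.fft.fftshift(Zk)
```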
Longitudinal wakes for various components have been calculated using a Gaussian driving bunch with length σ̂_z=0.5 mm and summed to create the pseudo-Green function wake <cit.> as shown in Fig. <ref>. The corresponding impedances shown in the same figure are calculated by Fourier transforming the wake data (indeed, the chirp-Z transform is used to improve the frequency revolution). The decay of impedance data at high frequencies is due to the chosen 0.5-mm Gaussian bunch, which acts as a Gaussian window function of e^-k^2σ̂_z^2/2. This is justified when the ratio of the nominal length of the bunch σ_z0 to the length of the driving bunch σ̂_z is very large (for our example, 9.2). Additionally, the bunch should not be strongly deformed or micro-bunched, so the high-frequency impedances are not sampled. The beam parameters used to solve the Haissinski equation are beam energy E=4.0 GeV, ring circumference C_0=3016.315 m, slip factor η=2.969× 10^-4, bunch length at zero current σ_z0=4.6 mm, momentum spread σ_δ=7.53× 10^-4, and synchrotron tune ν_s=-0.0233. The scaling parameter at N=10^11 is I=0.0364 pC/V. The numerically obtained Haissinski solutions <cit.> with bunch populations N=(0.0275, 5.49, 11.0, 16.5)× 10^10 (corresponding to I=(0.0001, 0.02, 0.04, 0.06) pC/V) are shown in Fig. <ref>. The lengthening of the bunch, the centroid shifts, and the density peak as a function of the normalized current are shown in Fig. <ref>. The effective impedances calculated from the quadratic and cubic scaling laws are plotted in Fig. <ref>. It is seen that inductive impedance largely dominates bunch lengthening in SuperKEKB LER, as can also be seen from the bunch profiles in Fig. <ref>. The wake potentials and impedances extracted from the Haissinski solutions in different bunch populations are compared in Figs. <ref> and <ref>. The extracted impedance does not contain the oscillatory impedance at low frequencies (|k|≲ 200 m^-1 in Fig. <ref>). This is because the bunch does not experience long-range wake fields at |z| ≫σ_z0, as shown in Fig. <ref>. Ideally, the bunch profiles of different currents should produce identical impedances at any frequency. However, the divergence of impedances at high frequencies can be seen in Fig. <ref>, demonstrating the impact of numerical noise on the simulated bunch profiles. Furthermore, noise effects can also be significant when the deformations of bunch profiles are weak at low currents. This can be seen from Eq. (<ref>): The sum within the square brackets approaches zero when I→ 0. Although the right-hand side should eventually converge, numerical noise could cause a divergence at I=0. The divergence of extracted impedance at high frequencies also implies that the numerical noises in the bunch profile, but not the physical micro-bunching, have to be well controlled in Vlasov or macroparticle tracking simulations. Otherwise, the noise (corresponding to the high-frequency spectrum) may sample the high-frequency impedance and drive numerical instabilities. In other words, including physical high-frequency impedances as much as possible in impedance modeling will challenge simulation codes in their techniques of controlling numerical noises, such as a large number of macroparticles, fine meshing, smoothing algorithms, etc. In summary, the Haissinski equation, a self-consistent solution of the VFP equation below the microwave instability threshold, has been used to derive a few theories to describe the intensity-dependent behaviors of some measurable quantities. 
The effective impedance used to describe bunch lengthening has simply been redefined. Measurements using BPMs or streak cameras can be used to extract a lot of information about impedances. In principle, streak camera measurements can simultaneously predict loss factors and effective impedance. In particular, if the noises in the measured line density of a single bunch are well suppressed, it is possible to extract frequency-dependent impedances, as we have demonstrated using numerically obtained Haissinski solutions. This will be very useful in validating impedance computations or in searching for missing impedance sources of constructed impedance models of electron storage rings. The author D.Z. thanks many colleagues, including K. Bane, A. Blednykh, A. Chao, Y. Cai, L. Carver, K. Hirata, R. Lindberg, M. Migliorati, K. Ohmi, K. Oide, B. Podobedov, Y. Shobuda, V. Smaluk, and M. Tobiyama for inspiring discussions on various aspects of impedance issues in electron storage rings.
http://arxiv.org/abs/2307.02889v1
20230706094851
Learning to Solve Tasks with Exploring Prior Behaviours
[ "Ruiqi Zhu", "Siyuan Li", "Tianhong Dai", "Chongjie Zhang", "Oya Celiktutan" ]
cs.RO
[ "cs.RO", "cs.AI", "cs.LG" ]
Demonstrations are widely used in Deep Reinforcement Learning (DRL) to facilitate solving tasks with sparse rewards. However, tasks in real-world scenarios often have initial conditions that differ from the demonstration and therefore require additional prior behaviours. For example, suppose we are given a demonstration for the task of picking up an object from an open drawer, but the drawer is closed during training. Without acquiring the prior behaviour of opening the drawer, the robot is unlikely to solve the task. To address this, in this paper we propose Intrinsic Rewards Driven Example-based Control (IRDEC). Our method endows agents with the ability to explore and acquire the required prior behaviours and then connect them to the task-specific behaviours in the demonstration, solving sparse-reward tasks without requiring additional demonstration of the prior behaviours. Our method outperforms other baselines on three navigation tasks and one robotic manipulation task with sparse rewards. Code is available at <https://github.com/Ricky-Zhu/IRDEC>. § INTRODUCTION Deep reinforcement learning (DRL) has demonstrated impressive performance in sequential decision-making problems such as video games <cit.>, robotic manipulation <cit.>, and autonomous driving <cit.>. However, for tasks with sparse rewards, the lack of learning signals can hamper the learning process. To address this, demonstrations are often leveraged to facilitate learning. In real-world applications, we often encounter situations in which the initial conditions differ from those of the demonstration (termed the task-specific behaviour demonstration in this paper), so that additional prior behaviours are required to complete the tasks. Prior works overcome this problem by collecting additional demonstrations of the prior behaviours, chosen to have overlapping states with the task-specific behaviour demonstration <cit.>. However, since the initial conditions can vary, rather than collecting demonstrations of the prior behaviours, robots should be able to adapt to different initial conditions and leverage the available task-specific behaviour demonstration to learn the required prior behaviours. For example, suppose we are given a demonstration for the task of picking up an object from a closed drawer, but during training there are obstacles in the way. The robot should then explore and learn the essential prior behaviours of removing the obstacles first, and afterwards conduct the task-specific behaviours of picking up the object from the closed drawer to complete the task. In this paper, we aim to utilize task-specific behaviour demonstrations to facilitate the learning of tasks with varied initial conditions, without requiring additional demonstration of the prior behaviours. 
The absence of the prior behaviour demonstration indicates that the agent needs to acquire the essential prior behaviours to reach the demonstrated states in the task-specific behaviour demonstration. For example, as illustrated in Fig. <ref>, the task-specific behaviour is placing the in-hand object into the tray. The agent is required to learn the prior behaviours of grasping and lifting the object from the table, and then mimic the demonstrated actions of the task-specific behaviours to complete the pick-and-place task. When children are shown the task-specific behaviour demonstration and then put in a different initial condition, they would intuitively explore the environment to return to the demonstrated states, which are termed as familiar states. Therefore, they can follow the demonstrated actions to complete the tasks. Inspired by that, we propose an intrinsic rewards module to encourage the agents to explore, while the exploration direction is biased towards the familiar states for acquiring the required prior behaviours. Following that, the agents follow the demonstrated actions to complete the tasks, which is achieved with an adaptive behaviour regularizer. The whole framework is trained in an end-to-end manner and can be implemented with off-policy actor-critic RL algorithms, such as soft actor-critic (SAC) <cit.> and deep deterministic policy gradient (DDPG) <cit.>. Our method was evaluated on challenging long-horizon sparse-reward navigation <cit.> and robotic manipulation tasks <cit.>. The empirical results show that the proposed method can enable the agent to leverage the task-specific behaviour demonstration to learn the essential prior behaviours to solve tasks with sparse rewards efficiently. The main contributions of this paper are: (i) We propose an Intrinsic Rewards Driven Example-based Control (IRDEC) learning framework, which enables the agents to acquire prior behaviours and connects to the task-specific behaviours adaptively given only the task-specific behaviour demonstration. (ii) We compare our method with several baselines on challenging navigation and robotic manipulation tasks with sparse rewards and our method achieves the best results. (iii) We carry out an ablation analysis to investigate the importance of each component in our method and the results show that both components, the intrinsic reward module and the example-guided exploration, are necessary for effectively help learn the required prior behaviours. § RELATED WORK Reinforcement Learning (RL) with Demonstration. Demonstrations are usually utilized in RL methods to facilitate learning in complex environments or mitigate the problem of sparse rewards. The utilization can often be categorized into: (1) imitation learning in which supervised learning is used to enforce the agent to mimic the demonstrated actions <cit.> or a reward function is inferred from the demonstration and the policy is trained to optimize it <cit.>; (2) behaviour extraction in which temporal abstracted behaviours are encoded from a large-scale offline dataset and policy is trained to optimize the extrinsic reward function by acting on the behaviour space to facilitate the exploration <cit.>; (3) the demonstration is used to initialize the RL actor or regularize the RL actor during the training <cit.>. However, these works require that the demonstration contains all the required behaviours for the target tasks. 
But in our proposed method, only the task-specific behaviour demonstration is provided, while the essential prior behaviours are acquired during the training by leveraging the familiar states. Goal-conditioned RL. Goal-conditioned RL has demonstrated competitive performance on tasks with sparse rewards. By augmenting the state with the goal sampled from the behavioural goal space, the sparse rewards are relabelled with the rewards computed via evaluating the Euclidean distance between the achieved goal and the sampled behaviour goal <cit.>. In our proposed method, the task-specific behaviour demonstration contains the goal states of the target tasks, which implicitly implies the goal information. However, goal-conditioned RL requires a mapping function to transform the state space into goal space. In contrast, our method does not require the mapping function and is therefore more applicable. Exploration with Intrinsic Rewards. In environments with sparse rewards, intrinsic rewards are usually leveraged to encourage the exploration of novel states before any extrinsic rewards are obtained. A large body of works focuses on using forward dynamic model prediction error as intrinsic rewards <cit.>. However, intrinsic rewards generated in this way can vanish with the agent being familiar with the environment, and therefore the prediction error tends to converge to zero <cit.>. In <cit.>, a count-based exploration bonus is used to incentivize the agent to explore. However, the measure of the counts in large continuous state spaces is non-trivial. Our method incorporates curiosity intrinsic rewards, and impact intrinsic rewards, which do not vanish as the training progresses to encourage the agent to aggressively expand its explored state space so that the familiar states can more easily be encountered. § BACKGROUND The learning problem is formulated as a Markov Decision Process (MDP) characterized by a tuple {𝒮, 𝒜, 𝒯, ℛ,ρ,γ} of state space, action space, transition probability mapping from current state s and action a to the next state s', reward function, initial state distribution, and discount factor. In each episode, the initial state s_0∈𝒮 is sampled from the initial state distribution s_0∼ρ(s_0), the agent chooses its action a_t∈𝒜 according to the policy a_t∼π(·|s_t), and then the environment will generate the next state s_t+1∼𝒯(·|s_t,a_t) and the reward r=ℛ(s_t,a_t,s_t+1). The objective of the agent is to maximize the sum of discounted rewards, 𝔼_τ∼π[∑_t=0^t=∞γ^tr_t], where trajectory τ is sampled from the policy π. Example-based Control. Unlike the goal-conditioned RL, which requires the mapping from the state space to the goal space, example-based control provides another way of reaching the goal over the future state distribution, defined as Eq. <ref> where s_t+ is the future state. Given examples of goal states, the actor policy is optimized to maximize the probability of reaching these states <cit.>. p^π(s_t +|s_t,a_t) ≜ (1-γ) ∑_Δ=0^∞γ^Δp^π(s_t+Δ=s_t +|s_t,a_t) The probability as shown in Eq. <ref> is defined over the future state distribution. p^π(e_t +|s_t,a_t) = 𝔼_p^π(s_t +|s_t,a_t)[p(e_t +|s_t +)] where e_t + is referred to as the event of reaching the example states conditioned on the future state from time step t. In our method, we leverage the classifier in <cit.> to estimate the probability of reaching the demonstrated states in the task-specific behaviour demonstration in the future. 
However, the exploration introduced by maximizing the classifier values could often encounter danger areas in tasks, which can result in failure to solve the tasks. Please refer to Section <ref> for further details. Off-policy Actor-critic RL. Our method can be implemented with off-policy actor-critic algorithms. In these algorithms, the critic learns an off-policy estimate of the value function for the current actor policy with the samples collected from behavioural actor policies <cit.>. The value function in our method estimates the weighted sum of the future return of the extrinsic and intrinsic rewards and the classifier values under the target actor policy. The actor policy in turn is optimized to maximize the estimates using the off-policy samples which are collected online. § INTRINSIC REWARDS DRIVEN EXAMPLE-BASED CONTROL FRAMEWORK The proposed method consists of two main components, namely, a curiosity-impact driven intrinsic reward module that encourages the agent to expand explored areas, and example-guided exploration provided by a classifier that predicts the probability of reaching familiar states in the future. To reach the familiar states and connect the task-specific behaviours, proper exploration over the state space is essential. The exploration direction introduced by directly maximizing the classifier values without the intrinsic reward module can potentially encounter danger areas. The exploration introduced by the intrinsic rewards module without the guidance provided by the classifier can cover less promising areas related to the target tasks, which can be inefficient and could hinder the agent from solving the tasks. By expanding the explored areas towards the familiar states with leveraging the guidance from the task-specific behaviour demonstration, our proposed method can learn the essential prior behaviours and connect them to the behaviours in the task-specific behaviour demonstration for solving the target tasks as illustrated in Fig. <ref>. §.§ Curiosity-Impact Driven Intrinsic Reward Module Our intrinsic reward module combines curiosity intrinsic rewards that encourage the agent to visit novel states <cit.> and impact intrinsic rewards that incentivize the agents to explore local states with large differences from current states, which can help to reach critical stages during the exploration <cit.>. We train a forward and an inverse dynamics models to learn the state representation ϕ_s(s) and the action representation ϕ_a(a). The forward dynamics model parameterized by θ_fw is used to predict the representation of the next state ϕ_s(s_t+1) given the current state representation ϕ_s(s_t) and action representation ϕ_a(a_t). The loss for the forward dynamics model is given in Eq. <ref>: L_fw(s_t,a_t,s_t+1) = 1/2‖ f_θ_fw(ϕ_s(s_t),ϕ_a(a_t)) - ϕ_s(s_t+1) ‖_2^2 The inverse dynamics model parameterized by θ_inv is used to predict the action representation ϕ_a(a_t) given the consecutive state representations ϕ_s(s_t) and ϕ_s(s_t+1). The loss for the inverse dynamics model is shown in Eq. <ref>. With the inverse dynamics model, the adverse impact on state representation learning, caused by the inherent noise of the environment where transition might not be affected by the agent's actions, i.e. noisy TV <cit.>, can be mitigated by retrieving the action leading to the transition. 
L_inv(s_t,a_t,s_t+1) = -log (g_θ_inv(ϕ_a(a_t)|ϕ_s(s_t),ϕ_s(s_t+1)) Thus, the total loss for the intrinsic module can be written as follows: L_icm = L_fw + L_inv The curiosity intrinsic reward is defined in Eq. <ref>. A larger curiosity intrinsic reward, namely a larger prediction error, indicates the novel states being visited as the forward dynamics model is unfamiliar with the transition. r_t^curiosity = ‖ f_θ_fw(ϕ_s(s_t),ϕ_a(a_t)) - ϕ_s(s_t+1) ‖_2 However, as the training progresses, the curiosity intrinsic rewards could vanish as the agent is being familiar with the environment. Thus, the impact intrinsic rewards defined as the squared Euclidean distance between consecutive state representations, as shown in Eq. <ref>, are utilized to encourage the agent to aggressively change its states to accelerate the exploration. r_t^impact = ‖ϕ_s(s_t+1)-ϕ_s(s_t) ‖_2/d_m^2, where d_m is the running average of the numerator for scaling the bonus. The impact intrinsic rewards will not vanish as the training progresses. Overall, the intrinsic reward can be written as: r_t^i = η r_t^curiosity+(1-η)r_t^impact, where η>0 is a scalar weighing the two types of intrinsic rewards. Thus, the overall reward r_t at step t is defined as the addition of intrinsic reward r_t^i and the sparse extrinsic reward r_t^e. A value head parameterized by φ is trained to approximate the action value based on the overall reward. We optimize the value head by minimizing the loss: L_Q = (r_t+γ𝔼_a_t+1∼πQ_φ^'^π(s_t+1,a_t+1)-Q_φ^π(s_t,a_t))^2, where Q_φ^'^π is the target network of the value head Q_φ^π for stabilizing the training <cit.>. §.§ Example-guided Exploration To learn the essential prior behaviours and connect them to the task-specific behaviours, visiting the overlapped state distribution of these behaviours are necessary <cit.>. Here we encourage the agent to visit the familiar states in the task-specific behaviour demonstration to construct such overlapped state distribution. To enable the agent to reach the familiar states, we utilize the classifier in <cit.> to discriminate between the state-action pairs which lead to reaching the familiar states in the future or not. The positive state-action pair is defined as the pair of familiar states sampled from task-specific behaviour demonstration 𝒟 and the action given by the current policy conditioned on the sampled familiar states. The negative state-action pairs are those sampled from the online buffer ℬ. Thus, the relation between the optimal classifier and p^π(e_t +|s_t,a_t) can be written as: C_ω(s_t,a_t)=p^π(s_t,a_t|e_t+=1)p(e_t+=1)/p^π(s_t,a_t|e_t+=1)p(e_t+=1)+p(s,a). Thus, the objective to optimize can be derived as: p^π(e_t+=1|s_t,a_t) = C_ω(s_t,a_t)/1-C_ω(s_t,a_t). The loss for the classifier is shown as Eq. <ref>. In the equation, 𝒞ℰ denotes the cross entropy loss. ℒ(θ)= (1-γ)𝔼_s∼𝒟,a∼π(·|s)𝒞ℰ(C_ω(s,a);y_pos) +(1+γ w)𝔼_(s,a,s')∼ B𝒞ℰ(C_ω(s,a);y_neg) The positive target y_pos is 1, while the negative target y_neg is computed as follows: y_neg=γ w(s')/1+γ w(s'), where the w(s) is computed as shown in Eq. <ref>. The detailed derivation can be referred to <cit.>. w(s) = C_ω^π(s,π(·|s))/1-C_ω^π(s,π(·|s)) §.§ Adaptive Behaviour Regularization During the training, the agent should maintain the capacity of executing task-specific behaviours after encountering the familiar states, i.e., after grasping the object the robot needs to know how to place the object into the tray by mimicing the demonstrated actions in the task-specific behaviour demonstration. 
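Before specifying the regularization loss itself, the intrinsic-reward computation of the preceding subsections can be summarized in the following minimal sketch. It is a rough PyTorch illustration rather than the authors' implementation: the encoder architectures, the exponential-moving-average update for d_m, and the use of a mean-squared error as a stand-in for the inverse model's negative log-likelihood are all assumptions made here for concreteness.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntrinsicRewardModule(nn.Module):
    """Minimal sketch of the curiosity-impact intrinsic reward module (illustrative only)."""
    def __init__(self, obs_dim, act_dim, feat_dim=64, eta=0.5):
        super().__init__()
        self.phi_s = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU(),
                                   nn.Linear(feat_dim, feat_dim))      # state encoder phi_s
        self.phi_a = nn.Linear(act_dim, feat_dim)                      # action encoder phi_a
        self.fwd   = nn.Linear(2 * feat_dim, feat_dim)                 # forward dynamics model
        self.inv   = nn.Linear(2 * feat_dim, act_dim)                  # inverse dynamics model
        self.eta = eta
        self.register_buffer("d_m", torch.tensor(1.0))                 # running scale d_m

    def forward(self, s, a, s_next):
        zs, za, zs_next = self.phi_s(s), self.phi_a(a), self.phi_s(s_next)
        zs_pred = self.fwd(torch.cat([zs, za], dim=-1))
        # Representation losses (L_fw, L_inv, L_icm). Simplification: regress the raw action
        # with an MSE surrogate for -log g; the text predicts its representation phi_a(a_t).
        loss_fw  = 0.5 * (zs_pred - zs_next).pow(2).sum(-1).mean()
        loss_inv = F.mse_loss(self.inv(torch.cat([zs, zs_next], dim=-1)), a)
        loss_icm = loss_fw + loss_inv
        # Intrinsic rewards (no gradients flow through the bonus itself).
        with torch.no_grad():
            r_cur = (zs_pred - zs_next).norm(dim=-1)                   # curiosity bonus
            delta = (zs_next - zs).norm(dim=-1)
            self.d_m = 0.99 * self.d_m + 0.01 * delta.mean()           # running average of the numerator
            r_imp = delta / self.d_m.pow(2)                            # impact bonus
            r_int = self.eta * r_cur + (1.0 - self.eta) * r_imp
        return r_int, loss_icm
```

In training, r_int would be added to the sparse extrinsic reward before forming the target in L_Q, while loss_icm updates the representation and dynamics models.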
Here, we utilized Eq. <ref> as the regularization loss to the update of the actor. L_reg(s^*,a^*) = -log(π_ψ(a^*|s^*)), where (s^*, a^*) are the state-action pairs sampled from the task-specific behaviour demonstration. However, the weighting of the regularization in the actor update should vary in different stages of the training. In the early stage of the training, the agent focuses on exploring and reaching the familiar states; therefore, the weighting should be relatively small to avoid the adverse impact on the exploration of familiar states. When the agent acquires the prior behaviours to reach the familiar states, the agent should focus more on exploiting the task-specific behaviour demonstration and mimicing the demonstrated actions, where the weighting should be relatively large. The similarity between the sampled online collected states and familiar states increases as the agent is more capable of reaching the familiar states. Thus, we leverage a kernel density estimator fitted with the familiar states in the task-specific behaviour demonstration to estimate the similarity and therefore adjust the regularization of loss weighting. However, as the range of estimation scores is unknown, the effective way to adjust the weighting is to compare the scores of consecutive sampled batches. The weighting is computed as: λ_reg = clip(λ_i-1 + m(b_i)-m(b_i-1)/max(m(b))× r, λ_min, λ_max) where the λ_i-1 is the weighting value at update step i-1, and m(b_i) is the density estimation score of the sampled batch of states at update step i. r is the pre-defined rate. max(m(b)) and λ_0 are the recorded maximum estimation score and the initial value of λ_reg. Meanwhile, we clip the λ_reg which is out of the pre-defined range [λ_min, λ_max]. Overall, the parameters of the actor policy network can be updated as follows: ψψ + λ∇_ψ𝔼_a_t∼π_ψ, s_t∼ℬ[(C(s_t,a_t)+Q(s_t,a_t))] +λ_reg∇_ψ𝔼_(s^*,a^*)∼𝒟L_reg(s^*,a^*). § EXPERIMENT We aim to answer the following questions through our experiments: (1) Can our proposed method effectively and efficiently leverage the task-specific behaviour demonstration to learn the essential prior behaviours and further develop a complete policy for solving the target tasks with sparse rewards? (2) How does our method compare to alternative methods? (3) What is the importance of each component of our method? §.§ Experiment setup We evaluate the proposed framework on three long-horizon navigation tasks and one robotic manipulation task as illustrated in Fig. <ref>. In our problem settings, we assume the access to the task-specific behaviour demonstration 𝒟 in the form of state-action trajectories τ_i={(s^*_0,a^*_0, ..., s^*_T_i, a^*_T_i)}. In our experiments, the task-specific behaviour demonstration for each task contains 100 trajectories collected with a sub-optimal policy. The trajectory lengths are around 150 and 25 for the navigation tasks and the robotic manipulation task respectively. Navigation Tasks. We evaluated our method on three simulated long-horizon navigation tasks with sparse rewards based on Mujoco <cit.>. In these tasks, the agent should learn to control the robot (point or ant) to navigate through the maze to reach the goal as illustrated in Fig. <ref> (a-c). For the point robot, the observation space 𝒪∈ℝ^6 consists of the positions and velocities of the mass centre, and action space 𝒜∈ℝ^2. For the ant robot, the observation space 𝒪∈ℝ^29 consists of the positions and velocities of its torsos, and action space 𝒜∈ℝ^8. 
The extrinsic reward is given as +1 when the robot reaches the goal and 0 otherwise. The task-specific behaviour demonstration for each task is illustrated as arrows in Fig. <ref> (a-c). The agent is supposed to learn the essential prior behaviours of navigating the robot to the familiar states and then conduct the behaviours of the task-specific behaviour demonstration to solve the tasks. Pick and Place Task. To evaluate our method on robotic manipulation tasks, we utilized the Pick and Place environment in <cit.>. The simulated environment consists of a 6-DoF Widow X robot in front of a tray. The robot is supposed to learn how to control its arm to grasp the object from the table and place it in the tray. The observation space 𝒪∈ℝ^17, includes the state of the end-effector and the gripper. The action space 𝒜∈ℝ^8, includes the Cartesian coordinate changes, orientation changes of the end-effectors, and the gripper open degree. In the task, the agent needs to learn how to grasp, lift the object, and place the object in the tray. The extrinsic reward is +1, when the objective is placed in the tray and otherwise 0. For this task, the task-specific behaviour demonstration consists of the trajectories of controlling the robot to place the in-hand object into the tray as illustrated in Fig. <ref> (d). The robot needs to learn the essential prior behaviours of grasping and lifting the object from the table and connect them to the task-specific behaviour to solve the task. Baselines. We compare our method with: (1) HER+BC <cit.>, a goal-conditioned method that utilizes a goal relabelling method to augment sufficient positive samples in goal-conditional tasks, while applying behaviour cloning loss to regularize the actor update with the task-specific behaviour demonstration. (2) HESS+BC <cit.>, a goal-conditioned hierarchical method which utilized the high-level policy to assign sub-goal for the low-level policy to facilitate exploration, which has demonstrated competitive performance in long-horizon tasks with sparse rewards. Meanwhile, behaviour cloning loss is applied to regularize the actor update with the task-specific behaviour demonstration. (3) SAC+BC <cit.>, which trains the agent with SAC while applying behaviour cloning loss to regularize the actor update with the task-specific behaviour demonstration. (4) GAIL <cit.>, an imitation learning method which attempts to match the state distribution between the training data and the demonstration. The hyperparameters used in these baseline methods were adopted from the original papers. §.§ Experimental results During the training, we evaluated the performance every 10K training steps with 10 episodes and recorded the average test returns. For each task, we trained our method and baselines with 5 different seeds and reported the results in Fig. <ref>. As shown in the figure, the proposed method outperforms all the baselines in all tasks. The outperformance is more significant in the Ant FourRooms task and the Pick and Place task as the exploration problems are harder. Without an effective incentive for expanding explored area, the sampled behavioural goals of HER+BC are insufficient with respect to diversity, which accounts for its poor performance in these long-horizon tasks <cit.>. By leveraging the hierarchical structure, HESS+BC assigns informative behavioural subgoals to encourage the low-level policy to visit promising areas for solving the tasks. 
However, it presents less efficiency compared to our proposed method in all the long-horizon navigation tasks and it fails in the Pick and Place task as the subgoal representation is not well learned in the robotic manipulation task. SAC+BC performs poorly in all tasks except for the Point Maze task as the action space and observation space are relatively low-dimensional so the entropy term in the actor update is sufficient for addressing the exploration problem. GAIL failed in all tasks because the policy cannot generate the training data that matches the demonstration. §.§ Ablation analysis To understand the role and importance of each component in our method, we conducted an ablation analysis on the curiosity-impact driven intrinsic reward module and the example-guided exploration separately. We compare IRDEC, IRDEC without the intrinsic rewards module, and IRDEC without the classifier. As shown in Fig. <ref>, our method without the intrinsic rewards module failed in all the tasks while our method without the classifier can solve the Point Maze and the Ant Maze tasks but failed in the hard Ant FourRooms and Pick and Place tasks. To understand the underlying causes, we visualize the explored areas for the tasks of the Ant Maze and the Ant FourRooms as shown in Fig. <ref>. We sampled 1K points from the online buffer to represent the explored areas after specific training steps. IRDEC without the intrinsic reward module failed both tasks, as the exploration attempts to pass through the wall directly towards the familiar states. And in robotic manipulation tasks, Pick and Place, the "wall" could be the unachievable robot configurations due to the singularity. For the Ant Maze task, IRDEC and IRDEC without the classifier can expand their explored areas towards the goal while IRDEC is slightly more efficient as the exploration direction is biased towards familiar states. For the Ant FourRooms task, only IRDEC succeed in reaching the goal. IRDEC without the classifier lacks guidance towards the goal and therefore the exploration directions are random. When the state spaces are large, the random exploration direactions could lead the failure in passing the critical points, such as the narrow doors connecting each room in Ant FourRooms. Moreover, As the training progresses, the curiosity intrinsic rewards part in the intrinsic reward module vanishes, which makes the exploration harder. The ablation analysis demonstrates that IRDEC can effectively leverage the familiar states in the task-specific behaviour demonstration to bias the exploration direction introduced by the intrinsic reward module for learning the required prior behaviours and connecting them to the task-specific behaviour to solve the tasks. § CONCLUSION In this paper, We present IRDEC, a method that incorporates an intrinsic reward module to proactively expand explored areas while biasing the exploration towards familiar states in the task-specific behaviour demonstration, for endowing the agent with the capabilities to adapt to initial conditions that are unseen from the demonstration. Our proposed method shows its capability of automatically learning the essential prior behaviours and connecting them to the behaviours in the task-specific behaviour demonstration for solving long-horizon tasks with sparse rewards. With our method, agents can adapt to tasks with varied initial conditions from the task-specific behaviour demonstration without requiring additional demonstration of the required prior behaviours. 
Additionally, the empirical results show the efficiency and viability of IRDEC compared to all the baselines. An exciting direction for future work would be applying the method to high-dimensional pixel-based control. IEEEtran
http://arxiv.org/abs/2307.00441v1
20230701234827
Spectral Evidence for Local-Moment Ferromagnetism in van der Waals Metals Fe$_3$GaTe$_2$ and Fe$_3$GeTe$_2$
[ "Han Wu", "Chaowei Hu", "Yaofeng Xie", "Bo Gyu Jang", "Jianwei Huang", "Yucheng Guo", "Shan Wu", "Cheng Hu", "Ziqin Yue", "Yue Shi", "Zheng Ren", "T. Yilmaz", "Elio Vescovo", "Chris Jozwiak", "Aaron Bostwick", "Eli Rotenberg", "Alexei Fedorov", "Jonathan Denlinger", "Christoph Klewe", "Padraic Shafer", "Donghui Lu", "Makoto Hashimoto", "Junichiro Kono", "Robert J. Birgeneau", "Xiaodong Xu", "Jian-Xin Zhu", "Pengcheng Dai", "Jiun-Haw Chu", "Ming Yi" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Physics, University of Washington, Seattle, Washington 98195, USA Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Theoretical Division and Center for Integrated Nanotechnologies, Los Alamos National Laboratory, Los Alamos, NM, USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Physics, University of California at Berkeley, Berkeley, California 94720, USA Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Physics, University of Washington, Seattle, Washington 98195, USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA National Synchrotron Light Source II, Brookhaven National Lab, Upton, New York 11973, USA National Synchrotron Light Source II, Brookhaven National Lab, Upton, New York 11973, USA Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Stanford Synchrotron Radiation Lightsource, SLAC National Accelerator Laboratory, Menlo Park, California 94025, USA Stanford Synchrotron Radiation Lightsource, SLAC National Accelerator Laboratory, Menlo Park, California 94025, USA Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005, USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Material Science and NanoEngineering, Rice University, Houston, Texas 77005, USA Department of Physics, University of California at Berkeley, Berkeley, California 94720, USA Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA Department of Materials Science and Engineering, University of California, Berkeley, USA Department of Physics, University of Washington, Seattle, Washington 98195, USA Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA Theoretical Division and Center for Integrated Nanotechnologies, Los Alamos National Laboratory, Los Alamos, NM, USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Physics, University of Washington, Seattle, Washington 98195, USA [email protected] Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Magnetism in two-dimensional (2D) materials has attracted considerable 
attention recently for both fundamental understanding of magnetism and their tunability towards device applications. The isostructural Fe_3GeTe_2 and  are two members of the Fe-based van der Waals (vdW) ferromagnet family, but exhibit very different Curie temperatures () of 210 K and 360 K, respectively. Here, by using angle-resolved photoemission spectroscopy and density functional theory, we systematically compare the electronic structures of the two compounds. Qualitative similarities in the Fermi surface can be found between the two compounds, with expanded hole pockets in Fe_3GaTe_2 suggesting additional hole carriers compared to Fe_3GeTe_2. Interestingly, we observe no band shift in  across its  of 360 K, compared to a small shift in  across its  of 210 K. The weak temperature-dependent evolution strongly deviates from the expectations of an itinerant Stoner mechanism. Our results suggest that itinerant electrons have minimal contributions to the enhancement of  in  compared to , and that the nature of ferromagnetism in these Fe-based vdW ferromagnets must be understood with considerations of the electron correlations. Spectral Evidence for Local-Moment Ferromagnetism in van der Waals Metals Fe_3GaTe_2 and Fe_3GeTe_2 Ming Yi August 1, 2023 =================================================================================================== The recently discovered van der Waals (vdW) family of ferromagnets exhibits Curie temperatures () ranging from 30 K to above room temperature <cit.>. The remarkable preservation of long-range ferromagnetic order in these materials in the 2D regime position them as a promising class of materials for the development of next-generation spintronic devices <cit.>. Equally importantly, these vdW materials offer a new platform to probe 2D magnetism. Our understanding of magnetism has been developed from two opposing limits. One approach is based on the weak-coupling picture where ferromagnetism arises from spontaneous spin splitting of the itinerant electronic bands near the Fermi level () onsetting at  <cit.>, which can only occur in metals. The other approach is based on a strong-coupling picture where electrons are localized and magnetism arises from the Heisenberg exchange coupling of the local moments, where the magnetic exchange splitting has no temperature dependence across , and is often associated with insulators <cit.>. In real materials, while the two limits exist, many compounds live in a regime where both mechanisms contribute. One such example is the iron-based superconductors (FeSCs), where electron correlations are moderate in between the strongly localized Mott physics of the cuprates and the itinerant spin-density-wave chromium metal. Neutron scattering identifies itinerant spin excitations at low energies with large fluctuating moments up to high energies <cit.>. Even contradictory reports of temperature-dependent exchange splitting have left a standing debate on the nature of magnetism in Fe and Ni metals <cit.>. The vdW ferromagnets can be largely grouped into two families, the insulating Cr-based compounds such as Cr_2Ge_2Te_6 <cit.> and CrI_3  <cit.>, and the metallic Fe-based compounds such as Fe_nXTe_2 (n = 3-5; X = Ge, Ga) (FGTs). The ferromagnetism in the insulating Cr-based compounds indeed can be understood by an anisotropic Heisenberg model where correlations between local moments persist to well above  <cit.>. As a result, the electronic structure only exhibits very subtle evolution across  <cit.>. 
The FGTs are quite different. Consisting of Te-sandwiched vdW slabs, the various members of this family differ structurally in the number of Fe sites within each slab as well as the number of slabs within a unit cell dictated by the stacking order <cit.>. Notably, the 's of the FGTs are close to or even above room temperature (Fig. <ref>a) <cit.>. As metals, the FGTs are often referred to as itinerant magnets. However, ample evidence suggest a coexistence of both local moments and itinerant electrons. Fe_3-xGeTe_2 with a  that varies between 140 K to 220 K <cit.>, in particular, has been demonstrated by neutron scattering to exhibit a dual nature of magnetic excitations <cit.>. Angle-resolved photoemission spectroscopy (ARPES) measurements indicate a deviation from Stone-type spin splitting across  as well as spectral weight transfer suggestive of Kondo behavior <cit.>. Very recently, Fe_3GaTe_2, isostructural to Fe_3GeTe_2, has been synthesized and shown to exhibit a remarkable above-room temperature  of 350 K, along with a high saturation magnetic moment, significant perpendicular magnetic anisotropy energy density, and a large anomalous Hall angle at room temperature <cit.>. These findings highlight the potential of Fe_3GaTe_2 as an exciting material for applications. The identical crystal structure yet drastically different 's in these two compounds offer an opportunity to probe into the nature of the magnetism in these materials. Here, via systematic ARPES measurements and density functional theory calculations, we compare and contrast the electronic structure of  and . We find  to be an effectively hole-doped version of . In a large energy range of the valence bands, we identify a separation of the spectral weight that seems to be consistent with the predicted Fe spin up and spin down states. However, we find no observable shift in the electronic structures of  across its , compared to a subtle shift for . Taken all together, the origin of magnetism in both Fe_3GaTe_2 and Fe_3GeTe_2 deviate strongly from the expectations of the itinerant Stoner model, with  exhibiting an even stronger local moment behavior. Our results indicate that the local moments are crucial for explaining the nature of ferromagnetism in FGTs, and are likely responsible for the much enhanced  in . High-quality  and  single crystals were synthesized by a chemical transport method <cit.>. ARPES measurements were carried out at beamline 5-2 of the Stanford Synchrotron Radiation Lightsource, ESM (21ID-I) beamline of the National Synchrotron Light Source II, and beamlines 7.0.2.1, 10.0.1 and 4.0.3 of the Advanced Light Source, using a DA30, DA30, R4000, and R8000 electron analyzer, respectively. The overall energy and angular resolutions were 15 meV and 0.1°, respectively. All data shown in the main text were taken with 132 eV photons, with additional photon energy-dependence data shown in the SM. All data were taken at 15 K unless otherwise noted. The DFT calculations were carried out by using WIEN2k package which uses the full-potential augmented plane wave plus local orbital as the basis <cit.>. Perdew-Burke-Ernzefhof (PBE) generalized gradient approximation (GGA) was employed for the exchange-correlation functional <cit.> and a 16 x 16 x 3 k-point mesh for self-consistent calculation. As depicted in Fig. 1a, Fe_3XTe_2 has a layered hexagonal crystal structure in the space group P63/mmc (No. 194) <cit.>. 
The lattice parameter for Fe_3GaTe_2 (a = 4.07 Å, c = 16.1 Å) and Fe_3GeTe_2 (a = 3.99 Å, c = 16.3 Å) are similar, as previously reported <cit.>. The unit cell consists of two vdW slabs, each with two nonequivalent Fe sites, Fe I and Fe II, and has the symmetry operators C_3z, C_2y and P (inversion) that enforce the emergence of topological nodal lines in the presence of ferromagnetism, giving rise to a tunable intrinsic anomalous Hall current <cit.>. Our DFT calculations for the FM phase of both compounds are shown in Fig. <ref>e, where the topological crossings at the K point can be seen. The zero-field-cooled (ZFC) and field-cooled (FC) magnetization measurements for the two compounds show a  of 210 K for Fe_3GeTe_2 and 360 K for Fe_3GaTe_2, in agreement with previous reports <cit.>. We also observe clear x-ray magnetic circular dichroism signal at the Fe L-edge for both compounds (see SM). From a comparison of our DFT calculations for the ferromagnetic ground state,  is an effective hole-doped version of , as the band structure is qualitatively similar except a shifting down of the chemical potential in  (Fig. <ref>d). Next, we present the electronic structure of the ferromagnetic phase of Fe_3GaTe_2 and Fe_3GeTe_2 as measured by ARPES. As expected, the core level spectra for the two compounds are very similar except the distinct Ge and Ga 3d peaks. The electronic structure of the two compounds near  are compared in Fig. <ref>, as measured by both linear vertical (LV) and linear horizontal (LH) polarized light. Consistent with previous reports on  <cit.>, we observe two hole Fermi pockets centered at the Γ point: an inner circular pocket and an outer hexagonal pocket (Fig. <ref>d). They are formed by two dispersive bands as observed on the high symmetry cut, and we label them as the α and β bands (Fig. <ref>c). Additionally, a small electron pocket is observed at the K point, which we label as the ω band. From the high symmetry cut, we also observe two other dispersions that do not cross , which we label the ξ and γ bands. For , we also observe two hole Fermi pockets at the Γ point (Fig. <ref>a), both with similar shapes but expanded areas as compared to those in , indicating additional hole charge carriers in  compared to . From the high symmetry cut (Fig. <ref>b), both the inner α band and the outer β band appear to cross  at larger Fermi momenta. The γ band is also observed to shift up in energy compared to that in . When we compare the near- measured dispersions with those by DFT calculations, we find that a renormalization factor of 1.6 can achieve a reasonable agreement for both compounds (Fig. <ref>b,e), including the locations of the hole band tops at Γ. The renormalization factor is consistent with that determined for  previously <cit.>, and is slightly larger than that for Fe metal <cit.>. Having examined the electronic structure in the near  region, we next present the spectra in the large energy range covering the entire valence bands. Figure <ref>a-b shows the spectra within 6 eV below  along the M-K-Γ high symmetry direction for both compounds. Visibly, the spectral is separated into sharp dispersions within 1 eV of  and broad spectral intensity in the -2 to -3 eV energy range. This is clearly shown in the stack of energy distribution curves (EDCs) in Fig. <ref>c-d. 
The broad hump (green markers) largely follows the dispersion and photoemission matrix elements of the sharper bands near , and a clear dip separates the two regimes of sharp quasiparticles and broad spectral weight. Even in this large energy window, it is evident that the overall spectral shape of  is shifted up in energy compared to that of , consistent with the overall hole-doping. To understand the origin of these states, we look at the DFT calculated density of states (DOS) in the ferromagnetic state. Clearly, Fe 3d states dominate the valence bands, with a small contribution by Te and Ge/Ga. To take into consideration the renormalization of the Fe 3d states derived above, we renormalize the Fe partial DOS by 1.6 while leaving the Te and Ge/Ga partial DOS unrenormalized. This results in the spin majority (up) states having a peak near -2 eV and the spin minority (down) states near . This comparison suggests that the sharper quasiparticles and broad hump dichotomy is likely dominated by the spin minority and spin majority states, respectively. Such kind of quasiparticle-dip-broad hump spectral feature has also been reported in other correlated ferromagnets, such as SrRuO_3, which is a metallic ferromagnet where both itinerant electrons and local moments contribute to the magnetism. In that case, this quasiparticle-dip-broad hump spectral feature was also explicitly reported, where strong scattering results in the incoherence of the spin majority states <cit.>. In Fe and Ni metals, LDA+Dynamical Mean Field Theory has also captured such spectral lineshape in the single-particle spectral function by including the many-body effects of the 3d states <cit.>. To examine the role of the near- bands in the ferromagnetism, we carried out temperature dependence study of the electronic structures for both Fe_3GaTe_2 and Fe_3GeTe_2 across their respective . Figure <ref>b displays the band dispersion of Fe_3GaTe_2 along the M-K-Γ direction at selected temperatures 50 K and 410 K, with additional intermediate temperatures shown in the SM. From this dataset, we can extract the temperature evolution of the bands by extracting the EDCs at specific momenta indicated by the orange and green lines in Fig. <ref>(b) (Fig. <ref>e,f). There is no noticeable shift in the peak position of the EDCs for both α and ζ bands with increasing temperature through the  of 360 K, as shown also from the fitted peak positions in Fig. <ref>(g). Similar measurement and analysis was also carried out for  ( = 210 K). Figure <ref>a shows the band dispersions at 50 K and 250 K, with additional intermediate temperatures shown in the SM. While the β band is observed to also not shift, a small shift of the ξ band is observed across , consistent with previous report <cit.>. The lack of observable shift of bands across  strongly deviates from the expected behavior of itinerant ferromagnets. According to the Stoner model, the exchange splitting is expected to disappear above . For Fe_3GeTe_2 and Fe_3GaTe_2, we can estimate the exchange splitting sizes from DFT calculations to approximately be 1.5 eV and 1.7 eV, respectively. The expected shift of majority/minority bands would be half of the exchange splitting divided by the band renormalization factors, resulting in 0.5 eV and 0.57 eV, respectively. 
The temperature evolution of the band shift according to the Stoner model can then be estimated by scaling this energy scale to the existing temperature-dependent bulk magnetization in  <cit.> or magnetic moment measured by neutron diffraction for  <cit.>, which are plotted as the grey dashed lines in Fig. <ref>g. We note that for systems whose ferromagnetism is contributed by both itinerant electrons and local moments, partial closing of the exchange splitting is observed across , such as Fe metal <cit.> and SrRuO_3 <cit.>. The stark contrast between the expected shift and the observed band shift here suggests that itinerant electrons play a minimal role in the ferromagnetism in Fe_3GeTe_2 and Fe_3GaTe_2. While some finite shift is still observed in , we observe no shift in , suggesting that local moments play an even more dominant role in . Taking all the presented evidence together, we come to an understanding of the ferromagnetism of the isostructural Fe_3GaTe_2 (∼ 360 K) and Fe_3GeTe_2 (∼ 210 K) as the following: while both systems are metallic and exhibit clear Fermi surfaces, the valence band spectral intensity exhibit a quasiparticle-dip-broad hump feature that seem to indicate non-negligible correlation effects. In addition, the itinerant charge carriers near  show minimal modifications across , with  showing even less observable changes. This indicates that the large enhancement of  in  cannot be due to the change in the itinerant charge carriers and must be a result of the local moments. Our findings therefore demonstrates that the  and  systems are moderately correlated and a comprehensive understanding of the magnetism in these Fe-based vdW ferromagnets must take into consideration the many-body interactions of the Fe 3d states. This research used resources of the Advanced Light Source, and the Stanford Synchrotron Radiation Lightsource, both U.S. Department Of Energy (DOE) Office of Science User Facilities under contract nos. DE-AC02-05CH11231 and AC02-76SF00515, respectively. ARPES work is supported by the U.S. DOE grant No. DE-SC0021421, the Gordon and Betty Moore Foundation’s EPiQS Initiative through grant no. GBMF9470. YCG is supported by the Robert A. Welch Foundation, Grant No. C-2175 (M.Y.). Work at University of California, Berkeley, is funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05-CH11231 (Quantum Materials program KC2202). Work at Los Alamos was carried out under the auspices of the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) under Contract No. 89233218CNA000001, and was supported by LANL LDRD Program and in part by Center for Integrated Nanotechnologies, a DOE BES user facility, in partnership with the LANL Institutional Computing Program for computational resources.
http://arxiv.org/abs/2307.02305v2
20230705140550
Bootstrapping the $a$-anomaly in $4d$ QFTs: Episode II
[ "Jan K. Marucha" ]
hep-th
[ "hep-th" ]
§ INTRODUCTION AND SUMMARY The a-anomaly coefficient of 4d CFTs is of great interest to physicists. Not only can it be thought of, analogously to the 2d central charge c, as a measure of the number of degrees of freedom, but, exactly like c in 2d, it is also strictly monotonic along the RG flow, with a_UV > a_IR. The a-anomaly coefficient can be computed in simple cases such as free theories <cit.> and weakly coupled theories <cit.>. Nevertheless, we are far from knowing the full set of allowed values of the a-anomaly in 4d QFTs. As shown by Komargodski and Schwimmer <cit.>, the a-anomaly of the UV CFT may be probed by coupling a dilaton test particle to the quantum field theory of interest. The dilaton-dilaton scattering amplitude is related to the a-anomaly coefficient, 𝒯_BB→ BB = a/f^4(s^2+t^2+u^2) + … with a = a_UV - a_IR, or, in the case of gapped theories, a = a_UV. In the limit of probe coupling f→∞, the original theory is unperturbed. Here we denote the dilaton by B, and s,t,u are the Mandelstam invariants, defined in terms of the (all-ingoing) momenta of the process by s = (p_1+p_2)^2, t = (p_1+p_3)^2, u = (p_1+p_4)^2. The space of consistent scattering amplitudes may be investigated from the point of view of the S-matrix bootstrap. For each amplitude, a general ansatz with the appropriate pole and branch-cut structure is constructed, and then, by finding the range of parameters consistent with unitarity bounds, bounds on quantities of interest (in this case, the a-anomaly) can be found. Such computations have been able to constrain the parameter range in a large number of 4d QFTs, ranging from ϕ^4 scalar theories <cit.><cit.><cit.>, to pion scattering in QCD <cit.><cit.>, to massless vector particles <cit.>. This paper studies QFTs with one or more stable scalar particles, coupled to the dilaton field. The details of setting up the amplitude ansatz with the appropriate analytic structure and obeying adequate soft conditions are given in the next section, along with the unitarity conditions. This is followed by a section discussing the numerical setup, translating the S-matrix bootstrap task into a semidefinite programming (SDP) problem. Previously, similar computations <cit.> reproduced the value a = ( 5760π^2 )^-1 found by analytic CFT computations <cit.> for the theory of a free scalar, and found a lower bound on a for a theory with a single stable scalar particle and a ℤ_2 global symmetry. This follow-up paper generalizes the problem and investigates theories of * a single stable scalar particle (no ℤ_2) * two stable scalar particles (with varying mass ratio) * many stable scalar particles The results of these computations are discussed in detail in section <ref>, and the corresponding bounds on the a-anomaly are presented in the table below. Theory a-anomaly Annotations Single free scalar = 1/5760π^2 Derived analytically in <cit.>, confirmed to saturate unitarity bounds <cit.>. Single stable ℤ_2-odd scalar ≳ 0.32 · Investigated in <cit.>. Single stable scalar ≳ 0.15 · No resonances at s < 4m_A^2, with m_A the mass of the lightest particle. Two stable scalars ≳ 0.034 · The minimal a-anomaly occurs for m_X^2 ≈ (2.5±0.1)m_A^2. 
Many stable scalars ≳ 0.036 · The data comes only from preliminary investigations, which explains why the extrapolated bound is slightly inconsistent with the previous case. § S-MATRIX BOOTSTRAP SETUP To probe a quantum field theory, one introduces the dilaton, a massless scalar particle basically coupled to the trace of the energy-momentum tensor <cit.>. The S-matrix bootstrap setup containing the lightest stable scalar particle of the theory, called A (and of mass m_A, in practical computations normalized to 1), and dilaton B (of mass m_B = 0) contains the following amplitudes: AA→ AA, AA→ AB, AA→ BB, AB→ BB, BB→ BB, AB→ AB. Note that AB → AB is s-t crossing of AA→ BB, and the other amplitudes are fully crossing symmetric. Similarly to case of ℤ_2-symmetric case described in <cit.>, the nontrivial part of the scattering amplitudes depends on the probe coupling, with 𝒯_AA→ AA = 𝒯̃_AA→ AA + O(f^-1) 𝒯_AA→ AB = 1/f𝒯̃_AA→ AB + O(f^-2) 𝒯_AA→ BB = 1/f^2𝒯̃_AA→ BB + O(f^-3) 𝒯_AB→ BB = 1/f^3𝒯̃_AB→ BB + O(f^-4) 𝒯_BB→ BB = 1/f^4𝒯̃_BB→ BB + O(f^-5) 𝒯_AB→ AB = 1/f^2𝒯̃_AA→ BB + O(f^-3) The amplitude 𝒯̃_BB→ BB = a_(s^2+t^2+u^2), via <cit.>, relates the a-anomaly of UV CFT to dilaton scattering and is the central point of interest of the numerical experiments described in this paper. §.§ Unitarity To describe the unitarity conditions, it's most feasible to decompose amplitudes (functions 𝒯(s,t) ) into partial waves 𝒯^ℓ(s). The details of such decomposition is described in great extent in <cit.>. For each value of s above the physical threshold, the unitarity of 2→ 2 particle scattering can be described using the condition [ 1 0 0 𝒯^* ℓ_AA→ AA 𝒯^* ℓ_AA→ AB 𝒯^* ℓ_AA→ BB; 0 1 0 𝒯^* ℓ_AB→ AA 𝒯^* ℓ_AB→ AB 𝒯^* ℓ_AB→ BB; 0 0 1 𝒯^* ℓ_BB→ AA 𝒯^* ℓ_BB→ AB 𝒯^* ℓ_BB→ BB; 𝒯^ℓ_AA→ AA 𝒯^ℓ_AB→ AA 𝒯^ℓ_BB→ AA 2 𝒯^ℓ_AA→ AA 2 𝒯^ℓ_AA→ AB 2 𝒯^ℓ_AA→ BB; 𝒯^ℓ_AA→ AB 𝒯^ℓ_AB→ AB 𝒯^ℓ_BB→ AB 2 𝒯^ℓ_AA→ AB 2 𝒯^ℓ_AB→ AB 2 𝒯^ℓ_AB→ BB; 𝒯^ℓ_AA→ BB 𝒯^ℓ_AB→ BB 𝒯^ℓ_BB→ BB 2 𝒯^ℓ_AA→ BB 2 𝒯^ℓ_AB→ BB 2 𝒯^ℓ_BB→ BB ]≽ 0 In f→ 0 limit this condition simplifies to [ 1 𝒯̃^* ℓ_AA→ AA 𝒯̃^* ℓ_AA→ AB 𝒯̃^* ℓ_AA→ BB; 𝒯̃^ℓ_AA→ AA 2𝒯̃^ℓ_AA→ AA 2𝒯̃^ℓ_AA→ AB 2𝒯̃^ℓ_AA→ BB; 𝒯̃^ℓ_AA→ AB 2𝒯̃^ℓ_AA→ AB 2𝒯̃^ℓ_AB→ AB 2𝒯̃^ℓ_AB→ BB; 𝒯̃^ℓ_AA→ BB 2𝒯̃^ℓ_AA→ BB 2𝒯̃^ℓ_AB→ BB 2𝒯̃^ℓ_BB→ BB ]≽ 0 Introducing this full matrix (4×4 with 5 independent amplitudes) is computationally very expensive. Instead, via Sylvester's criterion, two necessary conditions are investigated numerically: [ 1 𝒯̃^* ℓ_AA→ AA 𝒯̃^* ℓ_AA→ BB; 𝒯̃^ℓ_AA→ AA 2𝒯̃^ℓ_AA→ AA 2𝒯̃^ℓ_AA→ BB; 𝒯̃^ℓ_AA→ BB 2𝒯̃^ℓ_AA→ BB 2𝒯̃^ℓ_BB→ BB ]≽ 0, [ 2𝒯̃^ℓ_AB→ AB ]≽ 0, Note that these two conditions are sufficient if processes AA→ AB and AB→ BB are forbidden by a ℤ_2 symmetry of theory as assumed in <cit.>. Looking at this subset of unitarity conditions reduces problem to three independent amplitudes: AA→ AA, AA→ BB, BB→ BB . §.§ Analicity and crossing The `physical threshold' mentioned previously requires clarification. The usual formula s > ( max(∑_i m_i,in, ∑_i m_i,out, ) )^2 would suggest s>0. However, in the probe limit f →∞, the discontinuities only start as a result of QFT intermediate states, as the dilaton is decoupled from the theory. This implies each amplitude 𝒯̃ has a branch cut beginning at s = 4m_A^2. 
This provides a natural choice for parametrizing finite part of each amplitude 𝒯̃_AA→ AA(s,t,u) = ∑_a,b,cα_abc[a]s4/3[b]t4/3[c]u4/3 + … 𝒯̃_AA→ BB(s,t,u) = ∑_a,b,cβ_abc[a]s0[b]t1[c]u1 + … 𝒯̃_BB→ BB(s,t,u) = ∑_a,b,cγ_abc[a]s0[b]t0[c]u0 + … with ss_0 = √(4m_A^2 - s_0) - √(4m_A^2 - s)/√(4m_A^2 - s_0) + √(4m_A^2 - s) Indeed, the function ss_0 has the appropriate branch cut starting at s = 4m_A^2. The unitarity conditions (<ref>) are to be imposed above this cut, at s∈ (4m_A^2; ∞). The choice of second parameter of ρ function (the origin of rho series) is to keep crossing relations and soft conditions as simple as possible, which will be justified in next sections. With amplitudes AA→ AA and BB→ BB being fully crossing symmetric, the crossing conditions imply α_abc = α_bac = α_cba and γ_abc = γ_bac = γ_cba, and for amplitude AA→ BB, which is symmetric in tu variables, the condition is β_abc = β_acb, and the amplitude AB→ AB is related by 𝒯̃_AB→ AB(s,t,u) = 𝒯̃_AA→ BB(t,s,u) The Mandelstam variables s,t,u are not independent, with s+t+u = ∑ m_i^2, and m_i's being masses of incoming and outgoing particles. To reduce the redundancy in parametrization of 𝒯̃'s, one imposes α_abc = 0 if a≠0 b≠0 c≠0 and similar for other amplitudes. §.§ Pole structure and soft conditions Having a QFT with stable particles of masses m_A, m_B = 0 and possibly some other matter content of mass m_X leads to existence of poles in scattering amplitudes. Keeping in mind crossing symmetries, the poles due to exchange of particle A in aforementioned amplitudes are 𝒯̃_AA→ AA = [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_0; [right =of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_0; [above left=of a](i1); [above right=of b] (o1) ; [below left=of a] (i2); [below right=of b] (o2); * (i1), (i2)–(a)–(b)–(o1),(o2); + [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_0; [below=of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_0; [above left=of a](i1); [above right=of a] (o1) ; [below left=of b] (i2); [below right=of b] (o2); * (i1),(o1)–(a)–(b)–(i2),(o2); + [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_0; [below=of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_0; [above left=of a](i1); [above right=of a] (o1) ; [below left=of b] (i2); [below right=of b] (o2); * (i1),(o2)–(a)–(b)–(i2),(o1); + … = = -|g_0|^2 (1/s-m^2_A+1/t-m^2_A+1/u-m^2_A) + … 𝒯̃_AA→ BB = [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_0; [right =of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_2; [above left=of a](i1); [above right=of b] (o1) ; [below left=of a] (i2); [below right=of b] (o2); * (i1), (i2)–(a)–(b)–[dashed](o1),(o2); + [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_1; [below=of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_1; [above left=of a](i1); [above right=of a] (o1) ; [below left=of b] (i2); [below right=of b] (o2); * (i1)–(a)–(b)–(i2), (o1)–[dashed](a), (o2)–[dashed](b), ; + [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_1; [below=of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_1; [above left=of a](i1); [above right=of a] (o1) ; [below left=of 
b] (i2); [below right=of b] (o2); * (i1)–(a)–(b)–(i2), (o2)–[dashed](a), (o1)–[dashed](b), ; + … = = -g_0 g_2/s-m^2_A - |g_1|^2(1/t-m^2_A+1/u-m^2_A) + … 𝒯̃_BB→ BB = [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_2; [right =of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_2; [above left=of a](i1); [above right=of b] (o1) ; [below left=of a] (i2); [below right=of b] (o2); * (i1), (i2)–[dashed](a)–(b)–[dashed](o1),(o2); + [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_2; [below=of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_2; [above left=of a](i1); [above right=of a] (o1) ; [below left=of b] (i2); [below right=of b] (o2); * (i1),(o1)–[dashed](a)–(b)–[dashed](i2),(o2); + [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_2; [below=of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_2; [above left=of a](i1); [above right=of a] (o1) ; [below left=of b] (i2); [below right=of b] (o2); * (i1),(o2)–[dashed](a)–(b)–[dashed](i2),(o1); + … = -|g_2|^2 (1/s-m^2_A+1/t-m^2_A+1/u-m^2_A) + … where solid line is propagation of A and dashed line is propagation of B. If one introduces an additional particle m_X, other poles are introduced due to its exchange: 𝒯̃_AA→ AA = [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_0'; [right =of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_0'; [above left=of a](i1); [above right=of b] (o1) ; [below left=of a] (i2); [below right=of b] (o2); * (i1), (i2)–(a)–[double](b)–(o1),(o2); + [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_0'; [below=of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_0'; [above left=of a](i1); [above right=of a] (o1) ; [below left=of b] (i2); [below right=of b] (o2); * (i1),(o1)–(a)–[double](b)–(i2),(o2); + [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_0'; [below=of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_0'; [above left=of a](i1); [above right=of a] (o1) ; [below left=of b] (i2); [below right=of b] (o2); * (i1),(o2)–(a)–[double](b)–(i2),(o1); … = = -|g'_0|^2 (1/s-m^2_X+1/t-m^2_X+1/u-m^2_X) + … 𝒯̃_AA→ BB = [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_0'; [right =of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_2'; [above left=of a](i1); [above right=of b] (o1) ; [below left=of a] (i2); [below right=of b] (o2); * (i1), (i2)–(a)–[double](b)–[dashed](o1),(o2); + … = -g'_0 g'_2/s-m^2_X + … 𝒯̃_BB→ BB = [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_2'; [right =of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_2'; [above left=of a](i1); [above right=of b] (o1) ; [below left=of a] (i2); [below right=of b] (o2); * (i1), (i2)–[dashed](a)–[double](b)–[dashed](o1),(o2); + [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_2'; [below=of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_2'; [above left=of a](i1); [above right=of a] (o1) ; [below left=of b] (i2); [below right=of b] (o2); * 
(i1),(o1)–[dashed](a)–[double](b)–[dashed](i2),(o2); + [baseline=((a.base)!0.5!(b.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_2'; [below=of a, blob, centered, pattern=none, node font=, inner sep = .3mm] (b) g_2'; [above left=of a](i1); [above right=of a] (o1) ; [below left=of b] (i2); [below right=of b] (o2); * (i1),(o2)–[dashed](a)–[double](b)–[dashed](i2),(o1); + … = = -|g_2'|^2 (1/s-m^2_X+1/t-m^2_X+1/u-m^2_X) + … The vertex factor [baseline=((a.base))] [small] [blob, centered, pattern=none, node font=, inner sep = .3mm] (a) g_1'; [left=of a](i1); [above right=of a](i2); [below right=of a](i3); * (i1) – (a); (i2) –[double] (a); (i3) –[dashed] (a) ; = 0 This is a result of construction of dilaton effective action. As detailed in <cit.>, the dilaton couples to relevant operators (usually masses) in perturbed CFT action. With only diagonalized mass terms (like m_A^2 ϕ_A^2 and m_X^2 ϕ_X^2) no effective term including dilaton and two different types of matter may arise. Same construction also fixes some residues of 𝒯̃_AA→ BB, as well as the finite part of amplitude in their vicinity. The construction, described in more details in <cit.> gives 𝒯̃_AA→ BB = - 2 (m_A^4/t-m_A^2 + m_A^4/u-m_A^2) - m_A^2 + O(s,t-m_A^2, u-m_A^2) around the point s=0,t=m_A^2,u=m_A^2. The final soft condition, given by (<ref>), also defines a goal of bootstrap experiment. 𝒯̃_BB→ BB = a(s^2+t^2+u^2) + … The unitarity conditions on 𝒯̃_BB→ BB will provide a non-zero bound on value of a-anomaly in theories containing spin-0 particles. The next section provides details about writing these conditions in the form of a semidefinite problem, which will then be investigated numerically. § NUMERICAL IMPLEMENTATION In practical computations, the amplitude is parametrized as linear combination of `building blocks': 𝒯(s,t,u) = ∑𝚌𝚘𝚎𝚏𝚏_i · f_i(s,t,u) Constructing an ansatz with analytic properties described in previous section, and imposing unitarity conditions on a set of matrices (<ref>) derived from these amplitudes[The matrices are therefore linear functions of 𝚌𝚘𝚎𝚏𝚏_i as well.] allows to form the question about a-anomaly bounds as a semidefinite matrix problem (SDP), which allows using specialized solvers. Therefore, using SDPB<cit.> to find a vector 𝚌𝚘𝚎𝚏𝚏_i that minimizes a-anomaly given by (<ref>), and describes amplitudes that obey unitarity bounds (<ref>) can be done. However, besides including ρ series, described by (<ref>) (already a linear combination), the ansatz need to contain poles, and follow soft conditions described in previous section. §.§ Poles As mentioned previously, the terms in amplitude resulting from exchange of (<ref>) are non-linear functions of coupling constants g_0, g_1, g_2. To be able to put the problem into semidefinite linear form, one shall define `independent' linear coefficients g_0^2 g_1^2 g_0 g_2 g_2^2 For this parametrization to be equivalent, these parameters have to obey equation · = · which may be written as a condition on matrix determinant = 0 As ≥ 0 and ≥ 0, the semidefinitness of such matrix [ ; ]≽ 0 does impose the inequality ·≥· without any additional unwanted conditions. As the term in amplitude 𝒯̃_BB→ BB related to parameter , to be precise 𝒯̃_BB→ BB⊃ -·(1/s-1+1/t-1+1/u-1) , is purely real, and the unitarity conditions (<ref>) contain only 𝒯̃_BB→ BB, there is no additional bound on than imposed by (<ref>). 
In later section it will be shown that to minimize a-anomaly is also to be minimized, and without any further constraints, that will lead to (<ref>) being saturated, and (<ref>) holding true. §.§ Soft conditions To impose soft conditions (<ref>) around points s=0, t=0, u=0, first, one fixes 𝚐𝚋 = 2 to match the residue, and then changes ansatz terms related to 𝚐𝚋𝚋. With t1 = 𝒪(t-1), and u1 = 𝒪(u-1) this is particularly simple: 𝒯̃_AA→ BB = - 2(1/t-1 + 1/u-1) -(1 + 1/s-1)+ +∑_a,b,cβ_abcs0^at1^bu1^c That guarantees the finite part of amplitude to be independent of value of 𝚐𝚋𝚋, therefore, all needed to impose correct soft condition is setting β_000=-1. §.§ The goal: a-anomaly To relate a-anomaly to coefficients of ansatz one needs to compare the relation 𝒯̃_BB→ BB = a(s^2+t^2+u^2) + … to terms of the ansatz. Expanding the ansatz 𝒯̃_BB→ BB = -𝚐𝚍𝚒𝚕𝚊(1/s-1+1/t-1+1/u-1)+ +∑_a,b,cγ_abc s0^at0^bu0^c around s=t=u=0 gives 𝒯̃_BB→ BB = (const.) + (𝚐𝚍𝚒𝚕𝚊 + γ_001/128 + γ_002/256 - γ_011/512)(s^2+t^2+u^2) + … The constant term is purely real, so, in practice, as the unitarity matrices contain only 𝒯̃_BB→ BB, doesn't have to be explicitly fixed to 0. §.§ Additional particles The procedure of including another particles, as described by (<ref>), follows very similarly. Again, one has to introduce `independent' linear coefficients g_0^2 g_0 g_2 g_2^2 with semidefinite condition [ ; ]≽ 0 . To keep the soft conditions on 𝒯̃_AA→ BB intact, the amplitude term related to is 𝒯̃_AA→ BB⊃ -(1/m_X^2 + 1/s-m_X^2) which again, has no contribution to finite part of amplitude around s=0, t=u=1. The contribution to a-anomaly from terms described in (<ref>), is derived as before, giving additional contribution of 𝒯̃_BB→ BB = … + 𝚐𝚍𝚒𝚕𝚊𝚇/m_X^6(s^2+t^2+u^2) + … Such construction can be repeated to include arbitrary many resonances resulting from exchange of particles of masses 1<m_X<2. The condition (<ref>) will again always be saturated when minimizing a-anomaly. §.§ Improvement terms Although, any amplitude of analytical properties described before can be reproduced by a linear combination of resonances, and (infinite) ρ series mentioned before, in practical computations one needs to limit the experiment to finite number of terms, usually with α_abc≠ 0 only for some a+b+c ≤. Given these limitations, it is often beneficial to include `redundant' terms in the ansatz to improve convergence in such case. The `threshold singularity term', as introduced in <cit.> corresponds to bound state of two A particles, and is included in AA→ AA amplitude ansatz as 𝒯̃_AA→ AA⊃ξ·( 1/s4/3-1 + 1/t4/3-1 + 1/u4/3-1) with analytic bound on the related coefficient ξ∈[ξ_min, 0], with ξ_min = - 32 √(6)π. The early trials (not plotted in this paper) showed better convergence with such addition, and its behavior in experimental data is discussed in the next section. Along with improvement terms in amplitude AA → AA, as described in greater detail in <cit.>, dilaton-to-dilaton scattering scattering amplitude can be expanded by including term proportional to 𝒯̃_BB→ BB^free, corresponding amplitude in theory of free massive boson (so derived from 𝒯̃_AA→ AA = 0). 
Details of such derivation were described extensively in <cit.>, and, quoting the relevant part, [𝒯_BB→ BB^free(s,t)] = 1/32π√(1-4/s) -1/4π sln(1+√(1-4/s)/1-√(1-4/s)) -1/4π 1/su1/√(1+4t/su)ln(1/s-u/st+u/2t[1+√(1-4/s)√(1+4t/us)]/1/s-u/st+u/2t[1-√(1-4/s)√(1+4t/us)]) -1/4π 1/st1/√(1+4u/st)ln(1/s-t/su+t/2u[1+√(1-4/s)√(1+4u/ts)]/1/s-t/su+t/2u[1-√(1-4/s)√(1+4u/ts)]) and 𝒯̃^free_BB→ BB = (s^2+t^2+u^2) + … with = 1/5760π^2. Expanding ansatz with a term 𝒯̃_BB→ BB⊃𝚏𝚛𝚎𝚎𝙰𝚖𝚙·𝒯̃_BB→ BB/ along with other contribution gives a goal for the optimization problem a = 𝚏𝚛𝚎𝚎𝙰𝚖𝚙 + 𝚐𝚍𝚒𝚕𝚊 + 𝚐𝚍𝚒𝚕𝚊𝚇/m_X^6_or sum of such terms for multiple resonances+ γ_001/128 + γ_002/256 - γ_011/512 §.§ The grid, the limitations, the (CPU) time. The unitarity condition (<ref>) has to be imposed on every s ∈(4, ∞). In practice, SDP computations are limited to finite number of samples in s. The grid of values of s has to be chosen carefully to ensure convergence. As the most variation in 𝒯's occur around physical threshold s ≳ 4[This statement is intentionally broad, and comes from trials and errors. If an answer to SDP was computed on too small grid, the unitarity violations between grid points are usually found in vicinity of threshold.], and shall cover entire range from 4 to infinity somewhat uniformly (on log scale) for large energies. The grid used for the computations is based on Chebyshev grid in ρ variables. With 40 = 1 ∞0 = -1 we introduced grid of δ_k = 1 + cos(2k-1/2nπ)/2 for n grid points (and δ_i ∈ (0,1)), which is used to construct a grid of values in s via relation s_k0 = e^i πδ_k . The resulting grid of points s_i has the desired properties mentioned before (large number of points around threshold, and rapidly increasing spacing between points for large s, allowing to decently probe the infinity). With pilot computations to establish sufficient grid size, we found no difference between computations made for n=250 and n=300 grid points in s, and smaller sizes, like n=100, n=150 giving different, and explicitly wrong answers. All computations presented in following section were made using grid of 300 values in s, constructed using algorithm above. The main building block of each amplitude is the ρ series, 𝒯̃ = ∑_a,b,cα_abcss_0^att_0^buu_0^c + ⋯ that must be terminated for real computations. With time complexity of problem growing like O(|_i|^3) <cit.>, the evaluation time sharply grows with number of terms in ρ series. Truncating each ρ series by imposing α_abc = 0 for a+b+c > is how limiting the number of free coefficients is done. However, with overall time complexity O(^6), the line between almost impossible and fast and inexpensive SDPB computation is thin. This line, with hardware accessible for computations in this paper is around =40. The convergence with is something that needs to be discussed separately for each experiment. Other limitation is the number of spins included in (<ref>). The computation time grows linearly with number of spins included, so pushing to the safe side is not computationally prohibitive. The experiment in this paper show that = + 12 is a safe choice, and it is used for later numerics. The precision of numerical values is another important topic in bootstrap computations. Many term cancelations in computations need large precision floating point numbers, and the experiments described later are no exception. 512 bits of mantissa precision was found to be (safely) more than enough and was used in the numerics for this paper. 
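For concreteness, the grid construction described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function and variable names are ours, and the closed-form inversion of the relation ρ(s_k|0) = e^{±iπδ_k} along the cut (with m_A normalised to 1 and the origin of the ρ map taken at s_0 = 0) is our own algebra rather than the code actually used for the numerics.

import numpy as np

def rho(s, s0=0.0, m=1.0):
    # conformal map with a branch cut starting at s = 4 m^2
    a = np.sqrt(4 * m**2 - s0 + 0j)
    b = np.sqrt(4 * m**2 - s + 0j)
    return (a - b) / (a + b)

def chebyshev_s_grid(n, m=1.0):
    # Chebyshev grid delta_k in (0, 1), k = 1, ..., n
    k = np.arange(1, n + 1)
    delta = (1.0 + np.cos((2 * k - 1) * np.pi / (2 * n))) / 2.0
    # inverting rho(s_k | 0) = exp(+/- i pi delta_k) on the cut gives, for m = 1,
    # the closed form below (our own algebra)
    s = 4.0 * m**2 / np.cos(np.pi * delta / 2.0) ** 2
    return delta, s

delta, s = chebyshev_s_grid(300)
assert np.allclose(np.abs(rho(s)), 1.0)   # the grid indeed lies on the unit circle in rho
print(s.min(), s.max())                   # points accumulate just above s = 4, spacing grows towards infinity

The printed values confirm the qualitative behaviour stated above: the grid is dense near the physical threshold and the spacing between points grows rapidly at large s.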
§ RESULTS §.§ A story of one particle The first performed experiment includes matter (particle A) and dilaton (particle B) without additional residues. The numerical data allowed to find a feasible range of ρ series size and number of partial waves included in unitarity constraints, and the convergence pattern is shown on figure <ref>. The close-to-linear behavior of a/ as a function of 1/ allows to extrapolate the limit of →∞, giving the answer of a ≳ 0.1517·. This can be concluded to be absolute minimum a-anomaly of any theory with a single particle of spin 0. As presented on a figure, the number of spins required for convergence depends on the ansatz size, however over entire range of feasible (as time complexity grows approximately as O(^6) and experiments with > 40 became prohibitively expensive), the difference between solutions found with = +12 and = +8 differed only marginally, proving = +12 is a safe choice for next experiments. The strength of 3-point couplings may be extracted from data. The residue of pole in 𝒯̃_AA→ AA saturates the previously found bounds <cit.>, however pole in 𝒯̃_BB→ BB does not contribute significantly to a-anomaly (see figure <ref>). On the other hand, one would expect that the parameter associated with bound state at threshold, 1/s4/3 would be saturating analytic bounds, as in scalar theories investigated in <cit.>. This is not the case, as optimized amplitude has associated parameter converging to about 0.15 of minimal value of -32√(6)π, as shown on a plot. The second of improvement terms, doesn't appear to converge to any value, similarly to elements of γ_abc, as expected with having an ansatz that is (to some extent) redundant. Both patterns are shown on figure <ref>. §.§ A story of two particles When considering resonances coming from exchange of massive particle X of mass m_X^2 > 1, the picture changes drastically. With square of mass m_X close to 1 or 4 the absolute minimum of a-anomaly is close to 0.15· found which was expected, as these contributions are similar either to pole at s=1 or to the threshold singularity. However, in between, the minimal value of anomaly dips at a ≈ 0.034· for the mass m_X^2 = 2.5±0.1. The behavior of three-point couplings (shown on figure <ref>) and threshold singularity term can be separated into two regions. For m_X^2 ≤ 2 the value of g_0 is close to 0, and |g_0'| (matter-matter-X) saturates the unitarity bound of 3-point coupling found in previous S-matrix bootstrap experiments<cit.>. Above m_X^2 = 2 the coupling |g_0| starts to grow rapidly, maximizing around m_X^2 = 2.5±0.1. The opposite applies to threshold singularity term ξ - it decays quickly from ≈ 0.15 of minimal value coming from unitarity bounds (ξ_min= -32 √(6)π) to decay to 0 at m_X^2 = 2. It looks like a pole below m_X^2 = 2 absolutely consumes pole at m_A^2=1, and pole above consumes threshold singularity, when it comes to minimization of a-anomaly. Somewhat similar behavior (of disappearance of resonance at mass of A) is observed at dilaton-dilaton-matter and dilaton-dilaton-X couplings, as plotted in figure <ref>. §.§ A story of many particles To approach a general theory containing at least a single particle of spin 0, a theory with many possible resonances is investigated. Instead of introducing single extra resonance of mass m_X, a set of them is included in the ansatz, with masses[The resonance at m_i = 3.0 is missing due to a numerical error.] 
m_i^2 = 1.1, 1.2, …, 3.8, 3.9 with respective 3-point couplings g_0(m_i) and g_2(m_i), with A's and with dilatons respectively. Surprisingly, the absolute value of a-anomaly in such case can be extrapolated to a_min≈ 0.036 · with convergence pattern presented on the figure <ref>. The exact numerical result shall be taken with a grain of salt, as little data points were evaluated due to high complexity of computations. However, this value being so close to minimum of a-anomaly from previous section shall bring attention to 3-point couplings related to each of allowed intermediate particles. When taking a closer look on figure <ref>, one can notice the only significant contribution to the amplitudes results from resonances at mass m_i^2 = 2.4, m_i=2.5. The natural conjecture from this observation is that the non-trivial theory that really minimizes a-anomaly contains two stable particles, one of mass m_A (normalized to 1 in the experiment), and another with m_X^2 between 2.4 and 2.5. The further investigation, with more detailed grid of allowed resonances, shall eventually bring a definitive answer to question of minimal a-anomaly of non-trivial theory containing at least one spin-0 particle. § DISCUSSION The question of absolute minimum of a-anomaly of UV CFTs is far from answered. The research presented in this paper finds (finite, non-zero) bound on a-anomaly for range of theories including stable scalars, and with results, more questions arise. The data on figure <ref> suggest the scalar theory minimizing a maybe consists only of two particles, with mass related by m_X^2 ≈ 2.4 m_A^2, and more precise numerical experiments in terms of allowed masses m_X shall be considered. The possibility of existence of theories with higher-spin matter can neither be ruled out. However, none of such experiments can answer the question "What is this (a-minimizing) theory?", or what is the UV CFT corresponding to it. For now, the answer is unknown. § ACKNOWLEDGEMENTS The author is supported by the Swiss National Science Foundation through the project 200020_197160 and through the National Centre of Competence in Research SwissMAP. The author also received some support from the Simons Foundation grant 488649 (Simons Collaboration on the Non- perturbative Bootstrap). The author wants to thank Biswajit Sahoo, Denis Karateev and João Penedones for countless fruitful discussions, time, and guidance regarding the research presented in this paper. JHEP
http://arxiv.org/abs/2307.00782v1
20230703065503
ContextSpeech: Expressive and Efficient Text-to-Speech for Paragraph Reading
[ "Yujia Xiao", "Shaofei Zhang", "Xi Wang", "Xu Tan", "Lei He", "Sheng Zhao", "Frank K. Soong", "Tan Lee" ]
cs.CL
[ "cs.CL", "cs.AI", "eess.AS" ]
August 1, 2023 ================== While state-of-the-art Text-to-Speech systems can generate natural speech of very high quality at sentence level, they still meet great challenges in speech generation for paragraph / long-form reading. Such deficiencies are due to i) ignorance of cross-sentence contextual information, and ii) high computation and memory cost for long-form synthesis. To address these issues, this work develops a lightweight yet effective TTS system, ContextSpeech. Specifically, we first design a memory-cached recurrence mechanism to incorporate global text and speech context into sentence encoding. Then we construct hierarchically-structured textual semantics to broaden the scope for global context enhancement. Additionally, we integrate linearized self-attention to improve model efficiency. Experiments show that ContextSpeech significantly improves the voice quality and prosody expressiveness in paragraph reading with competitive model efficiency. Audio samples are available at: https://contextspeech.github.io/demo/ Index Terms: Text-to-Speech, Contextual Modeling § INTRODUCTION Deep learning is powerful for speech representation learning and has shown great results on Text-to-speech (TTS) tasks <cit.>. Representative neural network-based acoustic models in TTS evolve from autoregressive structures (, Tacotron <cit.>, Deepvoice <cit.>, TransformerTTS <cit.>) to non-autoregressive frameworks (, FastSpeech <cit.>, GlowTTS <cit.>) to achieve high quality generation efficiently. xiao2023contextspeechRecent end-to-end TTS models <cit.> develop the framework converting text to waveform directly without relying on an external vocoder <cit.>. Despite their effectiveness, we argue that existing manner of sentence-level speech synthesis is still insufficient to provide high-quality paragraph reading, in which the synthesized audio is created in paragraph-level, like news reading, audiobook, audio content dubbing, or even dialogue composed by multiple interrelated sentences. The key reason is that most TTS models fail to capture global context among sentences within the paragraph in synthesizing audio. They usually convert text to speech in sentence-level and concatenate them for paragraph reading. An underlying fact is omitted that: sentences within the paragraph are not isolated and have various dependencies with respect to speech and textual context. Regarding the large context variation in long-form content, concatenating synthesized speech sentence by sentence has noticeable performance gap to natural recording in paragraph reading from perceptual evaluation. Additionally, the imbalanced distribution of TTS corpus data with variable-length sentences, making it difficult for TTS systems to generate high quality synthesized speech for exceptionally long or short sentences. Leaving this fact untouched, previous modeling of sentence-level context for speech synthesis has key limitations: * Correlation between adjacent sentences. For paragraph reading, adjacent sentences influence each other naturally as the semantic information flowing. Thus, sentence-level speech synthesis lacks context coherence within the paragraph, and can hardly provide expressive paragraph reading. * Efficiency or consistency on extra-long sentences. Synthesizing extra-long sentences usually leads to unstable results (e.g. bad alignment between text and speech) and high latency. 
Generally, such sentences are partitioned into segments and then synthesized separately, which may cause inconsistent speech rate or prosody. * Quality on extra-short sentences. With the data scarcity of extra-short sentences (, consisted by one or two words) in corpus, TTS easily sacrifices the performance on such pattern with bad pronunciation or extremely slow speech rate. In light of the above limitations, this work aims to study the paragraph TTS by exploring the global-level semantic dependency across different sentences. By doing so, the information transfer is enabled among sentences with variable lengths. Having realized the vital role of global context-enhanced paragraph TTS, it may suffer from scalability issue when performing speech synthesis on long paragraphs with complex cross-sentence dependency modeling. To tackle the challenges, we propose and make the following contributions: * To preserve cross-sentence dependency from model perspective, a memory-cached recurrence mechanism is incorporated to transfer knowledge between segments based on the cached hidden state. We use one of the state-of-the-art sentence-level speech synthesis architecture, Conformer <cit.> based TTS in <cit.>, as our backbone model. The cached hidden state of each Conformer block in both encoder and decoder brings text and speech information from the previous segment. * Inspired by the context-aware conversational TTS <cit.>, we propose a new text-based contextual encoder to broaden the model horizon from sentence to paragraph. In particular, the proposed contextual encoder takes text-based features (, BERT <cit.>-based embedding, pre-defined statistical textual information) as input and integrate them with phoneme embedding. Such integration covers information from history to future and alleviate the one-to-many mapping issue in TTS. * To reduce the memory and computation cost, we integrate the linearized self-attention with permute-based relative position encoding under our memory reused framework, so as to avoid quadratic complexity caused by softmax self-attention. Experiments are carried out on a speech corpus of Chinese audiobook. The results show that ContextSpeech can generate more expressive and coherent paragraph audios compared with baseline ConformerTTS model in terms of objective and subjective evaluation. From the observation, it also alleviates the issues caused by extra-long and extra-short sentences obviously. Additionally, the final model largely alleviate the efficiency issue of extra-long input compared with baseline model. § METHODOLOGY In this section, we present the details of our model whose architecture overview is shown in Figure <ref>. §.§ ConformerTTS with Memory Reuse §.§.§ Backbone Model Our TTS framework is built upon the backbone model ConformerTTS, which adopts the Conformer Block (CB) in both encoder and decoder <cit.> of a FastSpeech2-like framework. As shown in Figure <ref>-(c), the CB integrates a Convolution Module (ConvM) and a Multi-Head Self-Attention (MHSA) to model the local correlation and the global interaction. Additionally, a Convolution based Feed-Forward Network (ConvFFN) is attached after the self-attention for encoding the correlation between adjacent hidden states. More precisely, the ConvM is composed of four stacked components, including a convolutional feed-forward module, a gated linear unit (GLU), a depthwise convolution module and another convolutional feed-forward module. 
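As a concrete, simplified picture of this stacking order, the convolution module can be sketched in PyTorch as below. The sketch only fixes the order conv feed-forward → GLU → depthwise convolution → conv feed-forward described above; the kernel sizes, the ReLU activation, and the omission of layer normalisation, residual connections and dropout are our assumptions and do not reproduce the exact ConformerTTS implementation.

import torch
import torch.nn as nn

class ConvFeedForward(nn.Module):
    # 1-D convolutional feed-forward sub-module (kernel size and activation are assumptions)
    def __init__(self, d_model, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size, padding=kernel_size // 2)
        self.act = nn.ReLU()

    def forward(self, x):                      # x: (batch, length, d_model)
        return self.act(self.conv(x.transpose(1, 2))).transpose(1, 2)

class ConvolutionModule(nn.Module):
    # ConvM: conv feed-forward -> GLU -> depthwise conv -> conv feed-forward
    def __init__(self, d_model, kernel_size=7):
        super().__init__()
        self.ffn1 = ConvFeedForward(d_model)
        self.glu_proj = nn.Conv1d(d_model, 2 * d_model, 1)   # channels doubled, halved back by GLU
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2, groups=d_model)
        self.ffn2 = ConvFeedForward(d_model)

    def forward(self, x):                      # x: (batch, length, d_model)
        x = self.ffn1(x)
        y = nn.functional.glu(self.glu_proj(x.transpose(1, 2)), dim=1)
        y = self.depthwise(y).transpose(1, 2)
        return self.ffn2(y)

out = ConvolutionModule(d_model=384)(torch.randn(2, 100, 384))   # shape preserved: (2, 100, 384)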
Let N be the number of CB stacked in encoder (or decoder), the input feature of the n-th CB is represented as H_t^n = [h_t,1,...,h_t,L], where t is the index of current sequence and L is the sequence length. In summary, the overall framework of baseline model used in this paper 1) are demonstrated in Figure <ref>-(b) by ignoring the cached information 2) and consumes a softmax-based MHSA <cit.> in CB . §.§.§ Segment-level Memory Reuse Inspired by  <cit.>, we cache the hidden state of previous segment in each layer and reuse it with current segment for involving contextual information, as shown in Figure <ref>-(b). Notice that, the preceding segment is configured with a fixed length while a complete sentence is used as the current segment. By doing so, we can retain more intact semantic and acoustic information from both text and speech. Instead of reusing the input feature of MHSA, we choose to cache the input feature of CB directly since the ConvM can help in capturing the contextual information around the concatenation point. As the output of the n-th block is the input of the (n+1)-th block when n < N, the hidden state can be represented as Eq.(1), where SG(·) means stop-gradient and the notation [A∘ B] indicates concatenating hidden sequences A and B along the length dimension. H_t^n+1 = [SG(H_t-1^n+1) ∘ ConformerBlock (H_t^n)] §.§ Text-based Contextual Encoder Given the same sentence with different context, prosody of the generated speech would be different. Modelling contextual information by incorporating external linguistic and semantic features would benefit the TTS voice quality  <cit.>. In this section, we introduce a text-based contextual encoder to enhance the prosody expressiveness and coherence for paragraph reading. The framework is illustrated in Figure <ref>-(a). Given a paragraph with a predefined context range c (sentence number in a paragraph), the contextual encoder processes it to extract two kinds of contextual representations as described below: * Token-based contextual representation. The current sentence is used to extract token- level[In our token extraction, for Chinese, "token" means "character", for English, “token” means “subword”.] Bert <cit.> embedding (TBE) and token-level statistical features (TSF). The token-level statistical features are listed in Table <ref>, where k, s and p denote token, sentence and paragraph. For example, i_k_s means the index of current token in the sentence, n_s_p means the number of sentence in the original paragraph text, and max(n_k,s) means the maximum token number in a sentence over the training data. After concatenation, the TBE and TSF will be up-sampled and go through convolution and projection layers to align with phoneme-level features. * Sentence-based contextual representation. For each sentence in the input paragraph, the sentence-level Bert embedding is extracted to construct a paragraph-level contextual representation (PCR) by GRU. After that, the concatenation of PCR and the current sentence embedding is fed into a projection layer and then up-sampled to phoneme-level. The generated token-based and sentence-based contextual embedding will be added into the phoneme embedding of current sentence. With the above design, our contextual encoder not only broadens the horizon of current phoneme to global paragraph context by incorporating paragraph-level statistical features, but also improves the encoder expressiveness with phoneme embedding enhanced hierarchical contextual features. 
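The sentence-based branch can be illustrated with the following minimal PyTorch sketch. It is not the implementation used in the experiments: the 768/384 dimensionalities match the model configuration given below, but the input reduction layer, the exact placement of the ReLU and dropout, and the use of the final GRU state as the PCR are our assumptions; the token-based branch (TBE/TSF) is omitted for brevity.

import torch
import torch.nn as nn

class SentenceContextEncoder(nn.Module):
    # builds the paragraph-level contextual representation (PCR) from sentence-level
    # BERT embeddings and fuses it with the current sentence embedding
    def __init__(self, bert_dim=768, hidden_dim=384):
        super().__init__()
        self.reduce = nn.Linear(bert_dim, hidden_dim)            # assumed input projection
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.fuse = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim),
                                  nn.ReLU(), nn.Dropout(0.5))

    def forward(self, sent_embs, cur_idx, phoneme_len):
        # sent_embs: (1, c, bert_dim), BERT embeddings of the c sentences in the context window
        h = self.reduce(sent_embs)
        _, pcr = self.gru(h)                                     # (1, 1, hidden_dim): paragraph context
        cur = h[:, cur_idx]                                      # embedding of the current sentence
        fused = self.fuse(torch.cat([pcr.squeeze(0), cur], dim=-1))
        # up-sample (here: simple repetition) to phoneme resolution before adding to phoneme embeddings
        return fused.unsqueeze(1).expand(-1, phoneme_len, -1)

ctx = SentenceContextEncoder()(torch.randn(1, 11, 768), cur_idx=5, phoneme_len=120)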
§.§ Efficient Self-Attention Mechanism The self-attention module brings the effectiveness but also limits the model efficiency due to the quadratic time and memory complexity. Efficient Transformers <cit.> are proposed to improve the model efficiency on long-form input. Linearized self-attention is a kernel based method that can significantly reduce the computation time and memory footprint. §.§.§ Linearized Self-Attention Let X∈ℝ^L × d be the input of self-attention module, Q=W_q · X, K=W_k · X and V=W_v · X are linear transformations on the X. The canonical softmax-based self-attention mechanism can be presented as 𝒜(Q,K,V) = softmax(QK^T/√(d))V, where the time and memory complexity is quadratic according to the input length. Refer to <cit.>, the attention matrix can be generalized as a similarity function of Q_i and K_j, the i-th or j-th row of the matrix Q and K, as Eq.(2). The similarity function can be any other attention functions that are non-negative. 𝒜(Q_i,K,V) = ∑_j=1^L sim(Q_i,K_j) V_j /∑_j=1^L sim(Q_i, K_j) Given a qualified kernel function ϕ(x), the generalized row-wise attention matrix can be rewritten as Eq.(3). According to the associative property of matrix multiplication, ϕ(Q_i)^T can be taken out of the summation formula both in numerator and denominator as Eq.(4). Thus, we can compute the summation formula part in advance and reuse them for each query. 𝒜(Q_i,K,V) = ∑_j=1^L ϕ(Q_i)^T ϕ(K_j) V_j/∑_j=1^L ϕ(Q_i)^T ϕ(K_j) = ( ϕ(Q_i)^T ∑_j=1^L ϕ(K_j) V_j ) / ( ϕ(Q_i)^T ∑_j=1^L ϕ(K_j) ) §.§.§ Permute-based Relative Position Encoding To endow the linearized self-attention with the awareness of relative positional information, we applied a permute-based relative position encoding as in <cit.>. Particularly, the sim(Q_i, K_j) in Eq.(2) will be converted to permute based format as Eq.(5). r is set as 1 to avoid exploding as the sequence length increases. A premutation B: {1,2,...,d} → {1,2,...,d} is generated randomly, where d is the dimension of query or key. Here, the first {1,2,...,d} and the second {1,2,...,d} can be treated as index collections with different order. P_B is the premutation matrix of B, where P_B, ij=1 if B(i)=j; otherwise P_B, ij=0. sim_p(Q_i, K_j) = ( r_i P_B^i ϕ(Q_i) )^T ( r^-j P_B^j ϕ(K_j) ) § EVALUATION §.§ Experimental Setup Dataset. We perform experiments on an expressive Chinese male voice. The dataset is an audiobook corpus composed of around 70 hours (∼35, 000 sentences) of narration speech and the corresponding text transcripts. We left out 100 paragraphs from the same book for objective evaluation and construct 3 different paragraph test sets from other books for subjective evaluation. Set-A: 50 paragraphs with sentences in normal length, which are used to evaluate the overall model performance on paragraph reading. Set-B: 50 paragraphs with sentences of extra-short length, i.e., one or two words, to see if the model alleviate the robustness issue in extra-short sentences. Set-C: 10 paragraphs with incremental sentence number from 2 to 11, to test the model efficiency on extra-long input sentences. Model Configuration. The ConformerTTS related model configuration is consistent to the settings in <cit.>. The cached memory length is set as 128 and 64 for encoder and decoder, respectively. For the text-based contextual encoder, the context size c is set as 11, , 5 sentences before and after the current sentence. 
The [input_dim, output_dim, kernel_size] of the convolution layer is [774, 384, 5], which is followed by RELU, layer norm, dropout with rate 0.5 and a transformation layer with dimension ℝ^384 × 384. The GRU layer with dimension ℝ^384 × 384 is followed by a RELU, dropout with rate 0.5 and a linear layer with size ℝ^768 × 384. The kernel function used in linearized self-attention is ϕ(x)=elu(x)+1. We used MelGAN as the vocoder to generate audio from mel-spectrograms. Evaluation Protocol. We conduct paragraph MOS (mean opinion score) test to evaluate the overall voice performance of our method considering both recording and baseline model. 25 native speakers listen to each audio and give a score in 10-point scale on the overall performance and 8 specific metrics. Paragraph CMOS (comparative mean opinion score) test is used to compare the proposed model with the baseline model on different test sets. 15 native speakers listen to the synthesized samples from two models, compare them side by side and give a score from -3 to +3, where the baseline model is set as 0 for reference. Additionally, we propose a group of objective metrics to evaluate model performance according to recordings with the same transcripts, including pitch, intensity, duration and pause. For model efficiency evaluation, we conduct training on 8 NVIDIA V100 GPUs and inference on 1 NVIDIA Tesla K80 GPU. §.§ Quality on Paragraph Reading Subjective Evaluation. We conduct a paragraph MOS test on Set-A for our model along with baseline and recording. Figure <ref> shows the result in terms of overall impression and other 8 specific metrics. The ContextSpeech outperforms the baseline model in all cases and achieves high-quality speech close to the recording in term of overall impression ([email protected]). Specifically, the proposed model reduces the MOS gap with recording from 0.25 to 0.06 compared with baseline model, which is around 76% improvement. Especially for voice pleasantness, emotion, style matchiness and listening effort, ContextSpeech shows significant improvement with more than 50% MOS gap reduction for expressive paragraph reading. Objective Evaluation. Besides the subjective evaluation, we also calculate the prosody-related objective metrics to measure the similarity between synthesized voice and 100 paragraph recordings. Table <ref> shows that ContextSpeech achieves improvement in each objective metric compared with baseline model, which also verifies the model performance superiority in paragraph-level prosody expressiveness. §.§ Robustness on Extra-short Sentences As mentioned in Section 1, extra-short sentences (one or two words) handled by sentence-level speech synthesis model usually suffer from the robustness issue, such as bad pronunciation and low speech rate. Therefore, we conduct paragraph CMOS test on Set-B. Setting the score of Baseline model as 0 for reference, obtains 0.107 CMOS gain. Both the bad pronunciation issue in one-word sentences and low speech rate issue in two-word sentences are effectively alleviated. Figure <ref> shows the mel-spectrogram samples to compare and baseline in handling one-word sentences. The red rectangles mark the position of the sentence with only one word in the paragraph. It is obvious that the spectrogram of baseline model in that position is muffle (Figure <ref>), while that of ContextSpeech model is much clearer with complete formant (Figure <ref>). 
By listening to the audios, we also notice that the pronunciation of that one-word sentence is distorted in baseline paragraph but clear in the paragraph. §.§ Efficiency on Extra-long Sentences The efficient self-attention module described in Section 2.3 largely improves the model efficiency. For training stage, with linearized self-attention achieves 2x of speedup and 2x of memory tolerance compared with using softmax-based self-attention. For inference stage, shows significant efficiency superiority over the baseline especially for extra-long inputs. Table <ref> illustrates the inference latency for baseline and ContextSpeech model according to different input phoneme length on Set-C. The baseline model run into out-of-memory when the input phone number increase to 1574. In contrast, is able to handle such long sequences. Additionally, outperforms the baseline in each group and achieves more than 10x speedup when the input length is 1506. Furthermore, we perform paragraph CMOS on this test set and obtain 0.226 CMOS gain (Table <ref>). In summary, for extra-long input sentence, ContextSpeech shows better expressiveness and efficiency compared with baseline. §.§ Model Ablation Study We conduct ablation study to evaluate the effectiveness of key modules in . Table <ref> shows the paragraph CMOS results on Set-A component-wise ablation results. Memory Recurrence (MR). Memory reuse mechanism described in Section 2.1 is proposed to enlarge the receptive field of current segment to see more historical information. To verify its effectiveness, we remove it from model and do a paragraph CMOS test for comparison. Set model as 0 for reference, removing MR cause -0.085 regression, which demonstrates the contribution from MR mechanism. Text-based Contextual Encoder (TCE). As described in Section 2.2, we proposed a text-based contextual encoder to leverage hierarchical contextual information from plain paragraph text. To evaluate its effectiveness, we do paragraph CMOS test to compare the models with and without TCE module. The negative score -0.048 verifies the positive effect of TCE module. Efficient Self-Attention Mechanism (ESA). ESA is introduced in Section 2.3, which aims to improve model efficiency and robustness especially on extra-long input. The efficiency improvement and corresponding performance on extra-long input are proved in Section 3.4. Here we replace the ESA in by softmax-based self-attention with relative position encoding in Transformer-XL, to evaluate the performance in paragraphs with normal-length sentence. The paragraph CMOS result, -0.030, demonstrates that the ESA module will not cause quality regression and even with slight improvement. § CONCLUSION In this paper, we propose ContextSpeech, which is an expressive and efficient TTS model for generating speech of paragraph reading. The memory reuse mechanism is introduced in the encoder-decoder framework to incorporate historical information of text and speech to current sentence. Text-based contextual information is encoded in a hierarchical structure to extend the model capability to paragraph level. Furthermore, linearized self-attention with compatible relative position encoding is adopted to improve the model efficiency. Experiments on Chinese audiobook corpus demonstrate that ContextSpeech achieved superior voice quality and expressiveness in paragraph reading compared with the baseline model, 76% reduction on the MOS gap to recording. 
ContextSpeech also shows robust performance on extra-short sentences with a 0.107 CMOS gain, and improves both expressiveness (0.226 CMOS gain) and efficiency (∼10x speedup) on extra-long sequences. IEEEtran
http://arxiv.org/abs/2307.01517v1
20230704065815
New Designs of Robust Uplink NOMA in Cognitive Radio Inspired Communications
[ "Yanshi Sun", "Wei Cao", "Momiao Zhou", "Zhiguo Ding" ]
cs.IT
[ "cs.IT", "math.IT" ]
New Designs of Robust Uplink NOMA in Cognitive Radio Inspired Communications Yanshi Sun1, Wei Cao1, Momiao Zhou1 and Zhiguo Ding2 1School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China 2 Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE, and Department of Electrical and Electronic Engineering, University of Manchester, Manchester, UK. Email: 1{[email protected], [email protected],[email protected],}, [email protected] ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================== This paper considers a cognitive radio inspired uplink communication scenario, where one primary user is allocated with one dedicated resource block, while M secondary users compete with each other to opportunistically access the primary user's channel. Two new designs of NOMA schemes, namely hybrid successive interference cancellation with power adaptation (HSIC-PA) and fixed successive interference cancellation with power adaptation (FSIC-PA), are proposed. The significant advantages of the proposed schemes are two folds. First, the proposed two schemes can ensure that the secondary users are opportunistically served without degrading the transmission reliability of the primary user. Besides, the transmission robustness of the served secondary users can be guaranteed. Specifically, the outage probability error floors can be avoided for the secondary users, which is proved by asymptotic analysis in the paper. Extensive simulation results are also provided to demonstrate the superior performance of the proposed schemes. Non-orthogonal multiple access, successive interference cancellation (SIC) decoding order, quality of service (QoS), dynamic power control. § INTRODUCTION Non-orthogonal multiple access (NOMA) has obtained extensive attention in both academia and industry, due to its higher spectral efficiency and capability to support more users compared to conventional orthogonal multiple access (OMA) technique <cit.>. The key idea of NOMA is to encourage multiple users to simultaneously occupy one channel resource block, which is not allowed in OMA. Consequently, how to address inter-user interference is one of key issues in NOMA aided systems. To this end, a widely used method in NOMA to address inter-user interference is successive interference cancellation (SIC), where the users' signals are decoded in a successive manner <cit.>. The superiority of NOMA in future wireless communication network has been deeply investigated, as well as its compatibility with other advanced technologies, such as multiple input multiple output (MIMO) <cit.>, millimeter wave (mmwave) <cit.> and Terahertz (THz) communications <cit.>, reconfigurable intelligent surfaces (RIS) <cit.>, satellite communications <cit.> and so on. However, for uplink NOMA, there's an unfavorable feature that outage probability error floors exist in the existing schemes. The error floor means that the transmission reliability can not be arbitrarily high as transmit power increases, which significantly limit the application of NOMA in many practical scenarios. 
It was thought that the outage probability error floors are unavoidable in uplink NOMA. However, recent studies show that the conventional cognition is not correct. Specifically, a new design of SIC namely hybrid SIC (HSIC) is proposed for cognitive radio inspired uplink NOMA <cit.>. In the proposed HSIC scheme, the decoding orders of users are dynamically determined according to the relationship between the instantaneous channel conditions and users' target rates. <cit.> show that the proposed HSIC scheme can avoid outage probability error floors, under some constraints on users' target rates. The most important contributions of the series studies in <cit.> are two folds. First, <cit.> show that it is possible to avoid outage error floors, at least under some specific conditions. Second, the work in <cit.> indicates the importance of introducing HSIC to improve transmission robustness of uplink NOMA, while most existing work on NOMA applies fixed SIC order, either based on the channel state information (CSI) <cit.> or quality of service (QoS) requirement of users<cit.>. However, as mentioned above, the proposed scheme in <cit.> can only avoid outage probability error floors under some stringent conditions on users' target rates, which may not be met in many realistic scenarios. Thus, it is natural to ask the following two questions. The first question is that is it possible to avoid outage probability error floors without any constraints on users' rates? And the second question is that is it necessary to apply HSIC to avoid outage probability error floors? This paper aims to answer the above questions, by considering a cognitive radio inspired uplink NOMA scenario. In the considered scenario, one primary user is allocated with one dedicated channel resource block, while there are M secondary users who compete with each other to opportunistically share the primary user's resource block without degrading the outage performance of the primary user. Two new designs of NOMA schemes, namely HSIC with power adaptation (HSIC-PA) and fixed SIC with power adaptation (FSIC-PA) are proposed. Both schemes can avoid outage probability error floors without any conditions on users' target rates. The main contributions of this paper are listed as follows. * Two novel designs of uplink NOMA schemes are proposed, namely HSIC-PA and FSIC-PA[Note that the HSIC-PA scheme extends the scheme proposed in our previous work <cit.> where only two users are considered, while the FSIC-PA scheme hasn't been proposed according to our best knowledge.]. In the proposed HSIC-PA scheme, the decoding order of the secondary user can be dynamically changed according to the channel conditions. While in the proposed FSIC-PA scheme, the decoding order of the secondary user is fixed at the second stage of SIC. Asymptotic analysis for the outage probabilities of the served secondary users achieved by the proposed two schemes are provided, which shows that both schemes can avoid outage probability error floors without any constraints on users' target rates. The fact that the proposed FSIC-PA scheme can avoid error floors indicates that HSIC is not necessary to avoid error floors. * Numerical results are presented to demonstrate the superior performance of the proposed HSIC-PA scheme and FSIC-PA scheme, by comparing with the benchmark scheme termed HSIC-NPA proposed in <cit.>. It is shown that FSIC-PA scheme performs better than HSIC-NPA scheme in the high SNR regime, but worse in the low SNR regime. 
Moreover, HSIC-PA scheme performs best among three schemes at all SNRs, which shows the importance of the combination of HSIC and PA in the design of uplink NOMA transmissions. § SYSTEM MODEL Consider an uplink NOMA communication scenario with one base station (BS), one primary user U_0 and M secondary users U_m, 1≤ m≤ M. Note that, in the considered scenario, ensuring the transmission reliability of U_0 is of the first priority, which has a preset target data rate denoted by R_0. In conventional OMA based schemes, the primary user is allocated with one dedicated resource block, which cannot be accessed by other users. While in the considered NOMA schemes of this paper, M secondary users can compete with each other to opportunistically access the channel resource block which is allocated to the primary user. Note that allowing secondary users to share the channel resource block of the primary user must be done in such a way that the QoS of the primary user U_0 is not degraded. The channel gain of the primary user U_0 is denoted by g, and the channel gains of the secondary users are denoted by h_m, 1≤ m≤ M. In this paper, g and h_m are modeled as normalized Rayleigh fading gains, which means that g and h_m are independent and identically distributed (i.i.d) circular symmetric complex Gaussian (CSCG) random variables with zero mean and unit variance, i.e., g∼𝒞𝒩(0,1) and h_m ∼𝒞𝒩(0,1). The transmit power of the primary user U_0 is denoted by P_0. The transmit power of the secondary user U_m is denoted by β P_s, where β∈ [0,1 ] is the adjustable power adaptation coefficient of U_m, and P_s is the maximum power of U_m. For simplicity, the background noise power is normalized to be 1 throughout the paper. In the rest of the paper, the M secondary users are ordered according to their channel gains, i.e., | h_1 | ^2< ⋯ < | h_M|^2. In this paper, two novel NOMA schemes are proposed, namely HSIC-PA scheme and FSIC-PA scheme. It will be shown that both schemes can avoid outage probability error floors. For each scheme, in each period of transmission, only the secondary user which can achieve the largest instantaneous achievable rate is allowed to transmit signal by sharing the primary user's resource block. The proposed two schemes are described in the next two subsections. §.§ HSIC-PA Scheme To begin with, define an interference threshold denoted by τ (g) as follows: τ (g)= max{ 0,P_ 0 |g | ^2 /2^R_ 0 -1 -1}. Note that τ(g) can be interpreted as the maximum interference, with which U_0 can still achieve the same outage performance as in OMA where the resource block is solely occupied by U_0. For more details on τ(g), please refer to <cit.>. For each secondary user U_m, its instantaneous achievable rate is determined by how its channel gain compares to τ (g), which can be classified into the following two types: * Type I: the received signal power of U_m at the BS is less than or equal to τ(g), i.e., P_s | h_m |^2 ≤τ (g). For this case, putting U_m at the second stage of SIC can yield larger rate compared to putting U_m at the first stage of SIC, and will not hinder the primary user from successfully decoding its signal. Thus, it is favorable to decode U_m's signal at the second stage of SIC, and the achievable rate of U_m is given by R_1^m=log(1+P_ s | h_ m |^2) . * Type II: the received signal power of U_m at the BS is larger than τ (g), i.e., P_s | h_m |^2 > τ (g). For this case, the benchmark scheme termed HSIC-NPA which is proposed in <cit.> only considers the case where β is set to be 1. 
Thus, in order not to degrade the QoS of U_0, U_m can only be decoded at the first stage of SIC in HSIC-NPA, yielding the following achievable rate of U_m: R_2,1^m=log(1+P_s | h_m |^2/(P_0 | g |^2+1)). Note that the drawback of putting U_m at the first stage of SIC is that, when P_0|g|^2 is large, R_2,1^m might still be small even with a large P_s | h_m |^2. To this end, the proposed HSIC-PA scheme offers an additional choice, where β can be set to be less than 1 so that β P_s|h_m|^2=τ(g), which can provide the opportunity to achieve a larger rate. As a result, U_m's signal can be decoded at the second stage of SIC, yielding the following achievable rate of U_m: R_2,2^m=log(1+τ(g)). Thus, in the proposed HSIC-PA scheme, when P_s | h_m |^2 > τ(g), the achievable rate of U_m is given by: R_2^m=max{R_2,1^m,R_2,2^m}. According to the above discussion, the achievable rate of U_m in the HSIC-PA scheme can be summarized as follows: R^m = R_1^m if P_s | h_m |^2 ≤τ(g), and R^m = R_2^m if P_s | h_m |^2 >τ(g). §.§ FSIC-PA Scheme In this subsection, another scheme termed FSIC-PA is introduced. Note that in the HSIC-PA scheme, the secondary user's signal can be decoded either at the first or the second stage of SIC, whereas in the FSIC-PA scheme, the secondary user can only be decoded at the second stage of SIC. In the FSIC-PA scheme, for each secondary user U_m, its instantaneous achievable rate can also be determined by considering the following two cases, as in the last subsection. * Type I: the received signal power of U_m at the BS is less than or equal to τ(g), i.e., P_s | h_m |^2 ≤τ(g). This case is the same as in the HSIC-NPA and the proposed HSIC-PA schemes, where U_m is decoded at the second stage of SIC. Thus, the achievable data rate of U_m is R̂^m_I =log(1+P_s|h_m|^2), since the interference from U_0 can be removed by SIC. * Type II: the received signal power of U_m at the BS is larger than τ(g), i.e., P_s | h_m |^2 > τ(g). In this case, in the proposed FSIC-PA scheme, U_m can only be decoded at the second stage of SIC. To carry out this strategy, β is set to be less than 1 so that β P_s|h_m|^2=τ(g). Thus, the achievable rate of U_m for type II is R̂_II^m=log(1+τ(g)). Combining the above two cases, the achievable rate of U_m in the FSIC-PA scheme can be expressed as: R̂^m=R̂^m_I if P_s | h_m |^2 ≤τ(g), and R̂^m=R̂^m_II if P_s | h_m |^2 >τ(g). Note that the proposed HSIC-PA and FSIC-PA schemes ensure that the outage performance of the primary user is the same as in the OMA scheme, where the resource block is occupied by the primary user only. Thus, this paper focuses on the performance of the opportunistically served secondary users. § ASYMPTOTIC PERFORMANCE ANALYSIS FOR HSIC-PA AND FSIC-PA Note that outage probability error floors were thought to be unavoidable in uplink NOMA. However, recent work has proved this belief wrong by introducing the concept of hybrid SIC, which dynamically chooses the decoding order according to the users' channel conditions and quality of service requirements <cit.>. Even so, in the HSIC-NPA scheme proposed in <cit.>, the outage probability error floors can only be avoided under some stringent conditions on the users' target rates, which may not hold in many practical scenarios. In this section, it will be proved that both the proposed HSIC-PA and FSIC-PA schemes can avoid outage probability error floors without any constraints on the users' target rates, which indicates the importance of power adaptation for improving the transmission robustness of uplink NOMA.
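Before turning to the analysis, the rate expressions defined above can be made concrete with a short numerical sketch. The following minimal Python snippet is our illustration only, not part of the proposed schemes themselves: rates are computed with base-2 logarithms (BPCU), and all function and variable names are ours.

```python
import numpy as np

def tau(g2, P0, R0):
    # Interference threshold tau(g): the largest interference power with which
    # the primary user U_0 still meets its OMA outage performance.
    return max(0.0, P0 * g2 / (2.0**R0 - 1.0) - 1.0)

def rate_hsic_pa(h2, g2, P0, Ps, R0):
    # Achievable rate R^m of secondary user U_m under HSIC-PA.
    t = tau(g2, P0, R0)
    if Ps * h2 <= t:
        # Type I: decode U_m at the second SIC stage with full power.
        return np.log2(1.0 + Ps * h2)
    # Type II: either the first SIC stage with full power (R_{2,1}^m), or the
    # second stage with beta < 1 chosen so that beta*Ps*|h_m|^2 = tau(g) (R_{2,2}^m).
    r_first = np.log2(1.0 + Ps * h2 / (P0 * g2 + 1.0))
    r_second = np.log2(1.0 + t)
    return max(r_first, r_second)

def rate_fsic_pa(h2, g2, P0, Ps, R0):
    # Achievable rate of U_m under FSIC-PA: always decoded at the second SIC stage.
    t = tau(g2, P0, R0)
    return np.log2(1.0 + Ps * h2) if Ps * h2 <= t else np.log2(1.0 + t)
```

In both functions, a type II user never injects more than τ(g) of interference when it is decoded at the second stage, which is what preserves the primary user's OMA outage performance.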
Due to space limitations, this paper focuses only on the asymptotic performance analysis of HSIC-PA and FSIC-PA, which is helpful for understanding the two proposed schemes. The overall outage probability achieved by the served secondary users of HSIC-PA is defined as: P_out=Pr(max{R^m, 1≤ m≤ M}<R_s), and that of the FSIC-PA scheme is defined as: P̂_out=Pr(max{R̂^m, 1≤ m≤ M}<R_s), where R_s is the target rate of the secondary users. For ease of characterizing P_out and P̂_out, it is useful to define the event E_m that there are exactly m users belonging to type I. In particular, E_m can be expressed as follows: E_m={|h_m|^2< τ(g)/P_s, |h_{m+1}|^2>τ(g)/P_s} for 1≤ m≤ M-1, E_0={|h_1|^2 > τ(g)/P_s}, and E_M={|h_M|^2 < τ(g)/P_s}, where the extreme cases E_0 and E_M denote the events in which there are no type I secondary users and all the secondary users belong to type I, respectively. The following two lemmas show that both the HSIC-PA and FSIC-PA schemes can avoid outage probability error floors. In the high SNR regime, i.e., P_0→∞, P_s→∞, the overall outage probability of the served secondary users in the HSIC-PA scheme approaches zero. The outage probability of HSIC-PA, P_out, can be rewritten as: P_out= ∑_m=1^M-1 P(E_m, max{R^m_1,R^M_2}<R_s) + P(E_0, R^M_2<R_s) + P(E_M, R^M_1<R_s) ≜ Q_1 + Q_2 + Q_3. In the following, it is shown that Q_1, Q_2 and Q_3 each approach zero in the high SNR regime. For Q_1, it can be evaluated as follows: Q_1= ∑_m=1^M-1 P(E_m, max{R^m_1,R^M_2}<R_s) ≤^(a) ∑_m=1^M-1 P(R^m_1<R_s) = ∑_m=1^M-1 P(log(1+P_s|h_m|^2)<R_s) ≤^(b) (M-1)P(log(1+P_s|h_1|^2)<R_s), where step (a) follows from the fact that the event max{R^m_1,R^M_2}<R_s implies R^m_1<R_s, and step (b) is obtained by noting that the users are ordered as in (<ref>). It can be directly seen that Q_1 approaches zero in the high SNR regime. Therefore, Q_1→ 0 at high SNR. Then, Q_2 can be rewritten as: Q_2= P(E_0, R^M_2<R_s) = P(|h_1|^2>τ(g)/P_s, max{R_2,2^M,R_2,1^M}< R_s) =^(c) P(|g|^2<α_0, log(1+P_s|h_M|^2/(P_0|g|^2+1))<R_s) + P(|g|^2>α_0, |h_1|^2>(|g|^2α_0^-1-1)/P_s, max{log(|g|^2α_0^-1), log(1+P_s|h_M|^2/(P_0|g|^2+1))}< R_s) ≜ Q_2,1 + Q_2,2, where α_0=(2^R_0-1)/P_0 and step (c) is obtained by splitting the event into the two cases τ(g) = 0 and τ(g) > 0. For Q_2,1, it can be obtained that: Q_2,1≤ P(|g|^2<α_0) =1-e^-α_0→ 0. For Q_2,2, it can be calculated as follows: Q_2,2= P(|g|^2>α_0, |h_1|^2>(|g|^2α_0^-1-1)/P_s, log(|g|^2α_0^-1)<R_s, log(1+P_s|h_M|^2/(P_0|g|^2+1))<R_s) ≤ P(log(|g|^2α_0^-1)<R_s) = P(|g|^2<2^R_sα_0) = 1-e^-2^R_sα_0→ 0. Therefore, it can be concluded that Q_2→ 0 at high SNR. Finally, Q_3 can be evaluated as follows: Q_3= P(P_s|h_M|^2<τ(g), log(1+P_s|h_M|^2)<R_s, |g|^2>α_0) = P(|h_M|^2<τ(g)/P_s, |h_M|^2<α_s, |g|^2>α_0) ≤ P(|h_M|^2<α_s) = (1-e^-α_s)^M→ 0, where α_s=(2^R_s-1)/P_s. Thus, Q_3→ 0 at high SNR. Since Q_1→ 0, Q_2→ 0 and Q_3→ 0, we have P_out→ 0 in the high SNR regime, and the proof is complete. Recall that the benchmark HSIC-NPA scheme <cit.> can avoid the outage probability error floor only if the following condition holds: ϵ_0ϵ_s<1, where ϵ_0=2^R_0-1 and ϵ_s=2^R_s-1. However, the proposed HSIC-PA scheme can avoid the error floor without any restrictions on the target rates of the primary user and the secondary users. Thus, we can conclude that it is possible to avoid outage probability error floors without any constraints on the users' rates. In the high SNR regime, i.e., P_0→∞, P_s→∞, the overall outage probability of the served secondary users in the FSIC-PA scheme approaches zero.
The outage probability of FSIC-PA, P̂_out, can be rewritten as: P̂_out= P(|h_M|^2<τ(g)/P_s, log(1+P_s|h_M|^2)<R_s, |g|^2>α_0) + P(|h_M|^2>τ(g)/P_s, log(1+τ(g))<R_s, |g|^2>α_0) + P(|g|^2<α_0) ≜ F_1 + F_2 + F_3. In the high SNR regime, it can be proved that F_1, F_2 and F_3 each approach zero, as shown in the following. First, by noting that F_1≤ P(log(1+P_s|h_M|^2)<R_s), it can be easily proved that F_1 approaches zero. Then, F_2 can be evaluated as: F_2= P(|h_M|^2>τ(g)/P_s, α_0<|g|^2<2^R_sα_0) ≤ P(|g|^2<2^R_sα_0)=1-e^-2^R_sα_0→0. Finally, F_3 can be evaluated as: F_3=P(|g|^2<α_0)=1-e^-α_0→0. Thus, the outage probability of FSIC-PA satisfies P̂_out→ 0 in the high SNR regime. The above asymptotic analysis shows that the proposed FSIC-PA scheme can also avoid outage probability error floors without any constraints on ϵ_0 and ϵ_s, which indicates that it is not necessary to apply HSIC to avoid outage probability error floors. However, the HSIC-PA scheme performs better than the FSIC-PA scheme, as will be shown in the next section, which indicates the importance of combining HSIC and PA to improve the robustness of uplink NOMA. § NUMERICAL RESULTS In this section, simulation results are provided to demonstrate the performance of the proposed HSIC-PA and FSIC-PA schemes. Comparisons with the benchmark HSIC-NPA scheme <cit.> are also provided. Fig. <ref> shows the outage probabilities of the secondary users achieved by HSIC-NPA, HSIC-PA and FSIC-PA versus the transmit SNR. As shown in the figure, for the HSIC-NPA scheme, when R_0=1 BPCU, there is no outage probability error floor. However, when R_0=4 BPCU, the outage probability error floor exists. This observation is consistent with the conclusions in <cit.>, i.e., the error floor can only be avoided when ϵ_0ϵ_s<1. By contrast, the proposed HSIC-PA and FSIC-PA schemes can avoid outage probability error floors, since the outage probabilities achieved by both schemes continuously decrease as the SNR increases. Fig. <ref> also shows that the HSIC-PA scheme performs best among the three schemes in all cases. However, FSIC-PA achieves larger outage probabilities than HSIC-NPA when R_0=1 BPCU, while for the case where R_0=4 BPCU, FSIC-PA performs better at high SNRs. Fig. <ref> shows the performance of the three schemes in terms of the ergodic data rates achieved by the served secondary users. From the figure, it is shown that the HSIC-PA scheme always achieves the largest ergodic rate among the three schemes, which is consistent with the observation in Fig. <ref>. Another interesting observation from Fig. <ref> is that the performance of FSIC-PA approaches that of HSIC-PA in terms of ergodic rate at high SNRs, while the performance of HSIC-NPA approaches that of HSIC-PA at low SNRs. This observation indicates that it is preferable to put the secondary user at the first stage of SIC and use full transmit power at low SNRs, while it is preferable to put the secondary user at the second stage of SIC and use partial transmit power at high SNRs. Fig. <ref> and Fig. <ref> show a more detailed comparison of the proposed two schemes with the benchmark HSIC-NPA scheme. Note that, if the served secondary user belongs to type I, then the three schemes, i.e., HSIC-PA, HSIC-NPA and FSIC-PA, achieve the same instantaneous rate. However, the three schemes differ from each other if the served secondary user belongs to type II. Fig. <ref> shows the probability that the served secondary user belongs to type II.
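As a side note, outage curves of the kind discussed in this section can be reproduced with a simple Monte Carlo simulation. The sketch below is our illustration, reusing the rate functions sketched after Section II; the fading model follows the system model (unit-mean exponential channel gains), while the number of users, target rates and SNR grid are arbitrary placeholders.

```python
import numpy as np

def outage_prob(rate_fn, M, P0, Ps, R0, Rs, trials=100_000, seed=0):
    # Monte Carlo estimate of P(max_m R^m < R_s) for the served secondary user.
    # |g|^2 and |h_m|^2 are unit-mean exponentials (normalized Rayleigh fading).
    rng = np.random.default_rng(seed)
    outages = 0
    for _ in range(trials):
        g2 = rng.exponential(1.0)
        h2 = rng.exponential(1.0, size=M)
        best = max(rate_fn(h, g2, P0, Ps, R0) for h in h2)
        outages += best < Rs
    return outages / trials

# Illustrative sweep: M = 5 secondary users, R0 = 4 BPCU, Rs = 1 BPCU.
for snr_db in (10, 20, 30, 40):
    snr = 10.0**(snr_db / 10.0)
    print(snr_db, "dB:",
          outage_prob(rate_hsic_pa, 5, snr, snr, 4.0, 1.0),
          outage_prob(rate_fsic_pa, 5, snr, snr, 4.0, 1.0))
```

In such a sweep, the absence of an error floor shows up as outage estimates that keep decreasing as the SNR grows, in line with the two lemmas above.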
As shown in the figure, these probabilities converge to a constant as the SNR increases. When the served secondary user, denoted by U_m^*, belongs to type II, its achievable rate is denoted by R_II, R̂_II and R̅_II for the HSIC-PA, FSIC-PA and HSIC-NPA schemes, respectively. For the comparison between HSIC-PA and HSIC-NPA, R_II≥R̅_II always holds. Thus, it is sufficient to characterize the probability of the event that R_II>R̅_II, which yields P^better in Fig. <ref>, defined by: P^better= P(R̅_II<R_II, U_m^* is type II)/P(U_m^* is type II). By contrast, for the comparison between FSIC-PA and HSIC-NPA, R̂_II can be either larger or smaller than R̅_II. Thus, it is necessary to consider both the probability that R̂_II>R̅_II (P̂^better in Fig. <ref>) and the probability that R̂_II<R̅_II (P̂^worse in Fig. <ref>), which are defined as: P̂^better =P(R̅_II<R̂_II, U_m^* is type II)/P(U_m^* is type II), and P̂^worse=1-P̂^better, respectively. An interesting observation from Fig. <ref> is that the curves for P̂^better and P^better coincide. Fig. <ref> also shows that P̂^better and P^better increase with the SNR and approach 1 in the high SNR regime, while P̂^worse decreases with the SNR and approaches 1 in the low SNR regime. These observations help to explain the behavior shown in Fig. <ref> and Fig. <ref>, and lead to the following suggestions for practical systems. On the one hand, at high SNR, it is preferable to apply power adaptation and put the secondary user at the second stage of SIC. On the other hand, at low SNR, it is better to decode the secondary user at the first stage of SIC. Fig. <ref> shows the power consumption of the HSIC-PA and FSIC-PA schemes. Note that the HSIC-NPA scheme always transmits with full power for the secondary users, i.e., β is always set to 1, while β can be set to be less than 1 in the proposed HSIC-PA and FSIC-PA schemes. Thus, HSIC-NPA is more energy consuming than the two schemes proposed in this paper. From the figure, it can be observed that at low SNRs, β approaches 1 in the HSIC-PA scheme and approaches zero in the FSIC-PA scheme. Besides, as the SNR increases, β decreases in the HSIC-PA scheme, while that in the FSIC-PA scheme increases. More interestingly, the values of β for both schemes approach a constant in the high SNR regime, which indicates that the HSIC-PA scheme degenerates to the FSIC-PA scheme at high SNRs. § CONCLUSION In this paper, two novel NOMA schemes, namely HSIC-PA and FSIC-PA, have been proposed, in which secondary users can opportunistically share the channel resource block with the primary user without degrading the outage performance of the primary user compared to OMA. Asymptotic analysis has been provided to characterize the outage performance of the two proposed schemes. It has been shown that both schemes can avoid outage probability error floors for the secondary user. Extensive numerical results have been provided to demonstrate the performance of the proposed schemes. This paper has shown the importance of the SIC decoding order and power control for improving the transmission reliability of NOMA.
http://arxiv.org/abs/2307.02541v1
20230705180002
The Multi-Wavelength Environment of Second Bologna Catalog Sources
[ "A. Paggi", "F. Massaro", "H. Peña-Herazo", "V. Missaglia", "A. Jimenez-Gallardo", "F. Ricci", "S. Ettori", "G. Giovannini", "F. Govoni", "R. D. Baldi", "B. Mingo", "M. Murgia", "E. Liuzzo", "F. Galati" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.GA" ]
Alessandro Paggi [email protected] 0000-0002-5646-2410]A. Paggi Dipartimento di Fisica, Università degli Studi di Torino, via Pietro Giuria 1, I-10125 Torino, Italy Istituto Nazionale di Fisica Nucleare, Sezione di Torino, via Pietro Giuria 1, I-10125 Torino, Italy 0000-0002-1704-9850]F. Massaro Dipartimento di Fisica, Università degli Studi di Torino, via Pietro Giuria 1, I-10125 Torino, Italy Istituto Nazionale di Fisica Nucleare, Sezione di Torino, via Pietro Giuria 1, I-10125 Torino, Italy INAF - Osservatorio Astrofisico di Torino, via Osservatorio 20, I-10025 Pino Torinese, Italy Consorzio Interuniversitario per la Fisica Spaziale (CIFS), via Pietro Giuria 1, 10125 Torino, Italy 0000-0003-0032-9538]H. Penã-Herazo Dipartimento di Fisica, Università degli Studi di Torino, via Pietro Giuria 1, I-10125 Torino, Italy Instituto Nacional de Astrofísica, Óptica y Electrónica, Tonantzintla, Puebla 72840, Mexico East Asian Observatory, Hilo, HI 96720, USA 0000-0001-8382-3229]V. Missaglia Institute of Astrophysics, Foundation for Research and Technology - Hellas, Voutes, 7110 Heraklion, Greece Istituto Nazionale di Fisica Nucleare, Sezione di Torino, via Pietro Giuria 1, I-10125 Torino, Italy INAF - Osservatorio Astrofisico di Torino, via Osservatorio 20, I-10025 Pino Torinese, Italy 0000-0003-4413-7722]A. Jimenez-Gallardo Dipartimento di Fisica e Astronomia, Università di Bologna, Via Gobetti 93/2, 40122 Bologna, Italy 0000-0001-5742-5980]F. Ricci Dipartimento di Matematica e Fisica, Università Roma Tre, via della Vasca Navale 84, I-00146 Roma, Italy INAF - Osservatorio Astronomico di Roma, Via Frascati 33, I-00040 Monte Porzio Catone, Italy 0000-0003-4117-8617]S. Ettori INAF - Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, via Gobetti 93/3, 40129 Bologna, Italy Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, viale Berti Pichat 6/2, 40127 Bologna, Italy 0000-0003-4916-6362]G. Giovannini Dipartimento di Fisica e Astronomia, Università di Bologna, Via Gobetti 93/2, 40122 Bologna, Italy INAF - Istituto di Radioastronomia (IRA), Via P. Gobetti 101, 40129 Bologna, Italy 0000-0002-2831-7603]F. Govoni INAF - Osservatorio Astronomico di Cagliari, Via della Scienza 5, 09047 Selargius, Italy 0000-0002-1824-0411]R. D. Baldi INAF - Istituto di Radioastronomia (IRA), Via P. Gobetti 101, 40129 Bologna, Italy 0000-0001-5649-938X]B. Mingo School of Physical Sciences, The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK 0000-0002-4800-0806]M. Murgia INAF - Osservatorio Astronomico di Cagliari, Via della Scienza 5, 09047 Selargius, Italy 0000-0003-0995-5201]E. Liuzzo INAF - Istituto di Radioastronomia (IRA), Via P. Gobetti 101, 40129 Bologna, Italy Italian Alma Regional Center (ARC), Via Piero Gobetti 101, 40129 Bologna, Italy Dipartimento di Fisica, Università degli Studi di Torino, via Pietro Giuria 1, I-10125 Torino, Italy We present the first results of the Chandra Cool Targets (CCT) survey of the Second Bologna Catalog (B2CAT) of powerful radio sources, aimed at investigating the extended X-ray emission surrounding these sources. For the first 33 sources observed in the B2CAT CCT survey, we performed both imaging and spectral X-ray analysis, producing multi-band Chandra images, and compared them with radio observations. To evaluate the presence of extended emission in the X-rays, we extracted surface flux profiles comparing them with simulated ACIS Point Spread Functions. We detected X-ray nuclear emission for 28 sources. 
In addition, we detected 8 regions of increased X-ray flux originating from radio hot-spots or jet knots, and a region of decreased flux, possibly associated with an X-ray cavity. We performed X-ray spectral analysis for 15 nuclei and found intrinsic absorption significantly larger than the Galactic values in four of them. We detected significant extended X-ray emission in five sources, and fitted their spectra with thermal models with gas temperatures ∼ 2 keV. In the case of B2.1 0742+31, the surrounding hot gas is compatible with the ICM of low luminosity clusters of galaxies, while the X-ray diffuse emission surrounding the highly disturbed WAT B2.3 2254+35 features a luminosity similar to those of relatively bright galaxy groups, although its temperature is similar to those of low luminosity galaxy clusters. These results highlight the power of the low-frequency radio selection, combined with short Chandra snapshot observations, to investigate the properties of the X-ray emission from radio sources. § INTRODUCTION Diffuse X-ray emission associated with radio sources, extending well beyond their host galaxies up to hundred of kpc scales, is known and observed since the first X-ray Uhuru <cit.> and Einstein <cit.> missions, and more recently with XMM-Newton <cit.> and Chandra <cit.> telescopes <cit.>. In the last decades, Chandra telescope has extensively studied the X-ray emission of high-redshift radio galaxies, often used as tracers of galaxy clusters with poor or moderately rich environments <cit.>, since extended X-ray emission from these sources can be due to the thermal radiation arising from the hot gas trapped by the gravitational attraction of giant galaxies or permeating the intergalactic medium <cit.>. Alternatively, when the extended X-ray emission in these sources shows a general alignment with the radio axis and/or is spatially coincident with radio structures, a significant contribution to its flux is expected to come from non-thermal processes, and in particular from inverse Compton (IC) scattering of the radio-emitting electrons. In radio lobes, the X-ray emission is generally interpreted as due to IC of these electrons on Cosmic Microwave Background (CMB) photons permeating the radio lobes <cit.>, while in radio hot spots the X-ray emission is believed to be dominated by synchrotron self Compton radiation <cit.>, that is, IC scattering of synchrotron photons by electrons in the radio jets that emitted the synchrotron photons in the first place <cit.>. Finally, X-ray emission in radio galaxies on ∼ 100-200 kpc scale can also be significantly contributed by IC from far-IR photons in galactic starbursts <cit.>. To investigate the nature of the extended X-ray emission surrounding radio sources and study their evolution, in the last decade we carried out the Chandra snapshot survey of the Third Cambridge catalog <cit.> to obtain X-ray coverage of the entire 3CR catalog <cit.>. Through this observational program, we found X-ray emission associated with radio jets <cit.>, hotspots <cit.> as well as diffuse X-ray emission from hot atmospheres and intra-cluster medium (ICM) in galaxy clusters <cit.>, extended X-ray emission aligned with the radio axis of several moderate and high redshift radio galaxies <cit.>, and the presence of extended X-ray emission spatially associated with optical emission line regions not coincident with radio structures <cit.>. 
In this work we present the first results from a Chandra snapshot survey performed on the Second Bologna Catalog (B2CAT) of powerful radio sources. The B2CAT <cit.>, listing about 10000 sources detected above 0.1 Jy (completeness above 0.2 Jy) at 408 MHz with the Bologna Northern Cross Telescope between 21° and 40° declination, is well suited to study the properties of extra-galactic radio sources. As a low-frequency radio selected sample, its selection criteria are unbiased with respect to X-ray properties and to the active galactic nucleus (AGN) viewing angle. Well-studied samples of radio-loud active galaxies <cit.>, as well as of radio-loud quasars <cit.> and spiral galaxies <cit.>, have been derived from this catalog. The high sensitivity of the B2CAT with respect to other radio samples, such as the 3CR, has allowed the study of the properties of low luminosity radio galaxies, such as Fanaroff-Riley I <cit.> radio galaxies, and of the non-thermal properties of spiral galaxies and low luminosity quasars. In addition, the B2CAT spans a wide range in redshift and radio power, and it is augmented by a vast suite of ground- and space-based observations at all accessible wavelengths (optical <cit.>, and radio between 1.4 and 8 GHz <cit.>). This catalog represents an ideal sample to study the X-ray emission arising from jet knots, hotspots, and nuclei of radio sources, to look for new galaxy clusters via the presence of extended X-ray emission unrelated to the radio structures <cit.>, and to investigate observational evidence of AGN feedback on the hot gas in galaxies, groups, and clusters of galaxies <cit.>. The B2CAT therefore represents a powerful tool to optimize the Chandra Cool Targets (CCT[https://cxc.harvard.edu/proposer/CCTs.htmlhttps://cxc.harvard.edu/proposer/CCTs.html]) observing strategy, that is, observations acquired while the spacecraft performs pointings that avoid overheating (or excessive cooling) of various observatory sub-systems. The paper is organized as follows. A brief description of the B2CAT CCT sources observed to date is presented in Sect. <ref>. Chandra data reduction and analysis are presented in Sect. <ref>. Results on individual sources, imaging, and spectral analysis are presented in Sect. <ref>, Sect. <ref> and Sect. <ref>, while Sect. <ref> is devoted to our conclusions. Unless otherwise stated we adopt cgs units for numerical results, and we assume a flat cosmology with H_0=69.6 km s^-1 Mpc^-1, Ω_M=0.286 and Ω_Λ=0.714 <cit.>. § SAMPLE DESCRIPTION In the selection of the B2CAT CCT survey targets we started from the B2CAT catalog, excluding sources already observed by Chandra. Taking into account the ecliptic latitude cut (ℓ > 55°), we then selected a sample of 3080 sources. We stress that, due to the serendipitous nature of the CCT program, large samples are required to perform such observations, and that the proposed sources will be observed randomly. Finally, for this survey we applied for snapshot (16 ks) observations, following the same approach used for the Chandra 3C survey <cit.>. The sample of radio sources discussed in this work consists of the first 33 B2CAT targets observed by Chandra during the CCT survey up to June 2023. The main properties of these sources are presented in Table <ref>. Redshift measurements are available for only seven sources. In addition to the newly obtained Chandra data, we collected multi-wavelength radio data for the sources in the sample.
In particular, to investigate the correlation between the diffuse X-ray emission and the extended radio structures, we collected 74 MHz Karl G. Jansky Very Large Array (VLA) data obtained through the VLA Low-frequency Sky Survey Redux <cit.>[VLSSr images cover an area of ∼ 30,530 square degrees with a resolution of 75″, and an rms of ∼ 0.1 Jy/beam.], 145 MHz Low Frequency Array <cit.> observations from the forthcoming Data Release 2 (DR2) of LoTSS[The DR2 v2.2 was run as part of the ddf pipeline (https://github.com/mhardcastle/ddf-pipelinehttps://github.com/mhardcastle/ddf-pipeline); the LoTSS DR1 consists of images at 6″ resolution and ∼ 70 μJy/beam sensitivity covering an area of ∼ 400 square degrees, while the footprint of the DR2 covers an area of approximately 5700 square degrees, both performed in the northern hemisphere.] processed by the international LOFAR collaboration as part of the LOFAR Data Releases 1 and 2 (<cit.> and <cit.>, respectively), 150 MHz Giant Metrewave Radio Telescope (GMRT) data obtained from the TIFR GMRT Sky Survey <cit.>[TGSS images cover an area of ∼ 36,900 square degrees, and have a resolution of 25″ for Dec > 19° and of 25″/cos(Dec−19°) for Dec < 19°, with a median rms of ∼ 3.5 mJy/beam.], 1.4 GHz VLA data obtained through the NRAO VLA Sky Survey <cit.>[NVSS images cover the entire sky north of −40° declination with a resolution of 45″, and an rms of ∼ 450 μJy/beam.], and 3 GHz VLA data obtained through the VLA Sky Survey <cit.>[VLASS images cover an area of ∼ 33,885 square degrees at Dec > −40° with a resolution of 2″.5, with an rms of ∼ 120 μJy/beam for the single-epoch observations and of ∼ 70 μJy/beam for the three combined epochs.]. In Appendix <ref> we report the complete set of radio images (see Fig. <ref>) available in the aforementioned surveys for the B2CAT sources considered here, together with estimates of the radio flux for different radio structures (see Table <ref>). § DATA ANALYSIS Chandra observations of B2CAT sources were retrieved from the Chandra Data Archive through the ChaSeR service[http://cda.harvard.edu/chaserhttp://cda.harvard.edu/chaser] (see Table <ref>). They consist of ACIS-S snapshot observations with a nominal exposure time of 16 ks, performed between April 2019 and June 2023 in VFAINT mode. These data have been analyzed with the Chandra Interactive Analysis of Observations <cit.> data analysis system version 4.14 and the Chandra calibration database CALDB version 4.9.8, adopting standard procedures. The observations were filtered for time intervals of high background flux exceeding 3σ above the average level with the deflare task, to attain the final exposures listed in Table <ref>. Field point sources in the 0.3-7 keV energy band were detected with the wavdetect task, adopting a √(2) sequence of wavelet scales (i.e., 1, 1.41, 2, 2.83, 4, 5.66, 8, 11.31 and 16 pixels) and a false-positive probability threshold of 10^-6. Given the relatively short exposure times and the consequent low statistics, we did not correct the absolute astrometry of the Chandra ACIS-S images and did not register them to radio maps, as the typical shift for Chandra images found during the 3C Chandra Snapshot Survey is 0″.5 <cit.>. We produced broad (0.3-7 keV), soft (0.3-3 keV), and hard (3-7 keV) band Chandra images centered on ACIS-S chip 7. We also produced Point Spread Function (PSF) maps (with the mkpsfmap task), effective area corrected exposure maps, and flux maps using the flux_obs task (see Fig. <ref>).
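For illustration, the image-level reduction summarized above could be scripted along the following lines with the CIAO Python interface. This is a hedged sketch under our assumptions: file names, band definitions and output naming are placeholders, fluximage is used here in place of the flux_obs wrapper adopted in the survey, and the deflare filtering is assumed to have been applied beforehand.

```python
# Hedged sketch of a CIAO imaging workflow similar to the one described above.
# File names and band definitions are illustrative, not the actual survey values.
from ciao_contrib.runtool import fluximage, mkpsfmap, wavdetect

evt = "acis_deflared_evt2.fits"   # event file already filtered for background flares

# Exposure-corrected broad/soft/hard band images (fluximage is the single-obsid
# engine behind the flux_obs script used in the paper)
fluximage(infile=evt, outroot="b2cat", bands="0.3:7:2.3,0.3:3:1.5,3:7:4.5",
          binsize=1, clobber=True)

# PSF map for source detection (monochromatic approximation at 2.3 keV)
mkpsfmap(infile="b2cat_0.3-7_thresh.img", outfile="b2cat_psfmap.fits",
         energy=2.3, ecf=0.9, clobber=True)

# wavdetect with a sqrt(2) sequence of wavelet scales and a 1e-6 threshold,
# as adopted in the survey
wavdetect(infile="b2cat_0.3-7_thresh.img", psffile="b2cat_psfmap.fits",
          outfile="b2cat_srcs.fits", scellfile="b2cat_scell.fits",
          imagefile="b2cat_imgfile.fits", defnbkgfile="b2cat_nbkg.fits",
          regfile="b2cat_srcs.reg",
          scales="1 1.414 2 2.828 4 5.657 8 11.314 16",
          sigthresh=1e-6, clobber=True)
```

The srcflux photometry and the simulated ChaRT/MARX PSFs discussed below are separate steps not covered by this sketch.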
The image pixel sizes and the σ widths of the Gaussian kernel used for smoothing are listed in Table <ref>. §.§ Imaging Analysis We first proceeded to search for nuclear detections in the broad 0.3-7 keV band images. Using the higher resolution VLASS data, we defined 2″ circular regions coincident with the core emission as identified in the VLASS images. If source cores were not clearly detected in the VLASS images, we tentatively identified as source cores those compact X-ray regions lying at the center of the radio structures (see Sect. <ref>). We evaluated the nuclear X-ray fluxes making use of the srcflux CIAO task, which evaluates the PSF corrections and the detector effective area and response function at the source location, assuming a power-law spectrum with a slope of 1.8 - as usually observed in AGN nuclear emission - and taking into account the photo-electric absorption by the Galactic column density along the line of sight <cit.>. The nuclear fluxes are listed in Table <ref>. With this procedure, we confirmed nuclear detections for 28 out of 33 sources, with 19 being detected at least at 3 σ significance. For seven other sources (B2.2 0143+24, B2.1 0241+30, B2.1 0455+32B, B2.1 0455+32C, B2.2 0755+24, B2.4 1112+23, and B2.4 2054+22B) we obtained a 2 σ significance detection of the nuclear emission, for two sources (B2.3 0516+40 and B2.2 0038+25B) we obtained a marginal 1 σ significance detection of the nuclear emission, and for two sources (B2.2 1338+27 and B2.3 2334+39) we were only able to put a 1σ upper limit on the nuclear flux. For the remaining three sources (B2.1 0302+31, B2.2 1439+25 and B2.2 2133+27), since we do not have any clear indication - neither in the radio nor in the X-ray data - of the location of the core, we do not report any nuclear flux estimate. We note that the brightest nucleus in our sample, B2.1 0742+31, is significantly affected by pileup, as shown in the map obtained with the CIAO task pileup_map; therefore, the value of 17.7± 0.4 ×10^-13 erg cm^-2 s^-1 reported in Table <ref> should be considered as a lower limit to the real flux (see Sect. <ref>). The correlation between AGN nuclear radio emission and X-ray emission from the ROSAT All Sky Survey <cit.> has been observed and discussed in several works <cit.>. To investigate this correlation in our sample, in Fig. <ref> we plot the X-ray nuclear fluxes evaluated above versus the 3 GHz radio nuclear specific fluxes, evaluated from the VLASS maps that show a discernible nuclear emission (regions N in Table <ref>). Despite the paucity of the sample, there appears to be a correlation between the nuclear emission at radio and X-ray frequencies, as evaluated through hierarchical Bayesian linear regression <cit.>. In particular, a linear regression of the logarithmic values of both quantities, including the highly piled-up B2.1 0742+31, yields a slope of 0.95_-0.35^+0.36 with a correlation coefficient of 0.75 (red line in Fig. <ref>), while excluding it yields a slope of 1.31_-0.81^+0.83 with a correlation coefficient of 0.65 (blue line in Fig. <ref>), both consistent with previous results on radio-loud AGNs <cit.>. We note that the X-ray fluxes expected from the correlation that excludes B2.1 0742+31 lie at larger values than that evaluated for this source, reinforcing the point that the X-ray flux evaluated for B2.1 0742+31 should be regarded as a lower limit. As shown in Fig.
<ref>, many sources in the present sample show hints of diffuse soft X-ray 0.3-3 keV band emission associated with the extended radio structures mapped by the GMRT and LOFAR images. In order to get a preliminary characterization of this diffuse emission we evaluated its flux making use of the srcflux CIAO task, assuming a thermal spectrum with 2 keV temperature and abundance 0.25 solar - as expected from typical ICM emission - and taking again into account the Galactic photo-electric absorption. The fluxes of the extended emissions are listed in Table <ref>. Also the correlation between radio and X-ray diffuse emission in clusters has been discussed in several studies <cit.>. To investigate this in our sample, in Fig. <ref> we plot these X-ray fluxes versus the 145 MHz specific fluxes evaluated from LOFAR maps from regions of extended radio emission (regions A and B listed in Table <ref>). In this case, however, the correlation between the extended emission at radio and X-ray frequencies appears very low, with a slope 0.20_-0.38^+0.36 and a correlation coefficient of 0.26. To evaluate the significance of this extended emission in the X-rays, we extracted net surface flux profiles in the 0.3-3 keV band from concentric annuli centered on the radio core positions, excluding counts from detected sources, and evaluating the background level from source-free regions of chip 7. The width of the bins was adaptively determined to reach a minimum signal to noise ratio of 3. In the outer regions, when this ratio could not be reached, we extended the bin width to the edges of the ACIS Chip. We then compared these profiles with those extracted from simulated ACIS PSFs, using the same procedure fully described in <cit.> and that we briefly summarize here. The Chandra PSFs were simulated using rays produced by the Chandra Ray Tracer (ChaRT[http://cxc.harvard.edu/ciao/PSFs/chart2/http://cxc.harvard.edu/ciao/PSFs/chart2/]) projected on the image plane by MARX[http://space.mit.edu/CXC/MARX/http://space.mit.edu/CXC/MARX/]. For each observation, we generated the average from 1000 PSF simulations centered at the coordinates of radio or X-ray cores. We then produced images of these PSFs in the 0.3-3 keV band, and extracted profiles from the same annuli used for the source images. Finally, the PSF surface flux profiles were normalized to match the level obtained in the innermost annulus for the source images. We found that the 0.3-3 keV soft-band emission is extended at least at 5 σ significance beyond 10 for 5 sources in the present sample (B2.4 0004+21, B2.1 0302+31, B2.4 0412+23, B2.1 0742+31 and B2.3 2254+35). Fig. <ref> shows the comparison of the net surface flux profiles for these sources (black dots) and their corresponding PSFs (red dots), and we see that the former are clearly extended above the latter, especially for B2.1 0302+31 and B2.3 2254+35. §.§ Individual sources In this section, we report X-ray compact features associated with radio structures the sources in our sample, while properties of the extended X-ray emission will be discussed in Sect. <ref>. The broad-band 0.3-7 keV flux maps of the central region of each source are presented in Fig. <ref>, with overlaid in black the 3 GHz VLASS contours from Fig. <ref>. §.§.§ B2.4 0004+21 Also known as NVSS J000727+220413, this source shows a typical Fanaroff-Riley II <cit.> structure, with indication of extended X-ray emission (see Fig. <ref>). Apart from the bright X-ray nucleus (region 1 in Fig. 
<ref>, with a 0.3-7 keV flux of 20_-2^+2×10^-14 erg cm^-2 s^-1, see Sect. <ref>), the regions of increased X-ray flux in correspondence with the radio structures (regions 2 and 3 in Fig. <ref>) have low (<2 σ) significance with respect to the level of the diffuse emission at the same radial distance from the nucleus. §.§.§ B2.2 0038+25B This source, also known as PKS 0038+255, shows a FRII radio morphology as mapped by the VLASS image. We do not have a clear detection of the radio core, although between the lobes we see a faint region of X-ray emission (region 1 in Fig. <ref>) with a 0.3-7 keV flux of 5_-2^+3×10^-15 erg cm^-2 s^-1 (see Sect. <ref>), that we identify as the X-ray nucleus. There are some hints that this region extends along the radio axis toward the lobes, but the low statistics prevent us from drawing further conclusions. §.§.§ B2.2 0143+24 This source, also known as NVSS J014628+250616, shows a complex, extended radio morphology in the LOFAR images, while the VLASS data indicate a typical FRII structure (see Fig. <ref>). In Fig. <ref>, region 1 indicates the location of the possible source nucleus. This region yields an upper limit on the flux of 6_-3^+4×10^-15 erg cm^-2 s^-1 (see Sect. <ref>). On both sides of region 1, along the radio axis, there are two regions of increased flux in front of the hot-spots (regions 2 and 3). The brighter region 2 has 12 broad-band counts, and by sampling the emission at the same radial distance from the putative nucleus we conclude that this is marginally significant at 3σ level. §.§.§ B2.4 0145+22 This source, known as NVSS J014750+223852, shows in the VLASS image a FRII structure, without indications of a clearly detected radio core. In Fig. <ref>, we indicate with region 1 the position of a bright X-ray source between the two radio jets that we identify as the nucleus, for which we estimated a broad-band flux of 8_-1^+1×10^-14 erg cm^-2 s^-1 (see Sect. <ref>). On the east side of the nucleus there is a region of increased X-ray flux co-spatial with the east radio jet (region 2), possibly connected with a jet knot. This region contains 12 broad-band counts, significant at 4σ level above the emission at the same radial distance from the nucleus. §.§.§ B2.4 0229+23 At z=3.420 <cit.>, this source, known as NVSS J023220+231756, shows in the VLASS image a compact structure, coincident with a bright X-ray source that in Fig. <ref> we indicate as region 1. We identify this region as the source nucleus, for which we estimated a broad-band flux of 58_-3^+3×10^-14 erg cm^-2 s^-1 (see Sect. <ref>). No other significant structures are visible in the broad-band Chandra ACIS-S flux map. §.§.§ B2.2 0241+30 This source, known as NVSS J024443+302117, has a FRII structure, with edge-brightened radio lobes. In Fig. <ref> region 1 marks the location of the faint X-ray nucleus, coincident with the radio core, for which we estimated a broad-band flux of 5_-2^+3×10^-15 erg cm^-2 s^-1 (see Sect. <ref>). On the south-west side of the nucleus there is a region of increased X-ray flux coincident with the south-west radio lobe (region 2). This region contains 7 broad-band counts, being marginally significant at 3σ level above the emission at the same radial distance from the nucleus. §.§.§ B2.1 0302+31 This source, also known as NVSS J030524+312928, shows evidence of significant extended X-ray emission (see Fig. <ref>), with the western radio lobe showing a FRII edge-brightened structure, while the eastern lobe appears edge-darkened as in FRI radio sources.
The global radio structure, roughly connected with the X-ray diffuse emission, can be therefore classified as a Hybrid Morphology Radio Source <cit.>. In addition, the eastern radio jet appears bent in the southeast direction, more evidently in the LOFAR data (see Fig. <ref>), as observed in wide-angle tailed radio galaxies <cit.> that usually coincide with the brightest galaxy at the center of a cluster <cit.> This source shows no identifiable nuclear emission, either in the radio or in the X-ray bands. In correspondence with the western radio lobe, there is a region of increased flux (region 1 in Fig. <ref>) that, with 29 broad-band counts, significantly rises above the level of the surrounding emission at 5σ level. §.§.§ B2.4 0401+23 This source, known as NVSS J040452+240656, shows an edge-brightened FRII structure. Region 1 in Fig. <ref> marks the X-ray nucleus with a flux 4_-1^+1×10^-14 erg cm^-2 s^-1 (see Sect. <ref>). Along the radio axis there are two regions of increased X-ray flux (regions 2 and 3 in Fig. <ref>), however only region 3 with its 6 broad-band counts is marginally significant at 3σ level above the emission at the same radial distance from the nucleus. §.§.§ B2.2 0410+26 This source, also known as NVSS J041323+264916, shows a rather compact radio structure in both the LOFAR and the VLASS radio maps, where only the core is detected. Region 1 marks the faint X-ray nucleus with a 11_-3^+4×10^-15 erg cm^-2 s^-1 flux (see Sect. <ref>) in correspondence with the radio core. No other significant structures are visible in the broad-band Chandra ACIS-S flux map. §.§.§ B2.4 0412+23 This source, also known as NVSS J041512+234751, shows a radio structure elongated in the north-south direction, as shown in the LOFAR and GMRT data. The VLASS image shows the radio core and the two compact radio lobes. Apart from the bright X-ray nucleus coincident with the radio core (labeled region 1 in Fig. <ref>) with a broad-band flux of 27_-2^+2×10^-14 erg cm^-2 s^-1 (see Sect. <ref>), B2.4 0412+23 has evidence of extended X-ray emission (see Fig. <ref>), with several regions of extended flux along the radio axis (region 2 in in Fig. <ref>), in correspondence of the radio lobes (regions 3 and 4 in in Fig. <ref>) and on the western side of the nucleus (region 5 in Fig. <ref>). These regions have however a low ≲ 2 σ significance with respect to the surrounding emission, with the exception of region 5, that with 16 broad-band counts reaches a significance of almost 6 σ. §.§.§ B2.3 0454+35 This source shows an elongated radio structure in the north-south direction. In particular the GMRT and VLASS data indicate a bright hot-spot on the north and a narrow jet in the southern direction - possibly a one sided relativistic jet - , which originates from a bright compact X-ray source (region 1 in Fig. <ref>) with a broad-band flux of 80_-3^+3×10^-14 erg cm^-2 s^-1 (see Sect. <ref>), that we identify as the X-ray nucleus. No other radio structure (including the northern hot-spot) shows significant X-ray emission. §.§.§ B2.1 0455+32B This source, also known as NVSS J045906+323613, shows a rather compact radio structure in LOFAR and GMRT images. Its VLASS data, instead, reveal two compact radio lobes along the east-west direction. Between these two lobes there is a faint X-ray source (region 1 in Fig. <ref>) with a broad-band flux of 8_-3^+4×10^-15 erg cm^-2 s^-1 (see Sect. <ref>), possibly the nucleus of the source. The two radio lobes do not show any significant X-ray emission. 
§.§.§ B2.1 0455+32C The LOFAR and VLASS images of this source, also known as NVSS J045913+322607, show two radio lobes along the east-west direction, with an hint of edge-brightened FRII structure. Between the radio lobes there is a faint X-ray source (region 1 in Fig. <ref>) with a broad-band flux of 7_-3^+4×10^-15 erg cm^-2 s^-1 (see Sect. <ref>), that we identify as the source nucleus. No other radio structure shows significant X-ray emission. §.§.§ B2.3 0516+40 This source (also known as NVSS J051946+401507), shows a compact radio structure. The nuclear region (marked as 1 in Fig. <ref>) has a faint broad-band X-ray flux of 4_-3^+4×10^-15 erg cm^-2 s^-1 (see Sect. <ref>). There are two compact regions of enhanced X-ray emission south-west of the nucleus (regions 2 and 3 in Fig. <ref>) that, compared to the level of the diffuse emission at the same radial distance from the nucleus, have a significance of > 3 σ and > 4 σ, respectively. However, they appear disconnected from the radio structure. §.§.§ B2.1 0536+33B This source, known as NVSS J054003+334200, shows an edge-brightened FRII structure in its VLASS image elongated in the east-west direction. Region 1 in Fig. <ref> marks the X-ray nucleus with a 29_-6^+7×10^-15 erg cm^-2 s^-1 flux (see Sect. <ref>). No other significant structures are visible in the broad-band Chandra ACIS-S flux map. §.§.§ B2.1 0549+29 VLASS data show for B2.1 0549+29, also known as NVSS J055255+293203, a rather compact radio structure, with a core and the two lobes along the east-west direction. In Fig. <ref> region 1 marks the X-ray nucleus with a 7_-1^+1×10^-14 erg cm^-2 s^-1 flux (see Sect. <ref>). Also in this case, no other significant structures are visible in the broad-band Chandra ACIS-S flux map. §.§.§ B2.1 0643+30 The radio structure of this source (also known as NVSS J064615+304123) as imaged by VLASS data is compact, showing only the core emission coincident with the X-ray nucleus emitting a 8_-1^+1×10^-14 erg cm^-2 s^-1 broad-band flux (see Sect. <ref>), marked in Fig. <ref> as region 1. Again, the broad-band Chandra ACIS-S flux map shows no other significant structures. §.§.§ B2.1 0742+31 This source (also known as NVSS J074542+314252) at a redshift 0.461 <cit.>, features a FRII edge-brightened radio structure as shown in LOFAR and VLASS data (see Fig. <ref>). This source has the brightest X-ray nucleus of the present sample (marked as region 1 in Fig. <ref>), with an estimated flux of 219_-5^+5×10^-14 erg cm^-2 s^-1 (see Sect. <ref>), and is therefore affected by significant pileup (see Sect. <ref>). In addition, B2.1 0742+31 shows significant diffuse X-ray emission (see Fig. <ref>), both along the radio axis and across it, as shown in Fig. <ref>. In particular, there are two regions of increased X-ray flux, north of the nucleus (region 2) and in correspondence of the southeast radio lobe (region 3). Region 2, north of the nucleus and connected with the emission surrounding the latter, contains 17 broad-band counts, while region 3 contains 30 broad-band counts. Both regions are significant at 4 σ level with respect to the surrounding emission. §.§.§ B2.2 0755+24 This source (also known as NVSS J075802+242219), at a redshift 0.502 <cit.>, has a compact radio structure, where only the radio lobes are visible in VLASS data (see Fig. <ref>). In Fig. <ref> we mark as region 1 the location that we identify as the faint X-ray nucleus, with an estimated broad-band flux of 8_-3^+4×10^-15 erg cm^-2 s^-1 (see Sect. <ref>). 
No other significant structures are visible in the broad-band Chandra ACIS-S flux map. §.§.§ B2.3 0848+34 This source (also known as J085108+341925), at a redshift of 0.697 <cit.>, shows a rather compact radio structure as imaged by LOFAR data, with a slight extension toward the south. The VLASS data reveal a slightly elongated structure, with two small lobes along the east-west direction. Between the lobes we see a compact region of increased X-ray flux (marked as 1 in Fig. <ref>) with a broad-band X-ray flux of 9_-3^+4×10^-15 erg cm^-2 s^-1 (see Sect. <ref>) that we identify as the source nucleus. Besides this nuclear region, there are no other significant structures in the broad-band Chandra ACIS-S flux map. §.§.§ B2.4 0939+22A This source (also known as NVSS J094158+214743), with a redshift of 0.572 <cit.>, has FRII radio structure, as imaged with VLASS data, with the radio axis aligned along the northeast-southwest direction. The southwestern radio lobe is bent in the northwestern direction. The radio image does not show a clear core, but there is a faint point-like region between the two radio lobes (marked as region 1 in Fig. <ref>) that we identify as the X-ray nucleus, with a broad-band flux of 9_-3^+4×10^-15 erg cm^-2 s^-1 (see Sect. <ref>). In addition, there is a region of increased X-ray flux (region 2 in Fig. <ref>) coincident with the northeastern radio lobe. This region contains 25 broad-band counts, and it is therefore highly significant with respect to the emission at the same radial distance from the nucleus at 7 σ level. §.§.§ B2.4 1112+23 The VLASS data of this source, also known as NVSS J111505+232503, only show the radio core, coincident with a region (marked as region 1 in Fig. <ref>) of faint X-ray flux 6_-3^+4×10^-15 erg cm^-2 s^-1 (see Sect. <ref>) that we identify as the source nucleus. Besides the nucleus, there are no other significant structures in the broad-band Chandra ACIS-S flux map. §.§.§ B2.3 1234+37 This source, also known as NVSS J123649+365518, features two radio lobes along the northeast-southwest direction, as imaged by LOFAR and VLASS data. Between the radio lobes there is a faint X-ray source (region 1 in Fig. <ref>) with a broad-band flux of 13_-4^+5×10^-14 erg cm^-2 s^-1 (see Sect. <ref>), that we identify as the source nucleus. No other radio structure shows significant X-ray emission. §.§.§ B2.2 1334+27 At a redshift z=3.228 <cit.>, this source, also known as NVSS J133641+270401, shows a faint extended radio structure in its LOFAR image. However, the VLASS data only reveal two compact radio lobes along the northwest-southeast direction. Between the radio lobes there is a X-ray source (region 1 in Fig. <ref>) with a broad-band flux of 9_-1^+1×10^-14 erg cm^-2 s^-1 (see Sect. <ref>), possibly the X-ray source nucleus. No other radio structure shows significant X-ray emission. §.§.§ B2.2 1338+27 This source, also known as NVSS J134029+272326, shows two radio hot-spots along the northwest-southeast direction, as imaged by LOFAR, GMRT and VLASS data. Between the radio lobes there is a faint X-ray source (region 1 in Fig. <ref>) with a broad-band flux of <9 ×10^-15 erg cm^-2 s^-1 (see Sect. <ref>), possibly the X-ray source nucleus. We detected no significant X-ray emission in correspondence with the radio lobes. §.§.§ B2.2 1439+25 The VLASS data of this source, also known as NVSS J144204+250335, show a FRII radio structure extending along the north-south direction without revealing the radio core. 
No significant X-ray structure is revealed in the broad-band Chandra ACIS-S flux map. §.§.§ B2.4 1512+23 This source (also known as NVSS J151414+232711), at a redshift of 0.088 <cit.>, shows a FRII radio structure extending along the north-south direction, as imaged by VLASS data. In particular, the southern lobe appears connected to the central region with a jet-shaped structure, originating from a bright X-ray compact region (marked as 1 in Fig. <ref>) with broad-band X-ray flux of 16_-5^+5×10^-14 erg cm^-2 s^-1 (see Sect. <ref>), that we identify as the source nucleus. There are regions of enhanced X-ray emission in correspondence with the northern lobe (region 2 in Fig. <ref>) and with the southern hot-spot (region 3 in Fig. <ref>). Region 3 has a low <2 σ significance, while region 2 is significant at a 4σ level when compared with respect to the level of the diffuse emission at the same radial distance from the nucleus. §.§.§ B2.4 2054+22B This source, known as NVSS J205658+222954, shows a FRII radio structure in VLASS data, with lobes extending across the east-west direction. A faint region of enhanced X-ray emission (indicated as region 1 in Fig. <ref>) features a broad-band flux of 6_-3^+4×10^-15 erg cm^-2 s^-1 (see Sect. <ref>), that we identify as the source nucleus. North of the nucleus there is another compact region of increased X-ray flux (region 2 in Fig. <ref>) that is significant at 3σ level compared to the level of the diffuse emission at the same radial distance from the nucleus. This region, however, does not appear to be clearly connected with the radio structure. §.§.§ B2.2 2104+24 This source, known as NVSS J210621+243324, shows a FRII VLASS radio structure, with a bright X-ray nucleus (region 1 in Fig. <ref>) emitting a broad-band flux of 30_-2^+2×10^-14 erg cm^-2 s^-1 (see Sect. <ref>). Besides the nucleus, there are no other significant X-ray structures in the broad-band Chandra ACIS-S flux map. §.§.§ B2.2 2133+27 The radio structure of this source (known as NVSS J213516+271626), as imaged by VLASS data, does not have a clear shape, with the radio axis lying along the northwest-southeast direction, and without a clear radio core detection. There seems to be a region of increased X-ray flux in correspondence with the northwestern radio lobe (indicated with region 1 in Fig. <ref>), but its significance with respect to the surrounding emission is only at 2 σ level. §.§.§ B2.3 2254+35 This source (also known as NVSS J225645+354127), at a redshift of 0.114 <cit.>, has a complex radio and X-ray morphology. The VLASS data reveal the location of the radio core and an edge-darkened FRI structure, with a jet extending toward the northern direction, and the other extending toward the eastern direction. The latter jet, in particular, appears bent in the south-east direction at larger radii, as even more evident in the large scale LOFAR data (see Fig. <ref>), revealing a WAT morphology. The region marked as region 1 in Fig. <ref> is coincident with the radio core, and emits a broad-band X-ray flux of 10_-3^+4×10^-15 erg cm^-2 s^-1 (see Sect. <ref>). The X-ray emission is clearly extended (see Fig. <ref>) and shows a complex morphology with two regions of decreased flux (marked with 2 and 3 in Fig. <ref>) that look like X-ray cavities. Region 2, however, is less luminous than the emission at the same radial distance from the nucleus only at 2 σ level, while for region 3 this significance increases to ∼ 4 σ level. 
§.§.§ B2.2 2328+26 The radio structure of this source (known as NVSS J233032+270614), as mapped by the VLASS data, appears slightly elongated in the northeast-southwest direction. The region marked with 1 in Fig. <ref> indicates the point-like source that we identify as the X-ray nucleus, with a broad-band flux of 12_-0.4^+0.4×10^-15 erg cm^-2 s^-1 (see Sect. <ref>). This is the only significant X-ray feature revealed in the broad-band Chandra ACIS-S flux map. §.§.§ B2.3 2334+39 The VLASS data of this source, also known as NVSS J233655+400546, only reveal the location of the radio core and that of the lobes, aligned along the northwest-southeast direction. The LOFAR data (see Fig. <ref>), on the other hand, indicate a FRI edge-darkened radio morphology, with the southeastern lobe bending toward the northeast direction, and the northwestern lobe bending toward the southeast direction. The region marked with 1 in Fig. <ref> indicates the faint point-like source that we identify with the X-ray nucleus, with a broad-band flux < 5 ×10^-15 erg cm^-2 s^-1 (see Sect. <ref>). There are no other significant X-ray features in the broad-band Chandra ACIS-S flux map. §.§ Spectral Analysis To characterize the sources in the present sample in more detail, we performed a spectral analysis of their nuclear emission. We extracted the nuclear spectra in a 2″ circular region centered at the coordinates of the radio or X-ray core, while background spectra were extracted in source-free regions as close as possible to the nuclear extraction region to avoid vignetting effects at the CCD edge, but far enough to exclude contamination from any diffuse emission. We produced auxiliary response files and spectral response matrices for both the nuclear and background spectra, applying point-source aperture corrections to the former (as appropriate for point-like sources). Spectral fitting was performed in the 0.3-7 keV energy range with the Sherpa application <cit.>. Due to the low counts, we performed the spectral fits by modeling the background spectra using the prescription given by <cit.>, that is, a model comprising a thermal plasma component <cit.> with solar abundances and a power-law. We instead modeled the nuclear spectra with a power-law (powerlaw) model, including photo-electric absorption (xstbabs) by the Galactic column density along the line of sight <cit.>. In addition, for the source B2.1 0742+31 we included the jdpileup model <cit.> to account for the ACIS-S detector pileup. Spectra were binned to obtain a minimum of 1 count per bin, making use of the Cash statistic <cit.>. Following this procedure we were able to extract and fit nuclear spectra for 15 sources. The results of these fits are presented in Table <ref> and in Fig. <ref>. Uncertainties correspond to the 1-σ confidence level for one interesting parameter. We note that in many spectra we detect very few counts below 1 keV, as a result of the degrading Chandra effective area at low energies. We see that the intrinsic fluxes estimated from these spectral fits are compatible with those evaluated with srcflux (see Sect. <ref>), with the exception of B2.1 0742+31, for which the jdpileup model estimates a pileup fraction of ∼ 20%, compatible with the value obtained from the pileup map.
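As a reference for the reader, a nuclear fit of this kind can be set up in Sherpa along the following lines. This is a minimal sketch under our assumptions: file names, model component names and the Galactic column density value are placeholders, and the background model is a simplified version of the adopted prescription.

```python
# Minimal Sherpa sketch of an absorbed power-law fit to a nuclear spectrum.
from sherpa.astro.ui import (load_pha, notice, group_counts, set_stat,
                             set_source, set_bkg_model, get_model_component,
                             fit, conf)

load_pha("nucleus_2arcsec.pi")        # spectrum with associated ARF/RMF and background
notice(0.3, 7.0)                      # 0.3-7 keV fitting band
group_counts(1)                       # at least 1 count per bin
set_stat("cash")

# Galactic absorption times a power law for the nucleus
set_source("xstbabs.gal * powlaw1d.pl")
gal = get_model_component("gal")
gal.nH = 0.05                         # Galactic N_H in 10^22 cm^-2 (placeholder value)
gal.nH.freeze()

# Background modeled rather than subtracted: thermal plasma plus power law
set_bkg_model("xsapec.bkg_therm + powlaw1d.bkg_pl")

fit()
conf()                                # 1-sigma uncertainties on the free parameters
```

The intrinsic-absorption fits described next simply multiply the source expression by an additional xsztbabs component with its redshift frozen at the source value.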
In addition, we notice that, while in a number of sources we find slopes Γ∼ 1.5 - 2.0, compatible with what is observed in similar sources <cit.>, in others the spectral fit yields particularly flat slopes, indicating the possible presence of significant intrinsic absorption. To investigate the presence of intrinsic absorption, we repeated the spectral fitting of the nuclear spectra freezing the power-law slope to 1.8 and considering an additional absorption component (xsztbabs) at the source redshift or, if this measurement was not available, at redshift zero. The results of these fits are presented in Table <ref> and in Fig. <ref>. For most of the sources we are only able to put upper limits on the intrinsic absorption column, or find values compatible with the Galactic ones. For the sources B2.4 0004+21, B2.4 0145+22 and B2.4 0401+23, instead, we find intrinsic absorbing columns of ∼10^22 cm^-2, while for the source B2.4 0229+23 we find an additional absorbing column of ∼10^23 cm^-2, all significantly larger than the Galactic values. As discussed in Sect. <ref>, we have 5 sources that show evidence of significant extended emission in the soft 0.3-3 keV band (see Fig. <ref>). We extracted source spectra in large elliptical regions that encompass the whole extended emission visible in the flux maps (see Fig. <ref>), excluding detected point sources as well as the 2″ nuclear regions. Background spectra were extracted in the same source-free regions used for the nuclear spectral fitting. In this case, we produced spectral response matrices weighted by the count distribution within the aperture (as appropriate for extended sources). We used the same procedure adopted for the nuclear spectral fitting, that is, modeling the background spectra and using the Cash statistic, with spectra binned to obtain a minimum of 1 count per bin. The sources B2.4 0004+21 and B2.4 0412+23 did not yield enough counts to allow a reasonable fit, and were therefore excluded from the following analysis. To fit the spectra of the extended emission we used a model comprising the Galactic absorption and a thermal plasma (xsapec[https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/XSmodelApec.htmlhttps://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/XSmodelApec.html]) with an abundance of 0.25 solar (as expected from typical ICM emission). The redshift of the thermal plasma was set at the value of the source redshift or, if this measurement was not available, at redshift zero. The results of these fits are presented in Table <ref> and in Fig. <ref>. Again, uncertainties correspond to the 1-σ confidence level for one interesting parameter. We obtain reasonable best-fit temperatures between 1.9 and 2.5 keV. The diffuse emission surrounding these sources, however, can be a combination of thermal emission from the hot gas of the ICM and IC/CMB. To minimize the contamination from such non-thermal emission, we repeated the spectral extraction excluding the regions of extended radio emission shown in Fig. <ref>. The results of these fits are presented in Table <ref> and in Fig. <ref>. The temperature values obtained in this way are similar to those obtained previously (although with larger uncertainties), suggesting that in these sources the contribution from non-thermal IC/CMB emission could be sub-dominant with respect to the thermal radiation arising from the ICM.
Since the presence of IC/CMB may be revealed by significant X-ray emission above 2 keV <cit.>, we produced 3-7 keV hard-band flux images for the 5 sources that show evidence of significant extended emission and present them in Fig. <ref>, with radio contours drawn at 3 GHz overlaid in green. We see that the only source showing significant X-ray emission in this band in correspondence with the extended radio structures is B2.1 0302+31. In particular, the hard-band emission in the east and west radio lobes is detected at 2.4σ and 3.3σ significance, respectively. Although this is suggestive of the presence of non-thermal IC/CMB emission in this source, the low statistics do not allow us to draw firm conclusions. The detection of this extended X-ray emission in any case suggests the presence of an ICM, indicating that these sources may belong to groups or clusters of galaxies. In the case of B2.1 0742+31, in particular, this is reinforced by the presence in the source field of 5 additional galaxies at redshifts close to that of the galaxy hosting B2.1 0742+31 (see Fig. <ref>), that is, with a maximum redshift separation Δ z = 0.005 (i.e., ∼ 1500 km/s) corresponding to the maximum velocity dispersion observed in groups and clusters of galaxies <cit.>. We therefore compared the properties of the hot gas surrounding these radio sources with those observed in groups and clusters of galaxies. In particular, we are interested in the hot gas X-ray luminosity vs. temperature correlation <cit.>. Since we have redshift estimates only for B2.1 0742+31 and B2.3 2254+35, we restrict our analysis to these two sources. In Fig. <ref> we compare the temperature (kT) and X-ray bolometric luminosity L_X of the thermal gas surrounding B2.1 0742+31 and B2.3 2254+35 with those of groups and clusters of galaxies from Figure 6 of <cit.>, where the X-ray luminosities have been rescaled to the cosmology adopted in the present analysis. We see that, while the X-ray emission of B2.1 0742+31 is compatible with the ICM emission of low luminosity clusters of galaxies, the X-ray diffuse emission surrounding the highly disturbed WAT B2.3 2254+35 lies somewhat at the edge of the L_X - kT relation, with a luminosity similar to those of bright groups of galaxies, and a temperature similar to those of low luminosity clusters of galaxies, possibly due to the disturbed nature of the gas surrounding this WAT. Finally, we estimated the mass of the X-ray emitting gas in B2.1 0742+31 and B2.3 2254+35 from the spectral fits. From the normalization of the xsapec models (i.e., their emission measures EM), we can evaluate the gas proton density n_p. Assuming a uniform particle density in the emitting region, we have a proton density n_p = √(10^14 EM η 4 π D_A^2(1+z)^2/V), where D_A is the angular diameter distance of the source, V is the emitting region volume, and η≈ 0.82 is the ratio of proton to electron density in a fully ionized plasma. We can estimate the total gas mass as M_gas = μ m_u n_tot V, where m_u is the atomic mass unit, n_tot = n_p (1+1/η) is the total gas density, and μ=0.6 is the mean molecular weight <cit.>. To estimate the volumes we model the projected emission regions as ellipses with semi-major and semi-minor axes R and r, respectively, encompassing the diffuse X-ray and radio emission. Then, we model the emitting regions as ellipsoids with volume V=4/3 π R r^2 (when excluding the radio emission region, the volume would be the difference between the X-ray and radio emission region volumes, see Fig. <ref>).
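In practice this estimate reduces to a few lines of arithmetic. The sketch below is illustrative only: all numerical inputs (cosmology, redshift, xsapec normalization, and ellipsoid semi-axes) are placeholders rather than the measured quantities for B2.1 0742+31 or B2.3 2254+35.

```python
# Illustrative evaluation of n_p and M_gas from an xsapec normalization (EM);
# all numerical inputs are placeholders, not measured values for these sources.
import numpy as np
from astropy import constants as const, units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)          # assumed cosmology
z, norm = 0.1, 1.0e-4                          # placeholder redshift and xsapec norm (EM)
R, r = 100 * u.kpc, 50 * u.kpc                 # placeholder ellipsoid semi-axes
eta, mu = 0.82, 0.6                            # proton-to-electron ratio, mean molecular weight

D_A = cosmo.angular_diameter_distance(z).to(u.cm).value
V = (4.0 / 3.0) * np.pi * R.to(u.cm).value * r.to(u.cm).value ** 2   # cm^3

n_p = np.sqrt(1e14 * norm * eta * 4 * np.pi * D_A**2 * (1 + z) ** 2 / V)  # cm^-3
n_tot = n_p * (1 + 1 / eta)
M_gas = mu * const.u.cgs.value * n_tot * V / const.M_sun.cgs.value        # in M_sun
print(f"n_p = {n_p:.2e} cm^-3, M_gas = {M_gas:.2e} M_sun")
```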
Taking into account the uncertainties on the best fit parameters and the different spectral extraction regions (that is, including or excluding the extended radio structures), we estimate M_gas = 2.6 - 9.1 ×10^12 M_⊙ and M_gas = 0.9 - 1.4 ×10^12 M_⊙ for B2.1 0742+31 and B2.3 2254+35, respectively, typical of rich groups <cit.>. § SUMMARY AND CONCLUSIONS In this work we have analyzed the first 33 Chandra ACIS observations obtained through the CCT snapshot campaign on the Second Bologna Catalog of radio sources. The X-ray data have been compared with 145 MHz LOFAR, 150 MHz GMRT, and 3 GHz VLASS data, to study the connection between the X-ray and radio emission in radio galaxies. The main results of this analysis can be summarized as follows: * We detected X-ray nuclear emission for 28 of 33 sources. In particular, 19 nuclei were detected at least at 3 σ significance, 7 were detected at 2 σ significance, and 2 were detected with a 1 σ marginal significance. For two other sources we were only able to put an upper limit on the nuclear flux, and for the remaining three sources we do not report any nuclear flux estimate, since we do not have any clear indication of the location of their core. * We found a mild correlation between the X-ray and radio nuclear fluxes, while the flux of diffuse X-ray emission does not appear to correlate with the radio flux of the extended radio structures. * Comparing the X-ray surface flux profiles of the sources with those of simulated PSFs, we detected extended emission with a minimum 5 σ significance level beyond 10″ from the nucleus in 5 sources. * We detected 8 regions of increased X-ray flux in correspondence with radio hot-spots or jet knots at a minimum significance level of 3 σ, 2 of which above 5 σ level of significance. In B2.3 2254+35 we were able to detect a region of decreased flux, possibly associated with an X-ray cavity, at 4 σ level of significance. * We performed an X-ray spectral analysis for 15 nuclei with a power-law model, and found for the nuclei of B2.4 0004+21, B2.4 0145+22 and B2.4 0401+23 significant intrinsic absorption N_H,int∼10^22 cm^-2, and for B2.4 0229+23 N_H,int∼10^23 cm^-2. * We performed an X-ray spectral analysis of the diffuse emission surrounding 3 sources, finding temperatures of the hot plasma of ∼ 2 keV. There is some hint of X-ray emission above 3 keV in correspondence with the radio lobes in B2.1 0302+31, which may suggest the presence of IC/CMB in this source. The low statistics, however, do not allow us to draw firm conclusions. * For two of these sources, B2.1 0742+31 and B2.3 2254+35, we compared the properties of the X-ray emitting gas with those of the ICM surrounding clusters and groups of galaxies. While the hot gas surrounding B2.1 0742+31 is compatible with the ICM of low luminosity clusters of galaxies, the X-ray diffuse emission surrounding the highly disturbed WAT B2.3 2254+35 features a luminosity similar to those of the ICM of bright groups of galaxies, while having a temperature similar to those of the ICM of low luminosity clusters of galaxies. The mass of these X-ray emitting plasmas is of the order of ∼10^12 M_⊙, similar to those observed in the ICM of rich groups. These first results on the B2CAT CCT survey show that the low-frequency radio selection, combined with short X-ray snapshot observations, is a powerful tool to optimize the “fill-in” observing strategy of several X-ray telescopes.
In particular, this proves to be particularly effective with Chandra observatory, since for XMM-Newton such short observations tend to be scheduled at the end of its orbits, which are dominated by high particle background. We thank the anonymous referee for their useful comments and suggestions. This work is supported by the “Departments of Excellence 2018 - 2022” Grant awarded by the Italian Ministry of Education, University and Research (MIUR) (L. 232/2016). This research has made use of resources provided by the Compagnia di San Paolo for the grant awarded on the BLENV project (S1618_L1_MASF_01) and by the Ministry of Education, Universities and Research for the grant MASF_FFABR_17_01. A.P. acknowledges financial support from the Consorzio Interuniversitario per la Fisica Spaziale (CIS) under the agreement related to the grant MASF_CONTR_FIN_18_02. F.M. acknowledges financial contribution from the agreement ASI-INAF n.2017-14-H.0. S.E. acknowledges the financial contribution from the contracts ASI-INAF Athena 2019-27-HH.0, “Attività di Studio per la comunità scientifica di Astrofisica delle Alte Energie e Fisica Astroparticellare” (Accordo Attuativo ASI-INAF n. 2017-14-H.0). A.P. acknowledges W. R. Forman for useful comments and suggestions. This research has made use of data obtained from the Chandra Data Archive. This research has made use of software provided by the Chandra X-ray Center (CXC) in the application packages CIAO, ChIPS, and Sherpa. This research has made use of the NASA/IPAC Extragalactic Database (NED; https://ned.ipac.caltech.eduhttps://ned.ipac.caltech.edu), which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. SAOImageDS9 development has been made possible by funding from the Chandra X-ray Science Center (CXC), the High Energy Astrophysics Science Archive Center (HEASARC) and the JWST Mission office at Space Telescope Science Institute. LOFAR data products were provided by the LOFAR Surveys Key Science project (LSKSP; https://lofar-surveys.org/https://lofar-surveys.org/) and were derived from observations with the International LOFAR Telescope (ILT). LOFAR <cit.> is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, which are owned by various parties (each with their own funding sources), and which are collectively operated by the ILT foundation under a joint scientific policy. The efforts of the LSKSP have benefited from funding from the European Research Council, NOVA, NWO, CNRS-INSU, the SURF Co-operative, the UK Science and Technology Funding Council and the Jülich Supercomputing Centre. The authors thank the staff of the GMRT that made these observations possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. 
The Pan-STARRS1 Surveys (PS1) have been made possible through contributions of the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant No. AST-1238877, the University of Maryland, and Eotvos Lorand University (ELTE). CIAO <cit.>, Sherpa <cit.>, ChiPS <cit.>, SAOImageDS9 <cit.>, TOPCAT <cit.>. [Ahn et al.(2012)]2012ApJS..203...21A Ahn, C. P., Alexandroff, R., Allende Prieto, C., et al. 2012, , 203, 21. doi:10.1088/0067-0049/203/2/21 [Alam et al.(2015)]2015ApJS..219...12A Alam, S., Albareti, F. D., Allende Prieto, C., et al. 2015, , 219, 12. doi:10.1088/0067-0049/219/1/12 [Balmaverde et al.(2012)]2012A A...545A.143B Balmaverde, B., Capetti, A., Grandi, P., et al. 2012, , 545, A143. doi:10.1051/0004-6361/201219561 [Belsole et al.(2007)]2007MNRAS.381.1109B Belsole, E., Worrall, D. M., Hardcastle, M. J., et al. 2007, , 381, 1109. doi:10.1111/j.1365-2966.2007.12298.x [Bennett(1962)]1962MNRAS.125...75B Bennett, A. S. 1962, , 125, 75. doi:10.1093/mnras/125.1.75 [Bennett et al.(2014)]2014ApJ...794..135B Bennett, C. L., Larson, D., Weiland, J. L., et al. 2014, , 794, 135. doi:10.1088/0004-637X/794/2/135 [Bergamini et al.(1967)]1967NCimB..52..495B Bergamini, R., Londrillo, P., & Setti, G. 1967, Nuovo Cimento B Serie, 52, 495. doi:10.1007/BF02711093 [Berlind et al.(2006)]2006ApJS..167....1B Berlind, A. A., Frieman, J., Weinberg, D. H., et al. 2006, , 167, 1 [Bilicki et al.(2014)]2014ApJS..210....9B Bilicki, M., Jarrett, T. H., Peacock, J. A., et al. 2014, , 210, 9. doi:10.1088/0067-0049/210/1/9 [Braun et al.(2019)]2019arXiv191212699B Braun, R., Bonaldi, A., Bourke, T., et al. 2019, arXiv:1912.12699 [Brinkmann et al.(1997)]1997A A...319..413B Brinkmann, W., Yuan, W., & Siebert, J. 1997, , 319, 413 [Brinkmann et al.(2000)]2000A A...356..445B Brinkmann, W., Laurent-Muehleisen, S. A., Voges, W., et al. 2000, , 356, 445 [Breiding et al.(2023)]2023MNRAS.518.3222B Breiding, P., Meyer, E. T., Georganopoulos, M., et al. 2023, , 518, 3222. doi:10.1093/mnras/stac3081 [Burke et al.(2020)]2020zndo...3944985B Burke, D., Laurino, O., Wmclaugh, et al. 2020, Zenodo [Capetti et al.(2002)]2002A A...383..104C Capetti, A., Celotti, A., Chiaberge, M., et al. 2002, , 383, 104. doi:10.1051/0004-6361:20011714 [Chambers et al.(2016)]2016arXiv161205560C Chambers, K. C., Magnier, E. A., Metcalfe, N., et al. 2016, arXiv:1612.05560. doi:10.48550/arXiv.1612.05560 [Colla et al.(1970)]1970A AS....1..281C Colla, G., Fanti, C., Ficarra, A., et al. 1970, , 1, 281 [Colla et al.(1972)]1972A AS....7....1C Colla, G., Fanti, C., Fanti, R., et al. 1972, , 7, 1 [Colla et al.(1973)]1973A AS...11..291C Colla, G., Fanti, C., Fanti, R., et al. 1973, , 11, 291 [Condon et al.(1998)]1998AJ....115.1693C Condon, J. J., Cotton, W. D., Greisen, E. W., et al. 1998, , 115, 1693. 
doi:10.1086/300337 [Crawford & Fabian(2003)]2003MNRAS.339.1163C Crawford, C. S. & Fabian, A. C. 2003, , 339, 1163. doi:10.1046/j.1365-8711.2003.06268.x [Dasadia et al.(2016)]2016MNRAS.458..681D Dasadia, S., Sun, M., Morandi, A., et al. 2016, , 458, 681. doi:10.1093/mnras/stw291 [Davis(2001)]2001ApJ...562..575D Davis, J. E. 2001, , 562, 575. doi:10.1086/323488 [de Ruiter et al.(2002)]2002A A...396..857D de Ruiter, H. R., Parma, P., Capetti, A., et al. 2002, , 396, 857. doi:10.1051/0004-6361:20021462 [Doe et al.(2007)]2007ASPC..376..543D Doe, S., Nguyen, D., Stawarz, C., et al. 2007, Astronomical Data Analysis Software and Systems XVI, 376, 543 [Eke et al.(2004)]2004MNRAS.348..866E Eke, V. R., Baugh, C. M., Cole, S., et al. 2004, , 348, 866 [Erlund et al.(2006)]2006MNRAS.371...29E Erlund, M. C., Fabian, A. C., Blundell, K. M., et al. 2006, , 371, 29. doi:10.1111/j.1365-2966.2006.10660.x [Ettori et al.(2013)]2013SSRv..177..119E Ettori, S., Donnarumma, A., Pointecouteau, E., et al. 2013, , 177, 119. doi:10.1007/s11214-013-9976-7 [Evans et al.(2006)]2006ApJ...642...96E Evans, D. A., Worrall, D. M., Hardcastle, M. J., et al. 2006, , 642, 96. doi:10.1086/500658 [Fabbiano et al.(2020)]2020ApJ...902...49F Fabbiano, G., Paggi, A., Karovska, M., et al. 2020, , 902, 49. doi:10.3847/1538-4357/abb5ad [Fabian et al.(2001)]2001MNRAS.322L..11F Fabian, A. C., Crawford, C. S., Ettori, S., et al. 2001, , 322, L11. doi:10.1046/j.1365-8711.2001.04361.x [Fabian et al.(2003)]2003MNRAS.341..729F Fabian, A. C., Sanders, J. S., Crawford, C. S., et al. 2003, , 341, 729. doi:10.1046/j.1365-8711.2003.06394.x [Fabian(2012)]2012ARA A..50..455F Fabian, A. C. 2012, , 50, 455. doi:10.1146/annurev-astro-081811-125521 [Fanti et al.(1974a)]1974A AS...18..147F Fanti, C., Fanti, R., Ficarra, A., et al. 1974, , 18, 147 [Fanti et al.(1974b)]1974A A....32..155F Fanti, R., Ficarra, A., Formiggini, L., et al. 1974, , 32, 155 [Fanti et al.(1987)]1987A AS...69...57F Fanti, C., Fanti, R., de Ruiter, H. R., et al. 1987, , 69, 57 [Fanaroff & Riley(1974)]1974MNRAS.167P..31F Fanaroff, B. L. & Riley, J. M. 1974, , 167, 31P. doi:10.1093/mnras/167.1.31P [Freeman et al.(2001)]2001SPIE.4477...76F Freeman, P., Doe, S., & Siemiginowska, A. 2001, , 4477, 76. doi:10.1117/12.447161 [Fruscione et al.(2006)]2006SPIE.6270E..1VF Fruscione, A., McDowell, J. C., Allen, G. E., et al. 2006, , 6270, 62701V. doi:10.1117/12.671760 [Garon et al.(2019)]2019AJ....157..126G Garon, A. F., Rudnick, L., Wong, O. I., et al. 2019, , 157, 126. doi:10.3847/1538-3881/aaff62 [Germain et al.(2006)]2006ASPC..351...57G Germain, G., Milaszewski, R., McLaughlin, W., et al. 2006, Astronomical Data Analysis Software and Systems XV, 351, 57 [Gioia & Gregorini(1980)]1980A AS...41..329G Gioia, I. M. & Gregorini, L. 1980, , 41, 329 [Gobat et al.(2011)]2011A A...526A.133G Gobat, R., Daddi, E., Onodera, M., et al. 2011, , 526, A133. doi:10.1051/0004-6361/201016084 [Golden-Marx et al.(2021)]2021ApJ...907...65G Golden-Marx, E., Blanton, E. L., Paterno-Mahler, R., et al. 2021, , 907, 65. doi:10.3847/1538-4357/abcd96 [Gopal-Krishna & Wiita(2000)]2000A A...363..507G Gopal-Krishna & Wiita, P. J. 2000, , 363, 507 [Gursky et al.(1971)]1971ApJ...167L..81G Gursky, H., Kellogg, E., Murray, S., et al. 1971, , 167, L81. doi:10.1086/180765 [Hardcastle et al.(2004)]2004ApJ...612..729H Hardcastle, M. J., Harris, D. E., Worrall, D. M., et al. 2004, , 612, 729. doi:10.1086/422808 [Hardcastle et al.(2006)]2006MNRAS.370.1893H Hardcastle, M. J., Evans, D. A., & Croston, J. H. 2006, , 370, 1893. 
doi:10.1111/j.1365-2966.2006.10615.x [Hardcastle et al.(2009)]2009MNRAS.396.1929H Hardcastle, M. J., Evans, D. A., & Croston, J. H. 2009, , 396, 1929. doi:10.1111/j.1365-2966.2009.14887.x [Hardcastle et al.(2010)]2010MNRAS.401.2697H Hardcastle, M. J., Massaro, F., & Harris, D. E. 2010, , 401, 2697. doi:10.1111/j.1365-2966.2009.15855.x [Hardcastle et al.(2012)]2012MNRAS.424.1774H Hardcastle, M. J., Massaro, F., Harris, D. E., et al. 2012, , 424, 1774. doi:10.1111/j.1365-2966.2012.21247.x [Harris & Grindlay(1979)]1979MNRAS.188...25H Harris, D. E. & Grindlay, J. E. 1979, , 188, 25. doi:10.1093/mnras/188.1.25 [Harris et al.(1980)]1980A AS...42..319H Harris, D. E., Lari, C., Vallee, J. P., et al. 1980, , 42, 319 [HI4PI Collaboration et al.(2016)]2016A A...594A.116H HI4PI Collaboration, Ben Bekhti, N., Flöer, L., et al. 2016, , 594, A116. doi:10.1051/0004-6361/201629178 [Helsdon & Ponman(2000)]2000MNRAS.315..356H Helsdon, S. F. & Ponman, T. J. 2000, , 315, 356. doi:10.1046/j.1365-8711.2000.03396.x [Hoyle(1965)]1965Natur.208..111H Hoyle, F. 1965, , 208, 111. doi:10.1038/208111a0 [Ineson et al.(2013)]2013ApJ...770..136I Ineson, J., Croston, J. H., Hardcastle, M. J., et al. 2013, , 770, 136. doi:10.1088/0004-637X/770/2/136 [Ineson et al.(2015)]2015MNRAS.453.2682I Ineson, J., Croston, J. H., Hardcastle, M. J., et al. 2015, , 453, 2682. doi:10.1093/mnras/stv1807 [Intema et al.(2017)]2017A A...598A..78I Intema, H. T., Jagannathan, P., Mooley, K. P., et al. 2017, , 598, A78. doi:10.1051/0004-6361/201628536 [Jimenez-Gallardo et al.(2020)]2020ApJS..250....7J Jimenez-Gallardo, A., Massaro, F., Prieto, M. A., et al. 2020, , 250, 7. doi:10.3847/1538-4365/aba5a0 [Jimenez-Gallardo et al.(2021)]2021ApJS..252...31J Jimenez-Gallardo, A., Massaro, F., Paggi, A., et al. 2021, , 252, 31. doi:10.3847/1538-4365/abcecd [Jimenez-Gallardo et al.(2022)]2022ApJ...941..114J Jimenez-Gallardo, A., Sani, E., Ricci, F., et al. 2022, , 941, 114. doi:10.3847/1538-4357/aca08b [Jones et al.(1979)]1979ApJ...234L..21J Jones, C., Mandel, E., Schwarz, J., et al. 1979, , 234, L21. doi:10.1086/183102 [Joye & Mandel(2003)]2003ASPC..295..489J Joye, W. A. & Mandel, E. 2003, Astronomical Data Analysis Software and Systems XII, 295, 489 [Kaastra (1992)]kasstra1992 Kaastra, J.S. 1992, An X-Ray Spectral Code for Optically Thin Plasmas (Internal SRON-Leiden Report, updated version 2.0) [Kataoka & Stawarz(2005)]2005ApJ...622..797K Kataoka, J. & Stawarz, Ł. 2005, , 622, 797. doi:10.1086/428083 [Kelly(2007)]2007ApJ...665.1489K Kelly, B. C. 2007, , 665, 1489. doi:10.1086/519947 [Kraft et al.(2012)]2012ApJ...749...19K Kraft, R. P., Birkinshaw, M., Nulsen, P. E. J., et al. 2012, , 749, 19. doi:10.1088/0004-637X/749/1/19 [Krezinger et al.(2020)]2020Symm...12..527K Krezinger, M., Frey, S., Paragi, Z., et al. 2020, Symmetry, 12, 527. doi:10.3390/sym12040527 [Kerr & Lynden-Bell(1986)]1986MNRAS.221.1023K Kerr, F. J. & Lynden-Bell, D. 1986, , 221, 1023. doi:10.1093/mnras/221.4.1023 [Lacy et al.(2020)]2020PASP..132c5001L Lacy, M., Baum, S. A., Chandler, C. J., et al. 2020, , 132, 035001. doi:10.1088/1538-3873/ab63eb [Lane et al.(2014)]2014MNRAS.440..327L Lane, W. M., Cotton, W. D., van Velzen, S., et al. 2014, , 440, 327. doi:10.1093/mnras/stu256 [Law-Green et al.(1995)]1995MNRAS.274..939L Law-Green, J. D. B., Leahy, J. P., Alexander, P., et al. 1995, , 274, 939. doi:10.1093/mnras/274.3.939 [Leahy(1993)]1993LNP...421....1L Leahy, J. P. 1993, Jets in Extragalactic Radio Sources, 1. 
doi:10.1007/3-540-57164-7_74 [Mannering et al.(2013)]2013MNRAS.431..858M Mannering, E., Worrall, D. M., & Birkinshaw, M. 2013, , 431, 858. doi:10.1093/mnras/stt215 [Markevitch et al.(2003)]2003ApJ...583...70M Markevitch, M., Bautz, M. W., Biller, B., et al. 2003, , 583, 70. doi:10.1086/345347 [Maselli et al.(2018)]2018A A...619A..75M Maselli, A., Kraft, R. P., Massaro, F., et al. 2018, , 619, A75. doi:10.1051/0004-6361/201833332 [Massaro et al.(2009a)]2009ApJ...696..980M Massaro, F., Harris, D. E., Chiaberge, M., et al. 2009, , 696, 980. doi:10.1088/0004-637X/696/1/980 [Massaro et al.(2009b)]2009ApJ...692L.123M Massaro, F., Chiaberge, M., Grandi, P., et al. 2009, , 692, L123. doi:10.1088/0004-637X/692/2/L123 [Massaro et al.(2010)]2010ApJ...714..589M Massaro, F., Harris, D. E., Tremblay, G. R., et al. 2010, , 714, 589. doi:10.1088/0004-637X/714/1/589 [Massaro et al.(2011)]2011ApJS..197...24M Massaro, F., Harris, D. E., & Cheung, C. C. 2011, , 197, 24. doi:10.1088/0067-0049/197/2/24 [Massaro et al.(2012)]2012ApJS..203...31M Massaro, F., Tremblay, G. R., Harris, D. E., et al. 2012, , 203, 31. doi:10.1088/0067-0049/203/2/31 [Massaro et al.(2013)]2013ApJS..206....7M Massaro, F., Harris, D. E., Tremblay, G. R., et al. 2013, , 206, 7. doi:10.1088/0067-0049/206/1/7 [Massaro et al.(2015)]2015ApJS..220....5M Massaro, F., Harris, D. E., Liuzzo, E., et al. 2015, , 220, 5. doi:10.1088/0067-0049/220/1/5 [Massaro et al.(2018)]2018ApJS..234....7M Massaro, F., Missaglia, V., Stuardi, C., et al. 2018, , 234, 7. doi:10.3847/1538-4365/aa8e9d [Mernier et al.(2022)]2022arXiv220710092M Mernier, F., Werner, N., Bagchi, J., et al. 2022, arXiv:2207.10092. doi:10.48550/arXiv.2207.10092 [Meyer et al.(2019)]2019ApJ...883L...2M Meyer, E. T., Iyer, A. R., Reddy, K., et al. 2019, , 883, L2. doi:10.3847/2041-8213/ab3db3 [Mingo et al.(2014)]2014MNRAS.440..269M Mingo, B., Hardcastle, M. J., Croston, J. H., et al. 2014, , 440, 269. doi:10.1093/mnras/stu263 [Mingo et al.(2017)]2017MNRAS.470.2762M Mingo, B., Hardcastle, M. J., Ineson, J., et al. 2017, , 470, 2762. doi:10.1093/mnras/stx1307 [Missaglia et al.(2019)]2019A A...626A...8M Missaglia, V., Massaro, F., Capetti, A., et al. 2019, , 626, A8. doi:10.1051/0004-6361/201935058 [Missaglia et al.(2021)]2021ApJS..255...18M Missaglia, V., Massaro, F., Liuzzo, E., et al. 2021, , 255, 18. doi:10.3847/1538-4365/ac00b6 [Moore et al.(1993)]1993MNRAS.261..827M Moore, B., Frenk, C. S., & White, S. D. M. 1993, , 261, 827 [Mulchaey(2000)]2000ARA A..38..289M Mulchaey, J. S. 2000, , 38, 289. doi:10.1146/annurev.astro.38.1.289 [Okoye(1972)]1972MNRAS.160..339O Okoye, S. E. 1972, , 160, 339. doi:10.1093/mnras/160.3.339 [Okoye(1973)]1973MNRAS.165..413O Okoye, S. E. 1973, , 165, 413. doi:10.1093/mnras/165.4.413 [Orienti et al.(2012)]2012MNRAS.419.2338O Orienti, M., Prieto, M. A., Brunetti, G., et al. 2012, , 419, 2338. doi:10.1111/j.1365-2966.2011.19882.x [Owen & Rudnick(1976)]1976ApJ...205L...1O Owen, F. N. & Rudnick, L. 1976, , 205, L1. doi:10.1086/182077 [Padrielli et al.(1981)]1981A AS...46..473P Padrielli, L., Kapahi, V. K., & Katgert-Merkelijn, J. K. 1981, , 46, 473 [Paggi et al.(2021)]2021A A...647A..79P Paggi, A., Massaro, F., Peña-Herazo, H. A., et al. 2021, , 647, A79. doi:10.1051/0004-6361/202039813 [Parekh et al.(2017)]2017MNRAS.464.2752P Parekh, V., Dwarakanath, K. S., Kale, R., et al. 2017, , 464, 2752. doi:10.1093/mnras/stw2521 [Paul et al.(2022)]2022arXiv221101393P Paul, S., Kale, R., Datta, A., et al. 
2022, arXiv:2211.01393 [Ricci et al.(2018)]2018ApJ...867...35R Ricci, F., Lovisari, L., Kraft, R. P., et al. 2018, , 867, 35. doi:10.3847/1538-4357/aae487 [Rogora et al.(1986)]1986A AS...64..557R Rogora, A., Padrielli, L., & de Ruiter, H. R. 1986, , 64, 557 [Sabater et al.(2021)]2021A A...648A...2S Sabater, J., Best, P. N., Tasse, C., et al. 2021, , 648, A2. doi:10.1051/0004-6361/202038828 [Saripalli & Roberts(2018)]2018ApJ...852...48S Saripalli, L. & Roberts, D. H. 2018, , 852, 48. doi:10.3847/1538-4357/aa9c4b [Scharf et al.(2003)]2003ApJ...596..105S Scharf, C., Smail, I., Ivison, R., et al. 2003, , 596, 105. doi:10.1086/377531 [Schwartz et al.(2000)]2000ApJ...540L..69S Schwartz, D. A., Marshall, H. L., Lovell, J. E. J., et al. 2000, , 540, 69. doi:10.1086/312875 [Shimwell et al.(2017)]2017A A...598A.104S Shimwell, T. W., Röttgering, H. J. A., Best, P. N., et al. 2017, , 598, A104. doi:10.1051/0004-6361/201629313 [Shimwell et al.(2019)]2019A A...622A...1S Shimwell, T. W., Tasse, C., Hardcastle, M. J., et al. 2019, , 622, A1. doi:10.1051/0004-6361/201833559 [Shimwell et al.(2022)]2022A A...659A...1S Shimwell, T. W., Hardcastle, M. J., Tasse, C., et al. 2022, , 659, A1. doi:10.1051/0004-6361/202142484 [Smail et al.(2009)]2009ApJ...702L.114S Smail, I., Lehmer, B. D., Ivison, R. J., et al. 2009, , 702, L114. doi:10.1088/0004-637X/702/2/L114 [Smail et al.(2012)]2012ApJ...760..132S Smail, I., Blundell, K. M., Lehmer, B. D., et al. 2012, , 760, 132. doi:10.1088/0004-637X/760/2/132 [Snellen et al.(2002)]2002MNRAS.329..700S Snellen, I. A. G., McMahon, R. G., Hook, I. M., et al. 2002, , 329, 700. doi:10.1046/j.1365-8711.2002.05049.x [Spinrad et al.(1985)]1985PASP...97..932S Spinrad, H., Djorgovski, S., Marr, J., et al. 1985, , 97, 932. doi:10.1086/131647 [Stuardi et al.(2018)]2018ApJS..235...32S Stuardi, C., Missaglia, V., Massaro, F., et al. 2018, , 235, 32. doi:10.3847/1538-4365/aaafcf [Tasse et al.(2021)]2021A A...648A...1T Tasse, C., Shimwell, T., Hardcastle, M. J., et al. 2021, , 648, A1. doi:10.1051/0004-6361/202038804 [Tavecchio et al.(2000)]2000ApJ...544L..23T Tavecchio, F., Maraschi, L., Sambruna, R. M., et al. 2000, , 544, L23. doi:10.1086/317292 [Taylor(2005)]2005ASPC..347...29T Taylor, M. B. 2005, Astronomical Data Analysis Software and Systems XIV, 347, 29 [van Haarlem et al.(2013)]2013A A...556A...2V van Haarlem, M. P., Wise, M. W., Gunst, A. W., et al. 2013, , 556, A2. doi:10.1051/0004-6361/201220873 [van Weeren et al.(2019)]2019SSRv..215...16V van Weeren, R. J., de Gasperin, F., Akamatsu, H., et al. 2019, , 215, 16. doi:10.1007/s11214-019-0584-z [Voges et al.(1999)]1999A A...349..389V Voges, W., Aschenbach, B., Boller, T., et al. 1999, , 349, 389. doi:10.48550/arXiv.astro-ph/9909315 [Worrall & Birkinshaw(1994)]1994ApJ...427..134W Worrall, D. M. & Birkinshaw, M. 1994, , 427, 134. doi:10.1086/174126 [Worrall(2002)]2002NewAR..46..121W Worrall, D. M. 2002, , 46, 121. doi:10.1016/S1387-6473(01)00167-1 [Worrall(2009)]2009A ARv..17....1W Worrall, D. M. 2009, , 17, 1. doi:10.1007/s00159-008-0016-7 [Wu et al.(1999)]1999ApJ...524...22W Wu, X.-P., Xue, Y.-J., & Fang, L.-Z. 1999, , 524, 22. doi:10.1086/307791 [Xue & Wu(2000)]2000ApJ...538...65X Xue, Y.-J. & Wu, X.-P. 2000, , 538, 65. doi:10.1086/309116 [Zuther et al.(2012)]2012A A...543A..57Z Zuther, J., Fischer, S., & Eckart, A. 2012, , 543, A57. 
doi:10.1051/0004-6361/201118200 § RADIO MAPS In this appendix we report the available radio images for the sources considered in this work, that is, the 74 MHz VLSSR, 145 MHz LOFAR, 150 MHz GMRT TGSS, 1.4 GHz NVSS, and 3 GHz VLASS maps. The radio maps for each source are presented in Fig. <ref>, where we overplot white dashed ellipses on the maps indicating the different radio structures, generally the two radio lobes (indicated with A and B) and the radio core (indicated with N). When no radio structure appears discernible, only one ellipse (indicated with A) marks the bulk emission. In Table <ref> we report the specific flux estimates (in mJy) for the various structures observed in the radio maps. For each specific flux estimate we report an error that includes both the statistical and the systematic uncertainty, the latter ranging between 10% and 15% of the specific flux <cit.>.
http://arxiv.org/abs/2307.01215v1
20230701041102
Functional Donoho-Stark Approximate Support Uncertainty Principle
[ "K. Mahesh Krishna" ]
math.FA
[ "math.FA", "cs.IT", "math.IT", "42C15, 46B03, 46B04" ]
FUNCTIONAL DONOHO-STARK APPROXIMATE SUPPORT UNCERTAINTY PRINCIPLE K. MAHESH KRISHNA Post Doctoral Fellow Statistics and Mathematics Unit Indian Statistical Institute, Bangalore Centre Karnataka 560 059, India Email: [email protected] Date: August 1, 2023 Abstract: Let ({f_j}_j=1^n, {τ_j}_j=1^n) and ({g_k}_k=1^n, {ω_k}_k=1^n) be two p-orthonormal bases for a finite dimensional Banach space 𝒳. If x ∈𝒳∖{0} is such that θ_fx is ε-supported on M⊆{1,…, n} w.r.t. p-norm and θ_gx is δ-supported on N⊆{1,…, n} w.r.t. p-norm, then we show that o(M)^1/po(N)^1/q≥1/max_1≤ j,k≤ n|f_j(ω_k) |max{1-ε-δ, 0}, o(M)^1/qo(N)^1/p≥1/max_1≤ j,k≤ n|g_k(τ_j) |max{1-ε-δ, 0}, where θ_f: 𝒳∋ x ↦ (f_j(x) )_j=1^n ∈ℓ^p([n]); θ_g: 𝒳∋ x ↦ (g_k(x) )_k=1^n ∈ℓ^p([n]) and q is the conjugate index of p. We call Inequalities (<ref>) and (<ref>) the Functional Donoho-Stark Approximate Support Uncertainty Principle. Inequalities (<ref>) and (<ref>) improve the finite approximate support uncertainty principle obtained by Donoho and Stark [SIAM J. Appl. Math., 1989]. Keywords: Uncertainty Principle, Orthonormal Basis, Hilbert space, Banach space. Mathematics Subject Classification (2020): 42C15, 46B03, 46B04. § INTRODUCTION Let 0≤ε <1. Recall that a function f ∈ℒ^2 (ℝ^d) is said to be ε-supported on a measurable subset E⊆ℝ^d (also known as ε-approximately supported as well as ε-essentially supported) <cit.> if (∫_E^c |f(x)|^2 dx )^1/2≤ε(∫_ℝ^d |f(x)|^2 dx)^1/2. Let d ∈ℕ and ℱ:ℒ^2 (ℝ^d) →ℒ^2 (ℝ^d) be the unitary Fourier transform obtained by uniquely extending the bounded linear operator ℱ:ℒ^1 (ℝ^d)∩ℒ^2 (ℝ^d) ∋ f ↦ℱf∈ C_0(ℝ^d); ℱf: ℝ^d ∋ξ↦ℱf(ξ) ≔∫_ℝ^d f(x)e^-2π i ⟨ x, ξ⟩ dx ∈ℂ. In 1989, Donoho and Stark derived the following uncertainty principle on the approximate supports of a function and its Fourier transform <cit.>. <cit.> (Donoho-Stark Approximate Support Uncertainty Principle) If f ∈ℒ^2 (ℝ^d)∖{0} is ε-supported on a measurable subset E⊆ℝ^d and ℱf is δ-supported on a measurable subset F⊆ℝ^d, then m(E)m(F)≥ (1-ε-δ)^2. The ultimate result in <cit.> is the finite dimensional Heisenberg uncertainty principle, known today as the Donoho-Stark uncertainty principle. It is then natural to seek a finite dimensional version of Theorem <ref>. For this, first one needs the notion of approximate support in finite dimensions. Donoho and Stark defined this notion as follows. For h ∈ℂ^d, let ‖h‖_0 be the number of nonzero entries in h. Let ℱ: ℂ^d →ℂ^d be the Fourier transform. Given a subset M⊆{1, …, n}, the number of elements in M is denoted by o(M). <cit.> Let 0≤ε <1. A vector (a_j)_j=1^d∈ℂ^d is said to be ε-supported on a subset M⊆{1,…, d} if (∑_j∈ M^c|a_j|^2)^1/2≤ε(∑_j=1^d|a_j|^2)^1/2. The finite dimensional version of Theorem <ref> then reads as follows. <cit.> (Finite Donoho-Stark Approximate Support Uncertainty Principle) If h ∈ℂ^d∖{0} is ε-supported on M⊆{1,…, d} and ℱh is δ-supported on N⊆{1,…, d}, then o(M)o(N)≥ d (1-ε-δ)^2. In 1990, Smith <cit.> generalized Theorem <ref> to Fourier transforms defined on locally compact abelian groups. Recently, a Banach space version of the finite Donoho-Stark uncertainty principle has been derived in <cit.>. Therefore we seek a Banach space version of Theorem <ref>. This we obtain in this paper. § FUNCTIONAL DONOHO-STARK APPROXIMATE SUPPORT UNCERTAINTY PRINCIPLE In this paper, 𝕂 denotes ℂ or ℝ and 𝒳 denotes a finite dimensional Banach space over 𝕂. The identity operator on 𝒳 is denoted by I_𝒳. The dual of 𝒳 is denoted by 𝒳^*. Whenever 1<p<∞, q denotes the conjugate index of p.
For d ∈ℕ, the standard finite dimensional Banach space 𝕂^d over 𝕂 equipped with standard ·_p norm is denoted by ℓ^p([d]). Canonical basis for 𝕂^d is denoted by {e_j}_j=1^d and {ζ_j}_j=1^d be the coordinate functionals associated with {e_j}_j=1^d. <cit.> Let 𝒳 be a finite dimensional Banach space over 𝕂. Let {τ_j}_j=1^n be a basis for 𝒳 and let {f_j}_j=1^n be the coordinate functionals associated with {τ_j}_j=1^n. The pair ({f_j}_j=1^n, {τ_j}_j=1^n) is said to be a p-orthonormal basis (1<p <∞) for 𝒳 if the following conditions hold. * f_j=τ_j=1 for all 1≤ j≤ n. * For every (a_j)_j=1^n ∈𝕂^n, ∑_j=1^na_jτ_j =(∑_j=1^n|a_j|^p)^1/p. Given a p-orthonormal basis ({f_j}_j=1^n, {τ_j}_j=1^n) for 𝒳, we get the following two invertible isometries: θ_f: 𝒳∋ x ↦ (f_j(x))_j=1^n ∈ℓ^p([n]), θ_τ :ℓ^p([n])∋ (a_j)_j=1^n ↦∑_j=1^na_j τ_j ∈𝒳. Then we have the following proposition. Let ({f_j}_j=1^n, {τ_j}_j=1^n) be a p-orthonormal basis for 𝒳. Then * θ_f is an invertible isometry. * θ_τ is an invertible isometry. * θ_τθ_f=I_𝒳. It is natural to guess the following version of Definition <ref> for ℓ^p([n]). Let 0≤ε <1. A vector (a_j)_j=1^n∈ℓ^p([n]) is said to be ε-supported on a subset M⊆{1.…, n} w.r.t. p-norm if (∑_j∈ M^c|a_j|^p)^1/p≤ε(∑_j=1^n|a_j|^p)^1/p. With the above definition we have following theorem. (Functional Donoho-Stark Approximate Support Uncertainty Principle) Let ({f_j}_j=1^n, {τ_j}_j=1^n) and ({g_k}_k=1^n, {ω_k}_k=1^n) be two p-orthonormal bases for a finite dimensional Banach space 𝒳. If x ∈𝒳∖{0} is such that θ_fx is ε-supported on M⊆{1,…, n} w.r.t. p-norm and θ_gx is δ-supported on N⊆{1,…, n} w.r.t. p-norm, then o(M)^1/po(N)^1/q≥1/max_1≤ j,k≤ n|f_j(ω_k) |max{1-ε-δ, 0}, o(M)^1/qo(N)^1/p≥1/max_1≤ j,k≤ n|g_k(τ_j) |max{1-ε-δ,0}. For S⊆{1, …, n}, define P_S: ℓ^p([n]) ∋ (a_j)_j=1^n ↦∑_j∈ S a_j e _j ∈ℓ^p([n]) be the canonical projection onto the coordinates indexed by S. Now define V P_Mθ_f θ_ω P_N: ℓ^p([n]) →ℓ^p([n]). Then for z ∈ℓ^p([n]), Vz^p= P_Mθ_f θ_ω P_Nz^p= P_Mθ_f θ_ω P_N(∑_k=1^nζ_k(z)e_k)^p= P_Mθ_f θ_ω(∑_k=1^nζ_k(z)P_Ne_k)^p = P_Mθ_f θ_ω(∑_k∈ Nζ_k(z)e_k)^p= P_Mθ_f (∑_k∈ Nζ_k(z)θ_ω e_k)^p= P_Mθ_f (∑_k∈ Nζ_k(z)ω_k)^p =∑_k∈ Nζ_k(z)P_Mθ_f ω_k^p=∑_k∈ Nζ_k(z)P_M(∑_j=1^nf_j(ω_k)e_j)^p=∑_k∈ Nζ_k(z)∑_j=1^nf_j(ω_k)P_Me_j^p =∑_k∈ Nζ_k(z)∑_j∈ Mf_j(ω_k)e_j^p=∑_j∈ M(∑_k∈ Nζ_k(z)f_j(ω_k))e_j^p=∑_j∈ M|∑_k∈ Nζ_k(z)f_j(ω_k)|^p ≤∑_j∈ M(∑_k∈ N| ζ_k(z)f_j(ω_k)|)^p≤(max_1≤ j,k≤ n|f_j(ω_k) |)^p∑_j∈ M(∑_k∈ N| ζ_k(z)|)^p =(max_1≤ j,k≤ n|f_j(ω_k) |)^po(M)(∑_k∈ N| ζ_k(z)|)^p≤(max_1≤ j,k≤ n|f_j(ω_k) |)^po(M)(∑_k∈ N| ζ_k(z)|^p)^p/p(∑_k∈ N1^q)^p/q =(max_1≤ j,k≤ n|f_j(ω_k) |)^po(M)(∑_k∈ N| ζ_k(z)|^p)^p/po(N)^p/q≤(max_1≤ j,k≤ n|f_j(ω_k) |)^po(M)(∑_k=1^n| ζ_k(z)|^p)^p/po(N)^p/q =(max_1≤ j,k≤ n|f_j(ω_k) |)^po(M)z^po(N)^p/q. Therefore V≤(max_1≤ j,k≤ n|f_j(ω_k) |)o(M)^1/po(N)^1/q. We now wish to find a lower bound on the operator norm of V. For x∈𝒳, we find θ_fx-Vθ_gx ≤θ_fx-P_Mθ_fx+P_Mθ_fx-Vθ_gx≤εθ_fx+P_Mθ_fx-Vθ_gx =εθ_fx+P_Mθ_fx-P_Mθ_f θ_ω P_Nθ_gx=εθ_fx+P_Mθ_f(x-θ_ω P_Nθ_gx) ≤εθ_fx+x-θ_ω P_Nθ_gx= εθ_fx+θ_ωθ_gx-θ_ω P_Nθ_gx =εθ_fx+θ_ω (θ_gx- P_Nθ_gx)=εθ_fx+θ_gx- P_Nθ_gx ≤εθ_fx+δθ_gx=εx+δx=(ε +δ)x. Using triangle inequality, we then get x-Vθ_gx= θ_fx-Vθ_gx≤θ_fx-Vθ_gx≤ (ε +δ)x, ∀ x ∈𝒳. Since θ_g is an invertible isometry, (1-ε-δ) x≤Vθ_gx, ∀ x ∈𝒳 (1-ε-δ) y=(1-ε-δ) θ_g^-1y≤Vy, ∀ y ∈ℓ^p([n]), i.e., max{1-ε-δ,0}≤V. Using Inequalities (<ref>) and (<ref>) we get max{1-ε-δ,0}≤(max_1≤ j,k≤ n|f_j(ω_k) |)o(M)^1/po(N)^1/q. To prove second inequality, define W P_Nθ_g θ_τ P_M: ℓ^p([n]) →ℓ^p([n]). 
Then for z ∈ℓ^p([n]), Wz^p= P_Nθ_g θ_τ P_Mz^p= P_Nθ_g θ_τ P_M(∑_j=1^nζ_j(z)e_j)^p= P_Nθ_g θ_τ(∑_j=1^nζ_j(z)P_Me_j)^p = P_Nθ_g θ_τ(∑_j∈ Mζ_j(z)e_j)^p= P_Nθ_g (∑_j∈ Mζ_j(z)θ_τ e_j)^p= P_Nθ_g (∑_j∈ Mζ_j(z)τ_j)^p =∑_j∈ Mζ_j(z)P_Nθ_gτ_j^p=∑_j∈ Mζ_j(z)P_N(∑_k=1^ng_k(τ_j)e_k)^p=∑_j∈ Mζ_j(z)∑_k=1^ng_k(τ_j)P_Ne_k^p =∑_j∈ Mζ_j(z)∑_k∈ Ng_k(τ_j)e_k^p=∑_k∈ N(∑_j∈ Mζ_j(z)g_k(τ_j))e_k^p= ∑_k∈ N|∑_j∈ Mζ_j(z)g_k(τ_j)|^p ≤∑_k∈ N(∑_j∈ M|ζ_j(z)g_k(τ_j)|)^p≤(max_1≤ j,k≤ n|g_k(τ_j) |)^p∑_k∈ N(∑_j∈ M|ζ_j(z)|)^p =(max_1≤ j,k≤ n|g_k(τ_j) |)^po(N)(∑_j∈ M|ζ_j(z)|)^p≤(max_1≤ j,k≤ n|g_k(τ_j) |)^po(N)(∑_j∈ M|ζ_j(z)|^p)^p/p(∑_j∈ M1^q)^p/q =(max_1≤ j,k≤ n|g_k(τ_j) |)^po(N)(∑_j∈ M|ζ_j(z)|^p)^p/po(M)^p/q≤(max_1≤ j,k≤ n|g_k(τ_j) |)^po(N)(∑_j=1^n|ζ_j(z)|^p)^p/po(M)^p/q =(max_1≤ j,k≤ n|g_k(τ_j) |)^po(N)z^po(M)^p/q. Therefore W≤(max_1≤ j,k≤ n|g_k(τ_j) |)o(M)^1/qo(N)^1/p. Now for x∈𝒳, θ_gx-Wθ_fx ≤θ_gx-P_Nθ_gx+P_Nθ_gx-Wθ_fx≤δθ_gx+P_Nθ_gx-Wθ_fx =δθ_gx+P_Nθ_gx-P_Nθ_g θ_τ P_Mθ_fx=δθ_gx+P_Nθ_g(x-θ_τ P_Mθ_fx) ≤δθ_gx+x-θ_τ P_Mθ_fx= δθ_gx+θ_τθ_fx-θ_τ P_Mθ_fx =δθ_gx+θ_τ (θ_fx- P_Mθ_fx)=δθ_gx+θ_fx- P_Mθ_fx ≤δθ_gx+εθ_fx=δx+εx=(δ+ε)x. Using triangle inequality and the fact that θ_f is an invertible isometry we then get max{1-ε-δ,0}≤W. Using Inequalities (<ref>) and (<ref>) we get max{1-ε-δ, 0}≤(max_1≤ j,k≤ n|g_k(τ_j) |)o(M)^1/qo(N)^1/p. Let {τ_j}_j=1^n and {ω_j}_j=1^n be two orthonormal bases for a finite dimensional Hilbert space ℋ. Set θ_τ: ℋ∋ h ↦ (⟨ h, τ_j⟩)_j=1^n ∈ℂ^n, θ_ω: ℋ∋ h ↦ (⟨ h, ω_j⟩)_j=1^n ∈ℂ^n. If h ∈ℋ∖{0} is such that θ_τ h is ε-supported on M⊆{1,…, n} and θ_ω h is δ-supported on N⊆{1,…, n}, then o(M)o(N)≥1/max_1≤ j,k≤ n|⟨τ_j, ω_k ⟩ |^2 (1-ε-δ)^2. In particular, Theorem <ref> follows from Theorem <ref>. Define f_j:ℋ∋ h ↦⟨ h, τ_j ⟩∈𝕂; g_j:ℋ∋ h ↦⟨ h, ω_j ⟩∈𝕂, ∀ 1≤ j≤ n. Then p=q=2 and |f_j(ω_k)|=|⟨ω_k, τ_j ⟩ | for all 1≤ j, k ≤ n. Theorem <ref> follows by taking {τ_j}_j=1^n as the standard basis and {ω_j}_j=1^n as the Fourier basis for ℂ^n. Let ({f_j}_j=1^n, {τ_j}_j=1^n) and ({g_k}_k=1^n, {ω_k}_k=1^n) be two p-orthonormal bases for a finite dimensional Banach space 𝒳. Let x ∈𝒳∖{0} is such that θ_fx is ε-supported on M⊆{1,…, n} w.r.t. p-norm and θ_gx is δ-supported on N⊆{1,…, n} w.r.t. p-norm. If ε+δ≤ 1, then o(M)^1/po(N)^1/q≥1/max_1≤ j,k≤ n|f_j(ω_k) |(1-ε-δ), o(M)^1/qo(N)^1/p≥1/max_1≤ j,k≤ n|g_k(τ_j) |(1-ε-δ). Let ({f_j}_j=1^n, {τ_j}_j=1^n) and ({g_k}_k=1^n, {ω_k}_k=1^n) be two p-orthonormal bases for a finite dimensional Banach space 𝒳. If x ∈𝒳∖{0} is such that θ_fx is 0-supported on M⊆{1,…, n} w.r.t. p-norm and θ_gx is 0-supported on N⊆{1,…, n} w.r.t. p-norm (saying differently, θ_fx is supported on M and θ_gx is supported on N), then o(M)^1/po(N)^1/q≥1/max_1≤ j,k≤ n|f_j(ω_k) |, o(M)^1/qo(N)^1/p≥1/max_1≤ j,k≤ n|g_k(τ_j) |. Corollary <ref> is not the Theorem 2.3 in <cit.> (it is a particular case) because Theorem 2.3 in <cit.> is derived for p-Schauder frames which is general than p-orthonormal bases. Theorem <ref> promotes the following question. Given p and a Banach space 𝒳 of dimension n, for which pairs of p-orthonormal bases ({f_j}_j=1^n, {τ_j}_j=1^n), ({g_k}_k=1^n, {ω_k}_k=1^n) for 𝒳, subsets M,N and ε, δ, we have equality in Inequalities (<ref>) and (<ref>)? Observe that we used 1<p<∞ in the proof of Theorem <ref>. Therefore we have the following problem. Whether there are Functional Donoho-Stark Approximate Support Uncertainty Principle (versions of Theorem <ref>) for 1-orthonormal bases and ∞-orthonormal bases? 
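The finite dimensional Hilbert space case above is easy to probe numerically. The following sketch is only an illustration (not part of any proof): it computes the sizes of the smallest ε- and δ-supports of a random vector and of its unitary Fourier transform in ℂ^n, and checks the bound o(M)o(N) ≥ n(1-ε-δ)^2 obtained from max_1≤ j,k≤ n|⟨τ_j, ω_k ⟩| = 1/√(n) for the standard and Fourier bases.

```python
# Illustrative numerical check of the Hilbert-space corollary for the standard
# and Fourier bases of C^n: o(M) o(N) >= n (1 - eps - delta)^2.
import numpy as np

def smallest_support(v, eps):
    """Size of the smallest index set on which v is eps-supported (2-norm)."""
    order = np.argsort(-np.abs(v))        # indices sorted by decreasing magnitude
    total = np.linalg.norm(v)
    for k in range(len(v) + 1):
        if np.linalg.norm(v[order[k:]]) <= eps * total:
            return k
    return len(v)

rng = np.random.default_rng(0)
n = 64
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
xf = np.fft.fft(x) / np.sqrt(n)           # unitary Fourier transform of x

eps, delta = 0.1, 0.1
oM, oN = smallest_support(x, eps), smallest_support(xf, delta)
print(oM * oN, ">=", n * (1 - eps - delta) ** 2)
```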
Keeping ℓ^p-spaces for 0<p<1 as a model space, equipped with ‖(a_j)_j=1^n‖_p ≔∑_j=1^n|a_j|^p, ∀ (a_j)_j=1^n ∈𝕂^n, we set the following definitions. Let 𝒳 be a vector space over 𝕂. We say that 𝒳 is a disc-Banach space if there exists a map, called a disc-norm, ‖·‖:𝒳→ [0, ∞) satisfying the following conditions. * If x ∈𝒳 is such that ‖x‖=0, then x=0. * ‖x+y‖≤‖x‖+‖y‖ for all x, y ∈𝒳. * ‖λ x‖≤ |λ|‖x‖ for all x ∈𝒳 and for all λ∈𝕂 with |λ|≥ 1. * ‖λ x‖≥ |λ|‖x‖ for all x ∈𝒳 and for all λ∈𝕂 with |λ|≤ 1. * 𝒳 is complete w.r.t. the metric d(x, y)≔‖x-y‖ for all x, y ∈𝒳. Let 𝒳 be a finite dimensional disc-Banach space over 𝕂. Let {τ_j}_j=1^n be a basis for 𝒳 and let {f_j}_j=1^n be the coordinate functionals associated with {τ_j}_j=1^n. The pair ({f_j}_j=1^n, {τ_j}_j=1^n) is said to be a p-orthonormal basis (0<p<1) for 𝒳 if the following conditions hold. * ‖f_j‖=‖τ_j‖=1 for all 1≤ j≤ n. * For every (a_j)_j=1^n ∈𝕂^n, ‖∑_j=1^na_jτ_j‖ =∑_j=1^n|a_j|^p. Then we also have the following question. Whether there are versions of Theorem <ref> for p-orthonormal bases with 0<p<1? We wish to mention that in <cit.> the functional uncertainty principle was derived for p-Schauder frames, which are more general than p-orthonormal bases. Thus it is desirable to derive Theorem <ref> or a variation of it for p-Schauder frames, which we are currently unable to do. We end by asking the following curious question, whose motivation is the recently proved Balian-Low theorem (which is also an uncertainty principle) for Gabor systems in finite dimensional Hilbert spaces <cit.>. Whether there is a Functional Balian-Low Theorem (which we like to call the Functional Balian-Low-Lammers-Stampe-Nitzan-Olsen Theorem) for Gabor-Schauder systems in finite dimensional Banach spaces (the Gabor-Schauder system is as defined in <cit.>)?
http://arxiv.org/abs/2307.01644v1
20230704105731
Insert-expansions for Tool-enabled Conversational Agents
[ "Andreas Göldi", "Roman Rietsche" ]
cs.HC
[ "cs.HC", "cs.AI", "cs.CL", "H.5" ]
Insert-expansions for Tool-enabled Conversational Agents Andreas Göldi, Roman Rietsche This paper delves into an advanced implementation of Chain-of-Thought-Prompting in Large Language Models, focusing on the use of tools (or "plug-ins") within the explicit reasoning paths generated by this prompting method. We find that tool-enabled conversational agents often become sidetracked, as additional context from tools like search engines or calculators diverts from original user intents. To address this, we explore a concept wherein the user becomes the tool, providing necessary details and refining their requests. Through Conversation Analysis, we characterize this interaction as insert-expansion: an intermediary conversation designed to facilitate the preferred response. We explore possibilities arising from this 'user-as-a-tool' approach in two empirical studies using direct comparison, and find benefits in the recommendation domain. § INTRODUCTION Human language is a means both of communication and thought <cit.>. In recent years, it has become much more feasible to process written language computationally. This has been achieved by mimicry of the human brain. In so-called deep learning <cit.>, multiple layers of artificial neurons are used to approximate functions <cit.>, for example, text to labels. Advanced deep learning architecture <cit.>, a focus on generating text <cit.>, and scaling up models <cit.> have led to human-like performance on natural language tasks. Aligning language outputs with human expectations <cit.> and chat capabilities make the current state-of-the-art competitive with human-like performance in tasks outside of traditional natural language processing, e.g., in test-taking <cit.>. These capabilities have been appreciated by both users communicating with models in their daily lives and researchers trying to apply and advance them <cit.>. Since human expectations about speech have become prominent, the paramount mimicry has shifted from biological to social exemplars. Recent developments indicate that yet another source of inspiration is about to be included, namely explicit human cognition and tool-use. These developments have been kick-started by prompting models to think through their assigned tasks step-by-step <cit.>. Reducing complex tasks into simpler ones has been advocated at least since Cartesius <cit.>. By chaining simple tasks together, i.e., sequentially inputting outputs from previous steps to models, language models can solve more complex problems more reliably <cit.>. In this way, it is possible to call a language model multiple times until a final answer is returned[This is how langchain agents operate: <https://python.langchain.com/docs/modules/agents/>]. Humans can think using writing <cit.>, and language models imitate reasoning by generating written text, step-by-step.
Since this paradigm allows intermediate steps, the idea has arisen to insert calls to tools, such as search engines, calculators, or python functions <cit.>. Thereby, the language model is simulating the behavior of a computer user, and can therefore incorporate information accessible via these tools into its final answer. In this way, tool-enabled language models depart from simple function approximators, and become so-called augmented language models <cit.>, with the core human capabilities of reasoning and tool-use <cit.> as the exemplar to be imitated. With social exemplars as a main source of inspiration, chat models have been trained to mimic human speech patterns <cit.>. Now that thought is imitated, less effort needs to be allocated to approximate these surface patterns, since many of them are a result of human reasoning and planning during dialog <cit.>. This means that natural speech patterns may result as a side-effect of more closely imitated reasoning paths. The augmented language models that emulate reasoning are still meant to provide answers, and even impressive reasoning paths will not lead to user satisfaction if they remain hidden, unresponsive, and long-winded. If an answer cannot be given after one or few tool uses, augmented language models will produce intermediate observations that are ever-more divergent from the initial query, since the next step is always prompted by, i.e., conditioned on, the output of the previous steps. In this way, they tend to become side-tracked, and the primary aim, providing a satisfactory answer to the user, may be lost. However, humans sometimes think about what they say. They raise and fulfil or deny each others' expectations; and instead of silently thinking about a final answer, humans will oftentimes probe the interlocutor, by, e.g., checking their understanding, scoping the final answer, or enhancing its appeal. This interactive nature has been extensively studied and formally described using conversation analysis <cit.>. If an appropriate response cannot be given immediately, human speakers tend to insert a new pair of utterances into the conversation, which is supposed to bridge the remaining gap. For example, if someone wants to sell you a souvenir, you will insert a question ascertaining its price before deciding on your final answer. This pattern often relies on explicit reasoning carried out in-between dialog utterances. Augmented language models already talk to themselves and to tools. There have also been recent developments which insert intermediate steps to directly ask users to provide context or check formatting for tool inputs[<https://python.langchain.com/docs/modules/agents/tools/how_to/human_approval>, <https://python.langchain.com/docs/modules/agents/tools/integrations/human_tools>]. This is a potentially powerful mechanism if used in regular chatbot interaction, since for one, it may help to avoid side-tracking in tool-enabled conversational agents because using dialog, common ground can be more easily established between interlocutors, even if one of them is a chatbot <cit.>. Furthermore, it replicates exactly the discussed feature of human talk-in-interaction, namely that of probing interlocutors to support fulfilling or reshaping the expectations they raised in their initial main utterance. In this paper, we will therefore discuss how insert expansions may be used in tool-enabled conversational agents, and how their impact may be studied. 
For this, we present a paradigm of direct comparison, as well as data from one pilot and two empirical studies based on it. § BACKGROUND Before delving into the study materials and data, it pays to assimilate a better understanding of the nature of insert expansions, and to give a short overview of the nascent field of augmented language models. §.§ Sequence Organisation of Dialog Insert-expansions are one of several distinct building blocks of natural dialog or talk-in-interaction that can be grouped not based on topicality but on what is being done with the utterances belonging to the different blocks <cit.>. Talk-in-interaction, such as with a conversational agent, is successful if to every raised utterance, one of four responses is given: The nominally preferred response, such as agreement to an invitation; the dispreferred response, often a rejection; a temporizing response like "I may be there"; or a blocking response, such as "I have plans already" or a counter in the sense of "What about you?". This base pair of utterances is generally known as "adjacency pair". In natural dialog, such adjacency pairs are often extended by other pairs that are inserted before, between, or after the base pair, which determines the main action of a sequence. Pre-extensions either generically draw attention or gauge interest in the action performed with the specific intended base pair. If drawing attention fails or interest is low, the intended second base pair part may never be uttered. Multiple exchanges can occur before the first part of a base pair. A special kind of pre-extension is the pre-pre-extension, e.g. "Can I ask you something?" - "Yes?", which always precedes other pre-extensions, such as "You now know something about dialogues, don't you?" - "I guess I do". Normally, while pre-extensions are used by the initiator, insert-expansions are used by the receiver, except in multi-turn inserts. They are meant to get the conversation from the raised expectation to its fulfillment. So-called post-first insert expansions serve to recover from misunderstandings and involve acknowledgement of the repair or a restatement of the first part of the base pair. So here, we can check understanding, clarify intent, or gather information. Pre-second inserts, on the other hand, ask for information required to choose between the four options for a second base pair part, for example the time and day for an invitation. This may include scoping the response but also enhancing the appeal (or at least managing expectations). After the second pair part has been uttered, a subdialog may continue with a follow-up. Very often, this happens minimally with so-called sequence closing thirds, such as "Great". Others are more wordy, for example if receivers tack on qualifications like "Just as friends, right?". Especially if the preferred response is not given, initiators may choose to challenge the response or rework the first base pair part to try again. Other options include appending closings, such as "Bye", additional talk on the topic, a topic-shift, or the initiation of a new sequence. In natural dialog, these new sequences are often of the same type by the same initiator, such as in question-series, reciprocating the main action, or following a larger action plan for the dialog. This description of sequence organization based on Schegloff's work on Conversation Analysis <cit.> has empirical support and seems to be language-universal <cit.>. 
Properties of sequence organization largely generalize to text-based chats as well <cit.>, even though conversations may follow slightly different patterns, which may be investigated using digital conversation analysis <cit.>. However, humanizing chatbots to allow for more natural interaction is a popular way of striving for less friction in human-chatbot interaction, and bears major benefits <cit.>. One way of achieving this is to explicitly anthropomorphize features of the interaction <cit.>. Aligning digital with natural dialog patterns fits nicely in this tradition. Since in most applications, the users steer conversation, adjacency pair expansions of the second speaker are of primary interest for augmented language models, especially if used as conversational agents. Insert-expansions are such expansions, and potentially reduce friction in interaction as well as divergence during reasoning. They either support the first base pair part of an adjacency pair, or aim to bring about the second. Instances relevant to text-based conversations include clarifying intent, scoping responses, and enhancing appeal. §.§ Augmented Language Models Language models can generate textual inputs to arbitrary functions, which themselves may produce other text. This output can then be used as context to generate the next step. If the tool is one of information retrieval, for example, it can produce a final, grounded answer to a user query. This is how tool-augmentation of language models functions at its most basic <cit.>. Unaugmented Large Language Models display so-called formal linguistic competence, i.e., they can handle language in itself. Where they are still lacking is in functional linguistic competence, which means that they cannot do everything humans do with language. This includes formal reasoning like logic or math, using world knowledge, situation modeling in long narratives or discourses, and being able to use communicative intent as in pragmatics or establishing common ground <cit.>. Popular early tools address these issues, and therefore include information retrieval from documents, search engines, and code interpreters including calculators <cit.>. Smaller, more specialized or more easily updated models can also be used as tools <cit.>. Considering the modular make-up of the human brain <cit.>, this indicates that tooling may not only provide a capability for imitating mental activities that are deliberative in humans, but one for imitating functional brain architecture more generally, thus reaching back to a biological exemplar again. The main advantage of augmented language models, however, is not that they are good imitators of actual biological, social, and cognitive processes. They may be. But their usefulness extends to economically more interesting opportunities as well, namely to a deepening of automation. For example, augmented language models have been applied to chemical tasks such as drug discovery <cit.>. Such opportunities incentivize the further development of augmented language models. Consequently, we can expect them to become more common. This is reinforced by the fact that the setup of tools is already being automated <cit.>, as is the chaining of model calls <cit.>. As many tools are simply python functions or calls to well-documented APIs, the code-generating capabilities of even unaugmented language models <cit.> will likely enable future augmented language models to extend their capabilities by themselves. 
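To make the basic augmentation loop described above concrete, the following minimal sketch chains model calls and tool calls until a final answer is produced. It is an illustration only: call_llm stands in for any chat-completion API, and the two tools are toy stand-ins for a calculator and a retrieval system.

```python
# Minimal sketch (not a specific library) of a tool-augmented generation loop:
# the model either requests a tool or returns a final answer, and each
# observation is appended to the context conditioning the next step.
from typing import Callable, Dict

def calculator(expression: str) -> str:
    return str(eval(expression, {"__builtins__": {}}, {}))  # toy calculator tool

def search(query: str) -> str:
    return f"(stub) top result for: {query}"                 # stand-in retrieval tool

TOOLS: Dict[str, Callable[[str], str]] = {"calculator": calculator, "search": search}

def run_agent(user_query: str, call_llm: Callable[[str], str], max_steps: int = 5) -> str:
    context = f"User: {user_query}"
    for _ in range(max_steps):
        step = call_llm(context)   # expected to emit "TOOL <name>: <input>" or "FINAL: <answer>"
        if step.startswith("FINAL:"):
            return step[len("FINAL:"):].strip()
        name, _, tool_input = step[len("TOOL"):].strip().partition(":")
        observation = TOOLS.get(name.strip(), lambda _x: "unknown tool")(tool_input.strip())
        context += f"\n{step}\nObservation: {observation}"
    return "No final answer within the step budget."
```

Because each step is conditioned on all previous observations, a misleading observation early in the loop propagates forward, which is precisely the side-tracking problem that insert-expansions are meant to mitigate.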
While these advances are spearheaded by closed-source state-of-the-art models, augmentation is feasible with open-source models as well <cit.>, even if some suggest better performance by first generating instructions on how to use the tools with such models <cit.>. Such instructions or function metadata may be embedded as well to make the process more efficient <cit.>. Besides automating the augmentation process, and tweaking the efficiency of augmented language models, much focus is also put into keeping models running for longer, so that they can solve more complex tasks. More long-lasting runs may be enabled by extending regular chain-of-thought prompting by plan-and-solve prompting, which can generate more structured reasoning paths that otherwise would have needed to be hard-coded <cit.>. For these longer-running calls, decoupling observations from reasoning may serve to lessen the impact of divergence from the user intent due to misfitting observations <cit.>. However, if supervision by humans is feasible, insert expansion may add additional benefits even here. In addition to cognitive automation, tool-enabled language models may also serve to more closely align with social exemplars. As we have seen in Section <ref>, humans think and talk in iterative fashion to one another. This interspersing of mutual information gathering and thinking is not idle; it serves to achieve communicative success <cit.>. Besides the benefits of humanizing chatbots <cit.>, imitating this iterative fashion by introducing insert-expansions to tool-enabled conversational agents therefore is expected to have additional inherent value in terms of improved chat outcomes. This short overview should give an idea of why we believe it is important to study augmented language models and tool-enabled conversational agents, and provide the necessary background to understand our empirical approach. § METHODOLOGY There have been efforts to make benchmarks available for tool-augmented language models <cit.>. However, because insert expansions require user input, they are not readily automated. Furthermore, as insert expansions help to shape final responses, these responses will depend on the idiosyncratic user input and cannot be easily standardized. This is why we decided to use human evaluations instead of benchmarking for this paper. For this, we used fluent English speakers from prolific[<prolific.com>]. In our pilot, we recruited n=10 participants (age m=31.50, SD=7.53; 60% female); in Study 1, n=71 (age m=31.45, SD=10.05; 27% female); and in Study 2, n=36[In Study 2, we excluded two additional participants, one because their ratings had no variance while contradicting their qualitative feedback, indicating a misunderstanding in the evaluation; and the other because one of the conversational agents unaccountably asked them for their name, which the participant reported had put them off using it.]. Sample sizes were based on power estimations: 80% power for a medium effect size in a one-sample, one-tailed t-test for Study 1, and 90% for a large effect for Study 2, which was an attempt at replication with a more suitable scenario. We will now briefly discuss the study artifact, namely a chat interface to two differently configured conversational agents, then discuss how direct comparison of the two works using this artifact, and finally, we will discuss the pilot, Study 1, and Study 2 in sequence.
§.§ Artifact To explore whether insert expansions would have an impact on tool-enabled chatbot interaction, we created two augmented language model conversational agents, one vanilla, and one with additional user-as-a-tool tools. We are going to call them vanilla and enabled bot from now on. The agents used the python library langchain to prompt the OpenAI model gpt-3.5-turbo-0301[<https://platform.openai.com/docs/models/model-endpoint-compatibility>]. We used existing tools; in Study 1, the tools could query wikipedia and do simple mathematical operations; in Study 2, an embedding-based PDF-reader was included. The user-as-a-tool tools modified the human-as-a-tool tool[<https://python.langchain.com/docs/modules/agents/tools/integrations/human_tools>] from the langchain library, and steered queries via websocket to the user interface. To prompt insert expansions, we simply changed name and description according to some of the purposes of insert expansions delineated in Section <ref>. See Figure <ref> for the tool definitions. In Figure <ref>, the chat interface is shown before distinguishing between the two bots. Participants were instructed on the specific scenario via the placeholder in the input field. This was to ensure that they had to transfer the scenario to working memory before starting it. Having done so, they sent their initial query to both bots, which then appeared on each side of the screen. This stage is depicted in Figure <ref>. They were instructed to use the two bots and compare them directly. After at least 3 bot messages per bot in Study 1 (2 in Study 2), participants could decide to finish the scenario and evaluate the bots (See evaluation modal in Figure <ref>). §.§ Direct comparison The aim of the studies reported herein was an initial comparison of tool-enabled conversational agents with and without insert-expansion capacity. As there was no prior literature on the specific effect, we opted to assess the differences directly, i.e., by making participants aware of both options and asking them to compare. As presenting the bots sequentially would lead to a repeated conversation, we made them available concurrently. To restrain divergence of use, we kept the scenarios short, with only a few turns. This approach allows for relatively small sample sizes and a simple study design that does not necessitate, e.g., balancing. However, any results are only of a correlational nature, as we did not randomize. We see this as appropriate for this stage of research, which is about guiding an emerging phenomenon, and not yet about establishing causal explanations. For the comparison, we used bipolar rating scales, with the placement (left, right) of the two bots as anchors (See Figure <ref>). We deemed this appropriate as the placement constitutes a definite reference without biasing results by naming the bots. We visually indicated the middle of the scale with a small vertical line; however, we did not label intermediary steps so as to not jeopardize equidistance <cit.>. In Study 1, we allowed no-preference, in Study 2, we used a forced-choice procedure by eliminating the midpoint. The former approach does not presume a preference, and may thus be more appropriate as a measure of spontaneous preference; the latter forces participants to decide, and thus evokes cognition about preferences, which may lead to preference formation even if none was spontaneously available. 
The latter approach therefore captures additional information, which may be useful, for example, if the purpose of a study is to enable managerial decision making, e.g., between two available options. § EMPIRICAL METHODS AND RESULTS §.§ Pilot We used the bot usability scale 15 item version to evaluate both enabled and vanilla bots <cit.>. There was not much difference between the two (m=3.59, SD=0.68 and 3.63, SD=0.25). We therefore decided not to investigate usability directly, as potential differences resulting from the inclusion of insert expansion would likely not show up in the data using our implementation. Besides user testing the function of the bots, evaluation, and database, we also asked pilot participants to provide feedback in open answer format. This included a question on how the study was perceived, how the interface was perceived, and how the two bots differed in their estimation. There was also room to give additional feedback. To assess the binary sentiment on the study and interface (positive or negative), we used a distilbert model <cit.>[<https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english>]. To judge whether vanilla or enabled bots were preferred, we used a Large Language Model[gpt-3.5-turbo-0613 with system prompt "Classify if the right or the left chatbot is preferred. You can only respond with one word, 'left', 'right', 'neutral', or 'unclear', with this exact spelling." Note that this needs 2 tokens because of 'unclear'.]. Of the 10 participants, 70.00% had positive feedback on the study, 90.00% on the interface. Four participants preferred the enabled, two the vanilla bot (2 neutral; 2 unclear). §.§ Study 1 Based on the pilot, we decided to proceed by fixing the reported bugs and replacing the usability scale with potential mediators to better assess which users would prefer the different bots. We conducted this study with n=71 participants (see Section <ref>). §.§.§ Measures In Study 1, we added 7-level bipolar rating scales for direct comparison (See Section <ref>). We were interested in how much control participants experienced; how natural they felt the chats to be; how well their intent was fulfilled in the chats; and how satisfied they were with them. See Table <ref> for items and reliabilities of the constructs. To account for potential mediation, we assessed interindividual difference variables, namely the most prominent personality measure, big 5, using a 15-item scale (BFI-2-XS) <cit.>. As insert expansions exist to smooth interaction by shaping expectations, we also assessed our participants need for cognitive closure (NFC-15) <cit.>. §.§.§ Procedure After a short demographic questionnaire, n=71 participants were given 3 scenarios (see Figure <ref>), rating each before turning to the next (see Figure <ref>). Having completed all scenarios and evaluations, feedback was elicited and interindividual variables were assessed. §.§.§ Feedback To quantify the feedback, we repeated the procedure described for the pilot. Feedback on the study was 69.01% positive (neutral opinions tend to be rated as negative as well, e.g. 
"I have no opinion about the study."), on the interface only 52.11% (negatives include no feedback at all, neutral statements such as "It was fast and responsive, just feel like the loading is too big and the lettering also", but also some on bugs like "It was fine, although sometimes my prompt would trigger a loading animation that the bot would never reply to, so I had to prompt again, which left the loading anim on the screen for one of the bots but not the other, not a big deal." or "It was frusting when it could not listen or answer all my question"). 46.48% of participants preferred the enabled bot (16.90% preferred vanilla; 2.82% neutral, and the rest unclear). While there was a tendency to see the enabled bot [i.e, the left bot] as more personable, opinions differed, for example, one participant stated "The chatbot on the left had a more personal touch to it. The one on the right felt more like a robot and more focused on facts." but another "The one on the left was robot-like and the one on the right was very human". This goes for overall preference too: "Not sure but I felt the first one was more helpful." versus "I think the right one was more intelligent, I mean, it gave better responses". Other participants gave feedback that nicely puts the two options into contrast, such as "Both are good, but I think the left one is more cautious, and the one on the right answers faster.", "The chat on the left was a lot quicker and asked more questions in terms of my desired answer. The chat on the right was more strict to the point.". So, in summary, adding insert-expansion capability does not automatically make conversational agents superior, especially as the vanilla bot is already trained to fulfill social expectations. Even if this imitation is only on the surface, it is apparently enough for many users. §.§.§ Results See Table <ref> for descriptive statistics, Table <ref> for reliabilities of the direct comparison and Section <ref> for the methodology. The measures are not constant across situations, which means retest-reliability could not be assessed. Intra-class correlation (ICC) using average random raters were ICC=.63 for control, ICC=.61 for naturalness, ICC=.34 for intent-effectiveness, and ICC=.47 for satisfaction. Crohnbach's α for a one-factor structure was .96. The distribution of control ratings was normal; of naturalness normal with noticeable but statistically acceptable kurtosis (-0.88); of intent-effectiveness seemingly bimodal, with a population at the extreme end of the enabled bot, however, with the main distribution at an offset towards the vanilla bot; and satisfaction almost uniform, or potentially multimodal as well, meaning normality assumptions were not fulfilled here; see Figure <ref>. The interindividual variables all approximated normal distributions, with expected ranges. Notably, openness, agreeableness, and conscientiousness were relatively high (out of 5) with 3.76 (SD=.79) 3.75 (SD=.87) 3.78 (SD=.71), cutting off the extreme positive tail. In terms of significance, only control in scenario 1 differed between the two bots (the difference in satisfaction for scenario 1 was marginal with p=0.05985, as tested with a Wilcoxon Signed Rank test). Control in scenario 1 is normally distributed; and a difference was observable in a one-sample, one-tailed t-test (t=-1.8494, df=70, p=0.03431). In the other situations, control was not significantly different from 0, although the tendency persists. 
This is because the effect size seems to be small (for this t-test, Cohen's d is d=.22). In the other two scenarios, no significant differences were observed (see Figure <ref>). Overall preference (treating all scales as one factor) is marginally not significant with V=827, p=0.05463 in a Wilcoxon Signed Rank test. Looking at the associations between variables, and especially potential mediators, Table <ref> provides an overview. There is visible clustering in the scenarios for the direct comparison measures. For scenario 2, where the bots were supposed to help with solving a riddle, openness seemed to be important across ratings. However, most associations do not exceed τ=.2. §.§.§ Discussion In summary, the impact of adding insert-expansions to tool-enabled conversational agents on user experience depends on the specific use case of the agent. The described measures varied between scenarios, with ICCs under .5 for intent-effectiveness and satisfaction. There was more consistency for control and naturalness, which should therefore be put into more focus for generalizable statements about the role of adding insert-expansions. In this study, we only observed one significant difference, namely for control in scenario 1 (a recommendation scenario). However, taking into account the open answer feedback, this may have been the result of heterogeneous preferences. More specifically, of bimodal distributions, especially in intent-effectiveness, and potentially in satisfaction, meaning that preference depends either on individual user characteristics or specific breakdowns or successes in the particular chats. This is plausible based on the role of insert expansions in conversational repair and the gentle shaping of expectations <cit.>. Satisfaction may even become observably trimodal in larger samples, as some seem to prefer the enabled bot, some the vanilla bot, while others do not have any preference. This may be especially likely if some conversations failed, be it due to excessive or counterproductive insert expansions, or the lack of insert expansions where they would have been helpful. §.§ Study 2 As the necessity of adding insert-expansions depends on scenarios, we added a new scenario for a follow-up study. We chose: "You want to work on the most important sustainable development goal in the 2022 UN report but do not know which it is. Type your message and hit enter to send to both chatbots." In this scenario, both bots had an information retrieval tool and access to the United Nations annual report of 2022 (which is after the training data of the underlying model ended). They were prompted to ask for a grounded recommendation, as Study 1 indicated recommendation as a relevant domain to introduce insert-expansion capabilities. We reduced the number of insert-expansion prompts for the user-as-a-tool tool in the enabled bot, only retaining the scope_response tool (see Section <ref>). In Table <ref>, the difference between chats with and without insert expansion are illustrated. We conducted this study with n=36 participants (see Section<ref>). §.§.§ Measures We retained all measures from Study 1. However, we exchanged the 7-level bipolar rating with midpoint for a 6-level forced-choice bipolar rating. In this, the midpoint was still visually indicated; however, participants had to decide whether they would rather indicate preference for the enabled or vanilla bot. 
We used the forced-choice approach because in Study 1, many ratings tended towards the midpoint, meaning that only some participants spontaneously formed a preference; however, formed preferences is what we were interested in, and therefore, we elicited preference formation via forced choice. §.§.§ Procedure The procedure remained the same as in Study 1, with the exception that only one scenario was given (for more details, see Figure <ref>). §.§.§ Feedback Both study and interface feedback was positive with 66.67% each. 36.11% preferred the enabled version, 27.77% vanilla, while 13.88% were neutral, and the rest of the preferences unclear. Qualitatively, the feedback largely conformed with the one to Study 1. §.§.§ Results See Table <ref> for descriptive statistics, and refer for reliabilities again to Table <ref> and for the methodology to Section <ref>. One-factor internal consistency was α=.95. For distributions, see Figure <ref>. The smaller sample size leads to less clean normal distributions, rendering non-parametric tests safer. We can still observe potential bimodality in intent-effectiveness and satisfaction, specifically regarding clumping at the extreme end of preference for the enabled bot, which is out of the normal distribution. Extreme preference for the vanilla bot may have been reduced by the choice of scenario. In this study, we observed significant differences from null preference in control and naturalness, both towards the enabled bot (using Wilcoxon Signed Rank tests, for control: V=217, p=0.03427; for naturalness: V=208, p=0.02484). For intent-effectiveness and satisfaction, effects were non-significant (V=271, p=0.3266 and V=299, p=0.2976, respectively), with a tendency towards the vanilla bot (see Figure <ref>). Overall preference (treating all scales as one factor) is, as in Study 1, not significantly different from null preference (V=216, p=0.08296). The variables were interrelated in a similar fashion as in Study 1 (See <ref>). The largest association between situational and interindividual variables was τ=.24 between agreeableness and naturalness (however, non-significantly, with z=1.8626, p=0.06252). §.§.§ Discussion Study 2 replicated a preference for insert-expansion in tool-enabled conversational agents in the recommendation domain. It also extended the marginal difference from null preference in naturalness from Study 1, with a significant difference in the modified scenario. Heterogeneous preferences remain an observable fact in this study as well. However, the assessed mediators were associated with only small effect sizes, similar to Study 1. It was not fruitful to incorporate the assessed interindividual variables into more complex models of participants' preference, as the small magnitude of associations would have required larger samples to produce useful results. § GENERAL DISCUSSION In this study, we introduced the notion of 'user-as-a-tool'. This notion refers to enabling tool-enabled conversational agents with the capacity to use insert-expansions in their conversations, e.g., uttering follow-up questions or enhancing the appeal of their final answer before formulating it. This way of conversing corresponds more closely to human dialog, where many utterances only extend the main interaction <cit.>. Insert expansions are inserted in-between an initial first part of this main action, and the second part, which in tool-enabled conversational agents corresponds to the final response. 
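The preference tests reported for Studies 1 and 2 above can be summarized in a short analysis sketch; the ratings below are synthetic placeholders and the sign convention (negative values indicating preference for the enabled bot) is an assumption, so the sketch illustrates the procedure rather than reproducing the reported statistics. A reasonably recent SciPy is assumed for the one-sided alternatives.

# Sketch of the one-sample preference tests against the scale midpoint.
# Ratings, the sign convention, and the direction of the test are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=-0.3, scale=1.2, size=71)  # placeholder bipolar ratings, 0 = midpoint

# Non-parametric test against the midpoint (used when normality is doubtful).
w_stat, w_p = stats.wilcoxon(control, alternative="less")

# Parametric counterpart: one-sample, one-tailed t-test against 0.
t_stat, t_p = stats.ttest_1samp(control, popmean=0, alternative="less")

# Effect size (Cohen's d for a one-sample design).
cohens_d = control.mean() / control.std(ddof=1)

print(f"Wilcoxon: W={w_stat:.1f}, p={w_p:.4f}")
print(f"t-test:   t={t_stat:.3f}, p={t_p:.4f}, d={cohens_d:.2f}")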
Recent developments involving tool-enabled or augmented language models beckon a trend towards more imitation of cognitive exemplars in the application of artificial intelligence models <cit.>, including those underlying conversational agents. The more tools augment language models, and the more expansive these tool-capabilities become, the more likely will language models imitate long-stretched reasoning paths and look-ups. The longer the reasoning path, the more detached will answers become from initial requests. That means that, especially for conversational agents, enabling tools may introduce detrimental consequences, in both longer experienced latencies by users, who cannot see the reasoning a model imitates, and potentially worse answers, since the prompting chain slowly diverges from user input. To explore whether such detriments could be addressed by introducing insert expansions to tool-enabled conversational agents, we carried out two studies and a pilot. In these studies, we found that insert-expansions may be useful only in certain scenarios or use cases. We believe this may be because the underlying model of both vanilla and insert-expansion-enabled conversational agents already conformed excellently to social expectations, since it has been trained to do so. In this way, even though social exemplars are imitated in the vanilla version only on a surface level, meaning that follow-up questions or scoping arise without prior reasoning, this difference in sources is often not distinguishable to users. However, especially within appropriate use cases, such as recommendation, mimicry of natural dialog organization may be inferior to imitating dialog that includes reasoning and insert-expansion. For a grounded recommendation use case, we replicated a preference for insert-expansion in terms of control, and provide evidence to suggest a similar effect in terms of naturalness as well. Our results indicate that this difference in the experience of the chats is heavily dependent on the specific scenario. We could not observe the expected effects for help in a riddle with incomplete information and search. This may be due to several reasons. For one, the use cases may not be appropriate. Potentially, the specific scenarios operationalizing these use cases were not implemented in a fitting fashion. Another possibility is that such scenarios would require a longer conversation for effects to materialize. Furthermore, even though our main outcome measures show excellent reliability, their validity cannot be ascertained numerically, since we did not include measures of convergent and discriminant validity. The studies reported herein rely on direct comparison in one-sample cases. Our results, therefore, cannot be treated as causal explanations. Rather, they serve to establish knowledge about the differences and associations of variables relevant to adding insert-expansion capabilities to tool-enabled conversational agents. Yet another limitation of this study is that we did observe bimodality in intent-effectiveness and satisfaction in both studies, however, without providing a clear explanation of how the two populations differ. One of them seems to follow a normal distribution with an offset slightly towards the vanilla agent that did not have insert-expansion capability, and the other clumped at the very extreme of preferring the insert-expansion enabled agent. 
Based on the assessed potential mediators, big 5 personality and need for closure, this difference is unlikely to be entirely due to interindividual factors; if it is, we did not assess the relevant factor. More likely is the explanation that this second population, which preferred the added insert-expansion capability, was the minority directly benefiting from this capability in terms of the quality of answers or absence of conversational breakdown. We have already seen that for most use cases, surface-level mimicry of human dialog patterns may be sufficient. Only in cases where mimicry does not suffice will there be genuine and observable differences in intent-effectiveness and satisfaction with conversational agents. This observation is analogous to the initial motivation for the studies reported herein. Augmented language models imitate not only talk, but also thought. This double use of language competence leads them to outperform unaugmented versions in many instances. However, reasoning comes with a drawback, namely that of diverging from the original user-provided prompt due to the ever-growing influence of the context added through reasoning and tool-use. It also takes longer to arrive at an answer, which means that user experience suffers if responses only achieve parity with the unaugmented model. By introducing insert-expansion capabilities, we can return much control and disciplining power to the user, rendering it less likely that augmented language models veer off path. However, follow-up questions need to be answered, which takes more effort than simply awaiting a response. And if the follow-up questions are badly chosen, or do not lead to any new relevant information, they may even lead the final answer astray. So, to conclude, adding insert-expansion comes with drawbacks too. Conversations may be longer and require more effort from the user. Exactly like tool-enabled conversational agents, insert-expansion-enabled agents can only serve to improve conversations if they achieve more than parity in their final answers. This they do in recommendation scenarios, at least in terms of perceived user control. They are likely to do so in other use cases as well, and it pays to investigate them. The only caveat is that the use of insert-expansion, similar to the use of tool-enabling, should be constrained to cases where it is superior to the best-guess-quick-fire approach of existing vanilla models, which have been excellently trained to fulfill social expectations. Closer imitation of human talk-in-interaction, based on cognitive as well as social exemplars, is only beneficial to the performance of conversational agents if surface-level mimicry does not suffice to satisfy users. However, in scenarios that do necessitate the inclusion of additional information, reasoning and tools will provide benefits, and if this information can only come from the user, the user-as-a-tool approach of adding insert-expansion capabilities to conversational agents will do so as well. § ACKNOWLEDGMENTS This study was supported as part of the Innosuisse Flagship SCESC.
http://arxiv.org/abs/2307.02565v1
20230705180416
Nonclassicality in correlations without causal order
[ "Ravi Kunjwal", "Ognyan Oreshkov" ]
quant-ph
[ "quant-ph" ]
http://arxiv.org/abs/2307.08651v1
20230703154040
Multi-fractional Stochastic Dominance: Mathematical Foundations
[ "Ehsan Azmoodeh", "Ozan Hür" ]
q-fin.MF
[ "q-fin.MF", "q-fin.RM" ]
In the landmark article <cit.>, Müller et al. introduced the notion of fractional stochastic dominance (SD) to interpolate between first and second SD relations. In this article, we introduce a novel family of multi-fractional stochastic orders that generalizes fractional SD in a natural manner. The family of multi-fractional SD is parametrized by an arbitrary non-decreasing function ranging between 0 and 1, which provides the feature of local interpolation rather than a global one. We show that the multi-fractional (1+)-SD is generated by a class of increasing utility functions allowing local non-concavity, where the steepness of the non-concavity depends on its location and is controlled by the interpolating function. We also introduce the notion of local greediness that allows us, among other things, to systematically study the multi-fractional utility class. The multi-fractional utility class is well-suited for representing a decision maker's preferences in terms of risk aversion and greediness at a local level. Several basic properties as well as illustrative examples are presented. Keywords: (Fractional) Stochastic dominance; Utility function; Local greediness MSC2020 subject classifications: Primary 60E15; Secondary 90B50; 91B06; 91B16 § INTRODUCTION §.§ Overview and motivation Stochastic dominance (SD) is a popular tool for decision making under uncertainty. It allows us to rank different prospects (random variables) by examining their cumulative distribution functions (CDF) through specific (most often integral) conditions. The equivalence of ranking with expected utility theory can be established through the use of dual utility conditions. This involves comparing the expected values of a class of suitable (utility) functions which reflect the preferences of decision-makers who share common attitudes towards risk. We refer the reader to the excellent textbooks <cit.> for a comprehensive treatment of the general theory as well as applications. Undebatably, first and second stochastic dominance relations serve as the two most well-known stochastic dominance rules. For every two prospects X∼ F and Y∼ G supported on the real line, we say that G dominates F in the first stochastic order (FSD), denoted by F≼_FSD G, provided that F(x)≥ G(x) for all x ∈ℝ. In the language of utility theory, such a preference can be expressed through the utility condition: 𝔼[u(X)]≤𝔼[u(Y)], ∀ u ∈𝒰_0 where the utility class 𝒰_0 consists of increasing continuous functions, hence reflecting the preferences of non-satiable individuals, i.e., those who prefer "more" to "less". Meanwhile, second order stochastic dominance (SSD) pertains to decision makers who are also risk averse and represents their preferences by the utility condition on the set of increasing and concave utility functions denoted by 𝒰_1, namely, we say G dominates F in the SSD sense, denoted by F≼_SSD G, iff 𝔼[u(X)] ≤ 𝔼[u(Y)] for all u ∈𝒰_1.
It is well known that SSD can be equivalently stated through the integral condition [The differentiability of utility functions is not crucial for developing the theory of SD. We refer to <cit.> for the equivalence between utility and integral conditions of FSD and SSD.]: ∫_-∞^t( F(x) - G(x) )_-dx ≤∫_-∞^t(F(x) - G(x) )_+ dx, ∀ t∈ℝ. The jump between FSD and SSD is substantial. The strict assumption of global risk aversion in SSD can exclude decision makers with mixed risk attitudes, i.e., those who are mostly risk averse but also exhibit some degree of risk seeking behavior. The preferences of such decision makers can be captured by utility functions that are increasing functions with both concave and non-concave segments, such as those in <cit.>. On the other hand, the lack of assumptions on decision makers' risk attitudes in FSD results in a strong integral condition that has low discriminatory power. As a result, even minor violations in the integral condition can prevent given cumulative distribution functions (CDF) from being ordered in the FSD sense. To overcome these limitations, numerous stochastic order relations have been proposed in the literature that fall between FSD and SSD. In <cit.>, Fishburn defined a class of SD relations using fractional integration, while <cit.> introduced almost stochastic dominance, which enables the ordering of distribution functions with a slight deviation from FSD. In the seminal work <cit.>, Müller et al. introduced the concept of fractional (1+γ)-SD, which employs a fixed parameter γ∈ [0,1] to interpolate between FSD and SSD. The key advantage of the fractional SD rule is that the corresponding utility class (generator) is known and provides attractive characteristics from a decision theoretic perspective. Hence, it has gathered considerable attention in the recent literature <cit.>. In this study, we introduce a new family of multi-fractional stochastic dominances that extends the notion of fractional stochastic dominance. Several well-known stochastic orders in the literature are defined through their utility (generator) counterparts. We refer the reader to <cit.> for a unified approach towards integral stochastic orders. Despite this, due to the nature of our generalization, we formulate multi-fractional stochastic dominance by its integral condition rather than its generator. Hence, for the sake of consistency, we first recall the definition of fractional SD expressed in terms of its integral condition. [Fractional (1+γ)-SD] <cit.> For fixed γ∈ [0,1], we say G dominates F in the sense of fractional (1+γ)-SD, denoted by F ≼_(1+γ)-SD G, if ∫_-∞^t( F(x) - G(x) )_-dx ≤γ∫_-∞^t(F(x) - G(x) )_+ dx, ∀ t∈ℝ. It is clear that for γ=0, fractional (1+γ)-SD coincides with FSD, whereas for γ=1 it boils down to SSD. Therefore, fractional SD establishes an interpolation between integer degree SD rules. This interpolation is global in the sense that, for a fixed γ∈ [0,1], given distribution functions are ordered by (1+γ)-SD over their entire supports. Our motivation for generalizing fractional SD is mainly inspired by the idea of local interpolation, where given distribution functions can simultaneously be ordered from first to second SD on different portions of their supports. In order to achieve such a phenomenon, we relax the condition of the parameter γ being constant by replacing it with an arbitrary non-decreasing function mapping ℝ to [0,1] in the integral condition (<ref>).
As we discuss in the sequel, this enables the corresponding integral condition of multi-fractional SD to make use of additional information about the local relationship (captured by the interpolating function) between given distribution functions, allowing for a more accurate and more refined interpolation between FSD and SSD. The fractional (1+γ)-SD can be generated by the utility class 𝒰_γ consisting of smooth utility functions u so that 0 ≤γ u'(y) ≤ u'(x) for every x ≤ y. From the mathematical point of view, the parameter γ bounds how much marginal utility can decrease as x decreases. Setting γ to 0 results in 𝒰_0 and setting γ=1 leads to 𝒰_1 given above. From a decision theoretic perspective, the parameter γ globally controls the degree of risk loving behaviour, where higher values of γ correspond to lower degrees of risk lovingness. When γ is between 0 and 1, the utility class 𝒰_γ represents individuals who are generally risk averse but are willing to accept some risk depending on the value of γ. Unlike fractional (1 + γ)-SD, the interpolating function plays a crucial role in controlling the local risk loving behaviour of decision makers in terms of average risk aversion and greediness at a local level. The non-decreasing property of this function allows multi-fractional SD to represent the preferences of decision makers who are mostly risk averse but exhibit a higher degree of risk lovingness when they possess lower levels of wealth. §.§ Plan The paper is organised as follows. Section 2 contains the definition of multi-fractional stochastic dominance and several illustrative examples towards understanding the multi-fractional universe. Section 3 focuses on the utility theory of multi-fractional stochastic dominance, whereas Section 4 deals with a few economic aspects of the order. Section 5 gathers basic properties of the multi-fractional stochastic order. Finally, Section 6 contains a few applications. § MULTI-FRACTIONAL STOCHASTIC DOMINANCE (MFSD) In this section, we introduce the natural generalization of fractional SD as given in Definition <ref> to its multi-fractional form. Following the definition of MFSD, we present a series of examples to motivate our generalization. §.§ Definition and basic examples [Multi-fractional (1+)-SD] Let : → [0,1] be an arbitrary non-decreasing function. Let F and G be two arbitrary distribution functions. We say that G dominates F in the sense of the multi-fractional (1+)-SD, denoted by F ≼^mf_(1+)-SD G, if ∫_-∞^t( F(x) - G(x) )_-dx ≤(t) ∫_-∞^t(F(x) - G(x) )_+ dx, ∀ t∈ℝ. (i) Apart from fractional SD, another source of inspiration behind generalization (<ref>) is the notion of multi-fractional Brownian motion, which allows the Hurst parameter H∈ (0,1) to vary with time, unlike the classical fractional Brownian motion; see <cit.> for details. (ii) Clearly, the multi-fractional (1+)-SD coincides with the fractional (1+γ)-SD as soon as (x) =γ∈ [0,1] for every x. Furthermore, when for some γ∈(0,1), (t) ≤γ for every t∈ℝ, then the multi-fractional (1+)-SD implies the fractional (1+γ)-SD and, similarly, if (t) ≥γ for every t∈ℝ, then the fractional (1+γ)-SD implies the multi-fractional (1+)-SD. (iii) One of our primary purposes is to ponder the remarkable subject of interpolating between the first and second order stochastic dominances. In this regard, our extension has a significantly different feature compared to the major works <cit.>. Recall that we require the function to be non-decreasing.
In fact, the notion of the multi-fractional (1+)-SD provides us with a local interpolation phenomenon rather than global. This is well explained via Example <ref> below. (iv) Let F and G be two probability distribution functions. Then, it is known that the assumption of equal means (μ_F=μ_G) leads to the given distribution functions being indistinguishable in the sense of the fractional (1+γ)-SD for any constant γ∈ [0,1). Nonetheless, as demonstrated in Example <ref>, the multi-fractional (1+)-SD can effectively order distributions with equal means, provided that an appropriate function i.e. non-constant and satisfying (t)=1 for some t∈, is chosen. In practical scenarios, such as estimating distribution parameters using the method of moments, the need to compare distributions with identical means may arise <cit.>. In these cases, the multi-fractional SD can prove valuable. (v) By our assumptions, function is non-decreasing and bounded, hence it has at most countably many discontinuity of the first kind and clearly, (t) ≤(t^+) for each t. Therefore, due to continuity of the integral operator, without of loss of generality, one can always assume that function is always right continuous. This desirable feature allows us to add one extra layer of randomness to multi-fractional stochastic dominance framework by considering as a distribution function (see Example <ref>). We expect that such a desirable viewpoint provides us extra strengths that can be useful in certain applications. (vi) For the given non-decreasing function , let t_0 := sup{ t : (t) =0 } stands for the last exist time of the set {0} and (t_0)>0. This immediately implies that on the segment (-∞,t_0), the multi-fractional (1+)-SD reduces to that of the FSD and therefore, the value of becomes irrelevant in the sense that if one glues any non-decreasing function so that >0 on the segment (-∞,t_0) to resulting in the new non-decreasing function then, F ≼_(1+)-SD G implies the validity of F ≼_(1+)-SD. (vii) In a similar way, an extension can be accomplished for the combined order concave-convex stochastic dominance, see <cit.> for detail. However, for the ease of development, we defer it to a separate study. [The role of non-decreasing assumption] The assumption that the function :→ [0,1] is non-decreasing is crucial for Definition <ref> to be meaningful. In fact, let be a non-constant, non-increasing function. Then, any value other than the minimum value of will not impact the ordering rule of the given distribution functions. This is because smaller values of lead to stronger dominance rules, and therefore, resulting in, the smallest value of function determines the ordering relation throughout the entire support. Let us to demonstrate this with a simple example. Let t_0∈. Consider the (non-constant, non-increasing) function :→ [0,1], (t)= 1 if t < t_0 0 if t_0 ≤ t. Now, assume that F ≼^mf_ (1+ )-SD G. Then, by Definition <ref>, for all t ∈ (-∞, t_0) (note that (t)=1 over the interval), we have ∫_-∞^t( F(x) - G(x) )_-dx ≤∫_-∞^t(F(x) - G(x) )_+ dx, and on the other hand, for all t ≥ t_0, inequality (<ref>) implies that G(t) ≤ F(t) for all t ∈. Clearly, the latter implies that (F(t) - G(t))_- = 0 for all t ∈, thus eliminating the impact of value of (t)=1 on the integral condition (<ref>) and hence on the ordering of F and G as well. The importance of non-decreasing assumption can be observed in more general situations in a similar way. 
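Before turning to the remaining examples, it is worth noting that the integral condition of Definition <ref> is straightforward to check numerically for given distribution functions. The following is a small illustrative sketch (the step CDFs, the grid, and the function written here as gamma are placeholder choices, not the examples of this paper):

# Numerical check of the multi-fractional SD integral condition on a grid:
#   int_{-inf}^t (F-G)_- dx  <=  gamma(t) * int_{-inf}^t (F-G)_+ dx   for all t.
# The CDFs F, G and the non-decreasing function gamma are placeholders.
import numpy as np

def F(x):
    return np.select([x < 0.0, x < 0.6], [0.0, 0.3], default=1.0)   # a step CDF

def G(x):
    return np.select([x < 0.2, x < 0.6], [0.0, 0.55], default=1.0)  # another step CDF

def gamma(t):
    return np.clip(t, 0.0, 1.0)  # non-decreasing, with values in [0, 1]

def dominates_mf(F, G, gamma, lo=-1.0, hi=2.0, n=20001, tol=1e-9):
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    diff = F(x) - G(x)
    pos = np.cumsum(np.maximum(diff, 0.0)) * dx   # running integral of (F - G)_+
    neg = np.cumsum(np.maximum(-diff, 0.0)) * dx  # running integral of (F - G)_-
    return bool(np.all(neg <= gamma(x) * pos + tol))

print(dominates_mf(F, G, gamma))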
[Local interpolation] The purpose of this example is to illustrate one of the main features of MFSD, namely that, the integral condition <ref> equips us with local interpolation between the first and second order stochastic dominance rules, rather than the global. Let γ∈(0,1) be a constant. Consider the following non-decreasing function (t)=0 if t ≤ t_1 γ if t_1 < t ≤ t_2 1 if t>t_2 where t_1<t_2<t_3 are real numbers. Assume that F ≼^mf_ (1+ )-SD G. Then, (t_1)=0 entails that G(x) ≤ F(x) on the portion (-∞, t_1) of the full supports. In other words, we have F ≼_FSD G on the interval (-∞, t_1). Similarly, the condition (t_2)=γ yields that ∫_-∞^t( F(x) - G(x) )_-dx ≤γ∫_-∞^t(F(x) - G(x) )_+ dx, ∀ t≤ t_2, i.e., F ≼_(1+γ)-SD G on the interval (-∞, t_2] (this directly follows from the non-decreasing property of function ). Lastly, (t)=1 for all t∈(t_2,∞) implies that ∫_-∞^t( F(x) - G(x) )_-dx ≤∫_-∞^t(F(x) - G(x) )_+ dx, ∀ t∈ℝ, i.e., G dominates F in second order stochastic dominance sense, F ≼_SSD G. Hence, with appropriately chosen non-decreasing function , one can simultaneously order given F and G from first (strongest) to second (weakest) order on different portions (capture with function ) of the supports. Later on, we discuss the reflections and impacts of the local interpolation phenomenon in terms of utility classes (see Remarks <ref>) and <ref>). [The equal-mean scenario] The purpose of this example is to construct two distribution functions F and G with the identical means (μ_G=μ_F) in which it is impossible to distinguish them according to the fractional (1+γ)-SD for any given constant γ∈ [0,1), however distinguishable based on a (more informative) function . We exclude the limiting case γ=1 due to the fact that it corresponds to (less interesting) SSD. We introduce (t) = 0 t < 0, t 0 ≤ t ≤ 1, 1 t>1. We also consider the following distribution functions with identical means, μ_F=μ_G=23/32: F (x) = 0 x<0, x 0 ≤ x < 1/4 1/4 1/4 ≤ x < 3/4 1/2 3/4 ≤ x < 1 1 x ≥ 1 and G(x) = 0 x < 1/4 x-1/4 1/4 ≤ x < 1 1 x ≥ 1. The straightforward calculation shows that the integral condition (<ref>) holds, namely that ∫_0^t( F(x) -G(x))_- dx ≤(t) ∫_0^t( F(x)-G(x) )_+ dx, ∀ t ∈. Hence, F ≼^mf_(1+)-SD G . However, F _ (1+ γ)-SD G for any constant γ∈ [0,1). The reason for this is that the inequality (<ref>) fails at t=1, since 0= μ_G-μ_F =∫_0^1( F(x)-G(x) )dx =∫_0^1( F(x)-G(x) )_+ dx - ∫_0^1( F(x)-G(x) )_- dx however observe that (1)=1. Lastly, to complete the example we require to demonstrate that with function given in <ref>, the multi-fractional (1+)-SD does not coincide to SSD. To accomplish that, we introduce the following distribution functions: F(x) = 0 x <0 1/4 0 ≤ x < 1/2 1 x ≥ 1/2 and G(x) = 0 x <1/4 1/2 1/4 ≤ x < 1/2 1 x ≥ 1/2. It is clear that F ≼_SSD G, however, note that (at t=1/2 with (1/2) =1/2): ∫_0^1/2( F(x)-G(x) )_- dx ≰ 1/2 ∫_ 0^1/2( F(x)-G(x) )_+ dx. Hence, integral condition (<ref>) fails and therefore, F ^mf_ (1+ )-SD G although F ≼_SSD G. In general, for any given non-decreasing ≢1, one can always construct two distribution functions F and G such that F ≼_SSD G but F ⋠^mf_ (1+ )-SD G. §.§ Further illustrative examples In this section, with the help of a couple of illustrating examples, we aim to justify not only the usefulness of multi-fractional stochastic dominance rule given in Definition <ref> but also we discuss the non-trivial relationship between two notions of fractional and multi-fractional stochastic dominance rules. 
This example generalizes Example <ref> beyond the identical mean scenario. More precisely, we show the following phenomenon that for every arbitrary given constant γ∈ (0,1) there exist two distribution functions F and G with the non-identical means (μ_G > μ_F) [Note that without loss of generality, one can also assume (μ_G ≥μ_F). Our motivation for excluding the case μ_G = μ_F is to demonstrate that the assumption of identical means is sufficient, but not necessary, for given distributions F ≼^mf_ (1+ )-SD G but F ⋠_(1+γ)-SD G.] such that we cannot distinguish them based on the fractional (1+γ)-SD however, there is (at least) one suitable function :ℝ→ [0,1] which makes F and G distinguishable based on the multi-fractional (1+)-SD. Let γ∈ (0,1) be fixed. Choose N ∈ so that 2γ/N <1. We introduce the following distribution functions: F(x) = 0 x <0 γ/N 0 ≤ x < (γ+γ^3/2)/N 1 x ≥ (γ+γ^3/2)/N G(x) = 0 x < γ/N 2γ/N γ/N ≤ x < (γ+γ^3/2)/N 1 x ≥ (γ+γ^3/2)/N. First, note that we have γ/N+(1-2γ/N)(γ^3/2/N) = μ_G>μ_F = (1-γ/N)((γ+γ^3/2)/N). In addition, at x=(γ+γ^3/2)/N, we obtain γ^5/2 / N^2 = ∫_-∞^(γ+γ^3/2)/N( F(x) - G(x) )_- dx ≤∫_-∞^(γ+γ^3/2)/N( F(x) -G(x) )_+ dx =γ^2 / N^2 Hence, the integral requirement (<ref>) fails at point x=(γ+γ^3/2)/N because γ<√(γ)<1 for all γ∈ (0,1). Hence, F ⋠_(1+γ)-SD G. One has to note that F ≼_SSD G since γ^5/2/N^2<γ^2/N^2. Next, consider the following function: (t) = _1(t) t <γ/N (t) γ/N ≤ t ≤ (γ+γ^3/2)/N _2 (t) t > (γ+γ^3/2)/N where _1 and _2 can be any non-decreasing functions that do not violate the non-decreasing assumption of and (t) ∈ (0,1) can be taken any non-decreasing function satisfying: (t) ≥∫_-∞^t( F(x) - G(x) )_- dx /γ^2 / N^2=γ/N(t-γ/N)/γ^2 / N^2, ∀ t ∈ [γ/N,(γ+γ^3/2)/N]. Clearly, one obvious choice of such a function is: (t) = 0 t <γ/N γ/N(t-γ/N)/γ^2 / N^2 γ/N ≤ t ≤ (γ+γ^3/2)/N 1 t > (γ+γ^3/2)/N. Then, it is straightforward to check that F ≼^mf_ (1+ )-SD G holds. This example is rather technical. The purpose of the example is to show that for any given constant γ∈ (0,1) and every (non-decreasing) function satisfying (t^0) < γ < (t_0) for some t^0<t_0 (see Remark <ref>, part (ii)), there exists distribution functions F and G so that either one the followings holds: (i) F ≼^mf_ (1+ )-SD G but F _(1+γ)-SD G (ii) F ^mf_ (1+ )-SD G but F ≼_(1+γ)-SD G. First, F ≼^mf_ (1+ )-SD G but F _(1+γ)-SD G. We discuss separately the following two possible cases: Case 1: There is a left-side neighbourhood N_δ(t_0) = (t_0-δ,t_0), δ >0 such that function (t)=(t_0) for all t ∈ N_δ (t_0). Choose ℓ < δ small enough (in the sense that (t_0) - ℓ > γ). Now, consider the square with the length ℓ and with the right-up corner at (t_0,(t_0)). Let's denote the square with S(ℓ). Next, there are two natural numbers M, N ∈ so that γ < M/N < (t_0) (rational numbers are dense in ). Without loosing of generality, we can assume that (M+N)/MN < ℓ^-1 (otherwise we work with KM, and KN with large K integer. This keeps the ratio unchanged however note that KN +KM/(KM)(KN) = (1/K) × (M+N)/(MN) → 0 as K →∞). Now, we equally divide S(ℓ) to (MN)^2 little squares each one has the length ℓ/(MN). Let's denote the t-coordinate of the left-bottom corner of S(ℓ) by t^* such that t^* + 2ℓ/MN ∈ N_δ(t_0) and define F(t) = 0 t <t^* N (ℓ/MN) t^* ≤ t < t^* + 2ℓ/MN 1 t ≥ t^* + 2ℓ/MN G(t) = 0 t <t^* + ℓ/MN (M+N) ℓ/(MN) t^* + ℓ/MN ≤ t < t^* + 2ℓ/MN 1 t ≥ t^* + 2ℓ/MN . 
Then, we have A_+:= ∫_t^*^t^* + ℓ/MN( F(x) - G(x) )_+ dx = N (ℓ/MN)(ℓ/MN) A_- : = ∫_t^* + ℓ/MN^t^* + 2ℓ/MN( F(x) - G(x) )_- dx = M (ℓ/MN)(ℓ/MN) Therefore, we have F ≼^mf_ (1+ )-SD G but F ⋠_(1+γ)-SD G since γ < A_-/A_+= M/N < (t^*+ℓ/MN)=(t^*+2ℓ/MN)= (t_0). Case 2: Function is not flat at any left-neighbourhood of t_0. If is left-continuous at t_0, then the construction boils down to that of the previous case with two natural numbers M, N ∈ where γ < M/N < inf_t∈ N_δ(t_0) (t) so that we have γ < A_-/A_+= M/N < inf_t∈ N_δ(t_0) (t)≤(t^*+ℓ/MN)≤(t^*+2ℓ/MN) ≤ (t_0). Otherwise, namely that (t) < γ on a left neighbourhood N_δ (t_0). In this scenario, since function is non-decreasing, we can assume that on a tiny right neighbourhood of t_0 function is flat. Now, we can similarly construct two distribution functions F and G in the very tiny square S(ℓ) on the right hand side of t_0 (the left-bottom corner is t_0 instead of t^*). Note that this time constructed F and G take 0 on (-∞,t_0) thus, the non-pleasant values of i.e. (t)< γ for t<t_0 become irrelevant. Second, F ≼_ (1+γ)-SD G but F _ (1+ )-SDG. Recall that there exists (at least one) t^0 such that (t^0) < γ. Again the construction is the same as above, meaning that after t^0 we define F and G the same and discuss in a left neighbourhood of t^0 whether is flat or not and continue as before. The conclusion is that in both non-trivial scenarios, one can always construct two distribution functions F and G so that there are not distinguishable by the fractional (1+γ)-SD but possible to order them with the multi-fractional (1+)-SD and vice versa. §.§ The universe of multi-fractional stochastic dominance In this section, we formalize all prior observations within a structured framework. Let D() denote the set of all distribution functions (of the real-valued random variables). For every constant γ∈ [0,1] and, non-decreasing function : → [0,1], let introduce E_f(γ) : = {(F,G) ∈D() ×D() : F ≼_(1+)-SD G }, E_mf() : = {(F,G) ∈D() ×D() : F ≼^mf_ (1+ )-SD G }. Also, we denote E_f(1) = E_ssd. Note that, clearly we have the set identity E_mf() = E_f(γ) as soon as ≡γ is the constant function, and furthermore, when (t ) ≥γ ( (t ) ≤γ) for every t ∈, the set inclusion E_f(γ) ⫋ E_mf() ( E_mf() ⫋ E_f(γ)) holds. Lastly, Example <ref> shows that for every constant γ∈ (0,1) and every (non-decreasing) function satisfying (t^0) < γ < (t_0) for some t^0<t_0, we have E_mf() ∖E_f(γ) ≠∅ and E_f(γ) ∖E_mf() ≠∅ . The next result aims to summarise the non-trivial relations between the corresponding universes. Let :ℝ→ [0,1] be an arbitrary non-decreasing function and, define :=lim_t→ +∞(t)=sup_t∈ℝ(t) and := lim_t→ -∞(t)=inf_t ∈(t). Then, the following statements hold: (a) The inclusions E_f() ⊆E_mf() ⊆E_f() holds and are strict as soon as is a non-constant function. (b) The corresponding universes associated to fractional and multi-fractional stochastic dominances coincide with E_ssd, namely that ⋃_γ∈[0,1] E_f(γ) = ⋃_:→ [0,1] non-decreasing E_mf() = E_ssd. In addition, if the non-interesting case i.e. ≡γ=1 is excluded from both sides, then the relation (<ref>) reduces to ⋃_γ∈[0,1) E_f(γ) ⋃_:→ [0,1] non-decreasing ≢1 E_mf() = E_ssd. Part (b) of Theorem <ref> demonstrates a significant feature of multi-fractional stochastic dominance in the sense that by removing the constant function ≡γ=1 (corresponding to SSD case), still the universe of multi-fractional stochastic dominance remains as large as that of SSD whilst this is not the case for fractional stochastic dominance. 
(a) The inclusions are immediate consequence of the monotonicity of integral condition (<ref>)(see Remark <ref> (ii)). We only prove the strict inclusion case. Suppose that is not a constant function and so that there exists at least one t∈ with (t)>. Then, we can always find a constant c∈(0,1) satisfying √(c)+√(c) (t)<1. Fix such c and construct the following pair of distribution functions (F^*,G^*): F^*(x)= 0 x< t-√(c) √(c) t-√(c)≤ x < t+√(c) 1 t+√(c)≤ x G^*(x)= 0 x< t √(c)+√(c) (t) t ≤ x < t+√(c) 1 t+√(c)≤ x . Then, F^* and G^* have a single crossing at t and A_+:= ∫_-∞^t (F^*(x)-G^*(x))_+dx= c A_- := ∫_t^t+√(c) (F^*(x)-G^*(x))_-dx= (t)c. Note that <A_-/A_+ =(t)≤(x), ∀ x≥ t. Thus, (F^*,G^*)∈E_mf() but (F^*,G^*)∉E_f(). This shows that the first inclusion is strict. Next, we show that the second inclusion E_mf()⫋E_f() is strict as well. Similarly, for a given non-constant , first we fix t∈ such that >(t) holds. Then, we can always find a constant c∈(0,1) satisfying √(c)+√(c) <1. Fix such c and construct the following pair of distribution functions (F^*,G^*): F^*(x)= 0 x< t-2√(c) √(c) t-2√(c)≤ x < t 1 t ≤ x G^*(x)= 0 x< t-√(c) √(c)+√(c) t-√(c)≤ x < t 1 t ≤ x . Again, F^* and G^* have a single crossing at t-√(c) and moreover, A_+=: ∫_-∞^t-√(c) (F^*(x)-G^*(x))_+dx= c A_-=: ∫_t-√(c)^t (F^*(x)-G^*(x))_-dx= c. This time, observe that (F^*,G^*)∈E_f() but (F^*,G^*)∉E_mf() since (t)<A_-/A_+ = meaning that the integral condition (<ref>) fails at t. (b) First, the set equalities in (<ref>) are obvious. Next, note that the first strict inclusion in relation (<ref>) is a direct application of Example <ref>. Finally, we show that the second set equality in relation (<ref>). First, we have ⋃_:→ [0,1] non-decreasing ≢1 E_mf() ⊆ E_ssd. In order to demonstrate the converse, let (F,G) be an arbitrary element of E_ssd. Then, w.l.o.g, there exists the first crossing point x^*∈ (otherwise, we have F ≼_FSD G and the result becomes trivial) for the pair (F,G) meaning that ∫_-∞^x^*(F(x) - G(x) )_+dx>0, ∫_-∞^x^*(F(x) - G(x) )_-dx=0 and F(x^*)=G(x^*). Now, we define the following function: (F,G)(t):= 0 t≤ x^* sup_x^* ≤ s ≤ t∫_x^*^s(F(x) - G(x) )_- dx/∫_-∞^s(F(x) - G(x) )_+ dx x^*≤ t. Clearly, (F,G) is a non-negative, non-decreasing, non-constant, continuous function and it is bounded by one. Therefore, (F,G)∈{: : → [0,1], ≢1 }, and moreover, we have F ≼^mf_ (1+(F,G))-SD G. This completes the proof. § MFSD: UTILITY THEORY In this section, first we introduce the set of utility functions that generates the multi-fractional (1+)-SD. Then, we prove the if-and-only-if theorem between the introduced utility class and the integral condition (<ref>). §.§ Towards a utility class We start with the following simple lemma motivated by the excellent work <cit.>. In fact, the lemma below somehow is encoded the structure of a suitable utility class for multi-fractional stochastic dominance. Let : ℝ→ [0,1] be an arbitrary non-decreasing function ((x)≤(y) for all x ≤ y). For each t ∈, we consider the following functions space: 𝒰^t_(t): ={ u:→ : u(x)=u(t), ∀ x ≥ t and, 2cm ∀ x_1 < x_2 ≤ x_3 <x_4, 0≤(t) u(x_4) -u(x_3)/x_4-x_3≤u(x_2)-u(x_1)/x_2-x_1}. Let F and G be two arbitrary distribution functions. For every t ∈, define the (utility) function u_t via its right derivative as u_t^'(x)= (t) if G(x) ≤ F(x) and x ≤ t 1 if F(x) < G(x) and x ≤ t 0 if t<x. Then, u_t ∈𝒰^t_(t) for every t ∈. First, we avoid of the trivial case (t)=0 (since by very definition, function u is monotone). 
On the other hand, since F and G are right continuous and have countably many discontinuity points (of the first kind since there are monotone) hence there is a countable set of points (x_i) so that x_i-1 < x_i, and u'_t is a constant function on each subinterval (x_i-1,x_i). Therefore, by the mean value theorem, for every x and y on each subinterval, we have u_t(y)-u_t(x)/y-x=1 or u_t(y)-u_t(x)/y-x=(t). This, in particular implies that for all x,y∈ℝ, (t)≤u_t(y)-u_t(x)/y-x≤ 1. Thus, for all x_1<x_2≤ x_3<x_4, the relation (t) u_t(x_4) -u_t(x_3)/x_4-x_3≤u_t(x_2)-u_t(x_1)/x_2-x_1 holds true. In addition, u_t is a constant function on region (t,∞) and hence, by continuity we have u(x)=u(t) for every x ≥ t. Therefore, u_t ∈𝒰^t_(t) for every t ∈. Let X∼ F and Y∼ G be two arbitrary probability distributions having finite first moments and, : ℝ→ [0,1] be a non-decreasing function. Then, relation [u_t(X)] ≤[u_t(Y)], ∀ t ∈ implies that, for every t ∈: ∫_-∞^t ( F(x)-G(x))_-dx ≤(t)∫_-∞^t ( F(x)-G(x))_+dx. Let t ∈ and u_t be the function defined as (<ref>). Then, the assumption of having finite means and the form of u_t function imply that [u_t(X)], [u_t(Y)] > -∞. In addition, one has to note that u_t(x) ≤ ax+b, a>0, b∈ for each x∈. Hence, the assumption of finite means, once again, imply that [u_t (X)], [u_t(Y)] <∞ and therefore all the quantities below are well-defined. Hence, by integration by parts formula (see <cit.>), we obtain 0 ≤[u_t(Y)]-[u_t(X)] = ∫_ u_t (x) d(F-G)(x) = ∫_(F(x) -G(x)) du_t (x) = ∫_ u^'_t(x) (F(x)-G(x)) dx =∫_-∞^t(t) (F(x)-G(x))_+dx - ∫_-∞^t(F(x)-G(x))_-dx = (t) ∫_-∞^t(F(x)-G(x))_+dx-∫_-∞^t(F(x)-G(x))_- dx. It is worth to mention that the very last integral in the right hand side of the first line above can be understood as the Lebesgue-Stieltjes and Riemann-Stieltjes integral. In addition, note that u_t is right continuous and u'_t has only countably many discontinuities and therefore, the second line follows from the fact that u_t is defined as the Lebesgue integral of u'_t (fundamental theorem of calculus). Next, we seek for an appropriate utility class for the multi-fractional (1+)-SD. We take a naive approach. Inspired by Lemma <ref>, a desirable utility class should contain those "basis" functions (u_t : t ∈) that are defined in (<ref>). Observe that these functions enjoy two important features: (i) they are constant over the region (t,+∞) and hence their derivatives vanish (this in particular implies that no contribution in the expected utility through the integration by parts formula), (ii) in addition, on the complementary range (-∞,t) they should manifest the fractional (1+γ)-SD utilities with (t) instead of a fixed parameter γ. Motivated by these observations, we introduce the following function spaces: for real s,t ∈, define 𝒰^s_(t): ={ u:→ : u(x)=u(s), ∀ x ≥ s and, 2cm ∀ x_1 < x_2 ≤ x_3 <x_4, 0≤(t) u(x_4) -u(x_3)/x_4-x_3≤u(x_2)-u(x_1)/x_2-x_1} 𝒰^s_(t) (C^1()): ={ u :→∈ C^1() : u'(x) =0, ∀ x > s and 0 ≤(t)u^'(y) ≤ u^'(x), ∀ x≤ y }. Let us recall the following function spaces introduced in <cit.>: for each real number γ∈ [0,1], 𝒰^*_γ : ={ u:→ : 0≤γu(x_4) -u(x_3)/x_4-x_3≤u(x_2)-u(x_1)/x_2-x_1, ∀ x_1 < x_2 ≤ x_3 <x_4 } 𝒰_γ : = { u :→∈ C^1() : 0 ≤γ u^'(y) ≤ u^'(x), ∀ x≤ y }. Characteristic (i) allows the utility class (<ref>) to incorporate additional information about decision makers' preferences that 𝒰^*_γ does not capture. This results in more informative utility classes, meaning that 𝒰^s_(t)⫋𝒰^*_(t) for all s≤ t. 
However, it is important to note that this information does not necessarily indicate a meaningful behavior of decision makers in terms of utility theory. In fact, the utility functions in (<ref>) are considered to be pathological and contain extreme preferences that do not accurately represent realistic decision maker behavior, as discussed in <cit.>. Therefore, while we use this set as the foundation for constructing the utility class of MFSD, our main focus is on the mathematical aspects of the functions in (<ref>). Understanding the mathematical properties of these functions will be essential in proving several key results later in this paper. Functions spaces 𝒰^t_(t) and 𝒰^t_(t) (C^1()) (namely that when s=t) play important role in our development. The functions space 𝒰^t_(t) is particularly important for us as it contains utility functions with jump discontinuities in their first derivative (kink). Note that, the relation between two spaces 𝒰^s_(s) and 𝒰^t_(t) for s<t is not straightforward. The reason is that the set 𝒰^t_(t) is equipped with two conditions which give rise to two opposite inclusion relations. To be more precise, for s≤ t (and hence (s)≤(t)), we have u'(x) =0, ∀ x > s implies that u'(x) =0, ∀ x > t, 0 ≤(t)u^'(y) ≤ u^'(x), ∀ x≤ y implies that 0 ≤(s)u^'(y) ≤ u^'(x), ∀ x≤ y. Let s ≤ t. Then, the following set-inclusions are in place: (a) 𝒰^t_(t) (C^1()) 𝒰^t_(t). (b) 𝒰^s_(t)𝒰^t_(t), and 𝒰^s_(t)𝒰^s_(s). (c) 𝒰^s_(t) (C^1()) 𝒰^t_(t) (C^1()), and 𝒰^s_(t) (C^1()) 𝒰^s_(s) (C^1()). The following proposition summarizes important mathematical properties of the functions space 𝒰^t_(t). Let :ℝ→ [0,1] be an arbitrary non-decreasing function and t∈. (a) Assume that (t) ≠ 0. Then, every u∈𝒰^t_(t) is non-decreasing function that is right and left differentiable at every point x∈. When (t)=0, then every u∈𝒰^t_(t) is an increasing function, bounded from above (in fact u(x) =u(t) for all x ≥ t), and almost everywhere differentiable. (b) The class 𝒰_(t)^t is a convex cone, i.e., it is closed under multiplication with positive scalars and being a convex set. (c) Limit lim_t→ -∞𝒰^t_(t) as t → -∞ exists and, moreover, lim_t→ -∞𝒰^t_(t)=:𝒰^-∞_ ={ constant functions }. (d) Limit lim_t→∞𝒰^t_(t) as t → +∞ exists and, moreover, lim_t→∞𝒰^t_(t)=𝒰^+∞_ where 𝒰^+∞_ := { u : 0 ≤u(x_4) -u(x_3)/x_4-x_3≤u(x_2)-u(x_1)/x_2-x_1, ∀ x_1 < x_2 ≤ x_3 <x_4}. (a) When (t) ≠ 0, it is elementary to show that every functions u ∈𝒰^t_(t) is right and left differentiable at every point whereas, the case (t)= 0 follows from Lebesgue Theorem <cit.>. (b) Obvious. (d) Let (t_n) be a sequence of real numbers diverging to infinity. First note that by inclusion relations (<ref>) and (<ref>) we have ⋂_k=n^∞𝒰^t_k_(t_k)=𝒰^t_n_. Thus, lim inf_n→∞𝒰^t_n_(t_n)=⋃_n≥1𝒰^t_n_=𝒰^+∞_. Similarly, we can show that lim sup_n→∞𝒰^t_n_(t_n)=⋂_n≥ 1⋃_k≥ n^∞𝒰^t_k_(t_k) =𝒰^+∞_. Therefore, lim sup_t_n→∞𝒰^t_n_(t_n)=lim inf_t_n→∞𝒰^t_n_(t_n) = 𝒰^+∞_. This completes the proof. (c) Similar to part (d). (i) Similar statements as in Proposition <ref> hold true for smooth functions spaces 𝒰^t_(t) (C^1()). (ii) The limiting sets 𝒰^+∞_ and 𝒰^+∞_(C^1()) coincide with the utility class 𝒰^*_ and 𝒰_ corresponding to the fractional (1+)-SD introduced in <cit.>. In particular, when =1 it consists of non-decreasing concave functions associated to the second order stochastic dominance. (iii) One has to note that 𝒰^-∞_≠𝒰^*_ (in fact, we have the crystal strict inclusion 𝒰^-∞_⊂𝒰^*_ ) unlike 𝒰^+∞_= 𝒰^*_. 
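A concrete way to inspect the 'basis' utilities u_t defined through their right derivative in Lemma <ref> is to integrate that derivative numerically. The sketch below is purely illustrative: the CDFs, the grid, and the function written here as gamma are placeholders, and the construction is not needed for the formal results.

# Numerical construction of the basis utility u_t from its right derivative:
#   u_t'(x) = gamma(t) where G <= F and x <= t,  1 where F < G and x <= t,  0 where x > t.
# F, G, gamma and the grid below are placeholders for illustration only.
import numpy as np

def F(x):
    return np.clip(x, 0.0, 1.0)                    # Uniform(0, 1) CDF

def G(x):
    return np.clip((x - 0.25) / 0.5, 0.0, 1.0)     # Uniform(0.25, 0.75) CDF

def gamma(t):
    return float(np.clip(t, 0.0, 1.0))

def basis_utility(t, lo=-0.5, hi=1.5, n=4001):
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    deriv = np.where(x > t, 0.0,
                     np.where(F(x) < G(x), 1.0, gamma(t)))
    u = np.concatenate(([0.0], np.cumsum(deriv[:-1]) * dx))  # normalize u_t(lo) = 0
    return x, u

x, u = basis_utility(t=0.7)
# u is non-decreasing, has slope gamma(t) where F >= G, slope 1 where F < G,
# and is constant for x > t, matching membership of u_t in U^t_{gamma(t)}.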
§.§ Utility space 𝒰^mf_ and the main result For each t ∈, the class 𝒰^t_(t) contains (utility) functions with limited variety of characteristics. Therefore, by gathering spaces 𝒰^t_(t) for all t∈ [-∞,∞] one can create a larger class which contains a broader range of utility functions with different attributes. Define 𝒰^Union_:=⋃_t∈ [-∞,∞]𝒰^t_(t), and 𝒰^Union_ (C^1()):=⋃_t∈ [-∞,∞]𝒰^t_(t)(C^1()). In above, for the technical reasons (see Remark <ref>), when (t)=0, we restrict the utility space 𝒰^t_(t) to consist of non-decreasing continuous functions. As it follows from Proposition <ref> (b), although for each t ∈ [-∞,∞], the class 𝒰^t_(t) is a convex cone, however, 𝒰^Union_ is not necessarily closed under (arbitrary) addition. Since the sum of utility functions would naturally result in a new utility function, and expected utilities are linear, hence, the class 𝒰^Union_ alone cannot serve as a candidate for utility class of multi-fractional stochastic order. However, it is simple to overcome the latter issue by considering all the possible positive (finite) linear combinations. Hence, we finally introduce the following (utility) class. [Utility class of MFSD] Let : ℝ→[0,1] be an arbitrary non-decreasing function. A (utility) class for the multi-fractional (1+)-SD is defined as 𝒰^mf_ := _ℝ_+{u : u∈𝒰^Union_} = { u = ∑_finiteλ_k u_k, λ_k ≥ 0, u_k ∈𝒰^Union_}. Finally, we are ready to state the main theorem of this section. First, we state the following simple lemma that we will make use of it in the proof of the main theorem. Let : ℝ→ [0,1] be an arbitrary non-decreasing function. Let F ≼^mf_(1+)-SD G. Fix t∈ℝ. Then, we have ∫_-∞^s( F(x) - G(x) )_-dx ≤(t) ∫_-∞^s( F(x) - G(x) )_+ dx, ∀ s≤ t. This is a direct consequence of the fact that is a non-decreasing functions. Let : ℝ→[0,1] be an arbitrary non-decreasing function. For every two distribution functions X∼ F and Y∼ G, the following statements are equivalent: (a) F ≼^mf_(1+)-SD G. (b) [u(X)]≤[u(Y)] for every u ∈𝒰^mf_. Some parts of proof of Theorem <ref> involve rather mathematical technicalities that are discussed in a great care in <cit.>. In turns out that assumptions of finite means together with the well-definiteness of the quantity [u(X)] - [u(Y)] provide us with the most transparent scenario. Hence, in the proof below we assume that those hypotheses are available without mentioning in each required instance. The implication (b) to (a) is just Proposition <ref>. Hence, we prove (a) implies (b). Due to the linearity of the mathematical expectation and the structure of the elements in the class 𝒰^mf_ it is enough to show that 𝔼[u(X)] ≤𝔼[u(Y)] for every u ∈𝒰^t_(t) where t ∈ (-∞,∞]. Let assume that t=∞. Pick u ∈𝒰^+∞_ = 𝒰^*_. Now, observe that the integral condition (<ref>) clearly yields that the integral condition ∫_-∞^t (F(x)-G(x))_- d x ≤∫_-∞^t(F(x)-G(x))_+ d x, ∀ t∈ℝ, that is F ≼_ (1+ )-SD G and therefore, the result directly follows from <cit.>. Next, let t ∈ℝ and pick an arbitrary u ∈𝒰^t_(t). Since u is constant after t, hence, we need to show that [u(Y)] - [u(X)] =∫_-∞^∞ u(x) dG(x) - ∫_-∞^∞ u(x) dF(x)= ∫_-∞^t ( F(x)-G(x) )du(x)≥ 0. First, we discuss the case (t) ≠ 0. Hereafter, and in the lights of Lemma <ref>, we follow the arguments provided in the proof of Theorem 2 in <cit.> (see also proof of Theorem 2* in <cit.> for more detailed discussion). Now, define transformation T as Ts:=inf{ s^' : (t)∫_-∞^s^'( F(x)-G(x))_+dx = ∫_-∞^s(F(x)-G(x))_-dx } such that (t)∫_-∞^Ts(F(x)-G(x))_+dx = ∫_-∞^s(F(x)-G(x))_-dx, ∀ s≤ t. 
Then, the following holds: (i) Transformation T: s≤ t ↦ T(s) as in (<ref>) exists since the mapping s ↦(t)∫_-∞^s(F(x)-G(x))_+dx is continuous and the integrand ( F(x)-G(x) )_+ ≥ 0. However, T may take value -∞. In fact, let x^* be the first crossing point of F and G (w.l.o.g we can assume exists, otherwise, it means that we are in the first order stochastic dominance setup). Then, clearly, T(s) =-∞ on the interval (-∞,x^*). (ii) Given the integral condition in Lemma <ref>, it is clear that in order for the equality in (<ref>) to hold, Ts≤ s for all s≤ t. (iii) T is a non-decreasing transformation of s since ∫_-∞^s(F(x)-G(x))_-dx is a non-decreasing function of s. Hence, it is differentiable almost everywhere over the set {s≤ t:Ts>-∞}. Differentiating equation (<ref>), one obtains: (t) ( F(Ts)-G(Ts) )_+ T(s)= ( F(s)-G(s) )_- for almost all s≤ t where T(s):= T^'(s) s ∈{s≤ t:Ts>-∞} 1 s ∈{s≤ t:Ts=-∞}. The equality (<ref>) can be verified as follows: if s ∈{s≤ t:Ts>-∞} then it holds immediately since T^'(s) exist for almost any s, if s ∈{s≤ t:Ts=-∞} then the right hand side (F(s)-G(s))_-=0 is equal to the left hand side since by definition (F(Ts)-G(Ts))_+=(F(-∞)-G(-∞))_+=0. Moreover, T is strictly increasing over {s:F(s)< G(s) F(Ts)≠G(Ts)} since T^'(s)=(F(s)-G(s))_-/(t)(F(Ts)-G(Ts))_+>0, a.e. Also, it can be verified from (<ref>) and (<ref>), T is constant (T^'(s)=0) over {s:F(s) ≥ G(s)}∩{s≤ t:Ts>-∞}. Now, we turn back to our main statement. In fact, we shall prove stronger positivity requirement ∫_-∞^s ( F(x)-G(x) ) d u(x)≥ 0 for all s≤ t. Note that, ∫_-∞^s (F(x)-G(x) )d u(x)= ∫_-∞^s (F(x)-G(x))_+ du(x)-∫_-∞^s (F(x)-G(x))_- du(x). Substituting (<ref>) in the integrand of the second integral on the right hand side (<ref>), we obtain ∫_-∞^s (F(x)-G(x)) d u(x)= ∫_-∞^s (F(x)-G(x))_+ du(x)- (t)∫_-∞^s (F(Tx)-G(Tx))_+ T(x) du(x). Recall that T is a non-decreasing function, and hence, the set 𝒳_1={x:T^'(x) does not exits and Tx>-∞} have Lebesgue measure zero. Let 𝒳_2={x:Tx = -∞ or T^'(x)=0}. Then, (t)(F(Tx)-G(Tx))_+ T(x)=0, ∀ x∈𝒳_2. Now, for every s ≤ t, let S={x:x≤ s}∩ ( 𝒳_1 ∪𝒳_2)^c. Intuitively, set S removes the constant parts and the jump discontinues of T, hence, T becomes invertible over S. Moreover, the following holds: (t)∫_-∞^s ( F(Tx)-G(Tx) ) _+ T(x) du(x)=(t)∫_S( F(Tx)-G(Tx))_+ T^'(x) du(x). Consider the term on the right side. Since the integrand is non-negative, Tx≤ x and, u∈𝒰^t_(t), we obtain (t)∫_S( F(Tx)-G(Tx))_+ T^'(x) du(x)≤∫_S(F(Tx)-G(Tx))_+ du(Tx). Recall that T is invertible over S. The change of variable z=Tx on the right hand side of (<ref>) gives: ∫_T(S)( F(z)-G(z) )_+du(z)≤∫_-∞^Ts(F(z)-G(z))_+du(z). Note that S excludes the points before the first crossing, the points where T is constant and the points T has upwards jumps, and thus, T(S)⊂ (-∞,Ts). Next, we are collecting the observation above back in (<ref>) to write ∫_-∞^s (F(x)-G(x)) d u(x)≥ ∫_-∞^s (F(x)-G(x))_+ du(x)-∫_-∞^Ts(F(x)-G(x))_+ du(x) = ∫_Ts^s(F(x)-G(x))_+ du(x) ≥ 0. When (t)=0, this implies that u ∈𝒰^t_(t) is a non-decreasing function over the interval (-∞,t) and becomes constant on the interval (t,∞). However, this scenario reduces to the first order stochastic dominance on the interval (-∞,t), for details see <cit.> . (i) Clearly, if the non-decreasing function ≡γ∈ [0,1] be constant, then 𝒰^mf_= 𝒰^*_γ given by relation (<ref>). In addition, for a non-decreasing function , the inclusions 𝒰^*_⊆ 𝒰^mf_ ⊆𝒰^*_ hold and, both inclusions being strict as soon as is a non-constant (see Theorem <ref>, part (a)). 
From the utility theory perspective, the second set inclusion means that the utility class 𝒰^mf_ provides us a more informative class compare to utility class 𝒰^*_. However, the first inclusion implies that the utility class 𝒰^mf_ represents the preferences of a broader class of decision makers. (ii) Due to the linear structure of 𝒰^mf_ and, for every t, observing that 𝒰^t_(t)⫋𝒰^*_(t), while multi-fractional (1+)-SD establishes stochastic dominance over 𝒰^mf_ that might possibly there is no dominance (in the sense of fractional dominance) in any 𝒰^*_γ for all γ< (also see Example <ref>). (iii) Therefore, for a given parameter γ, one can choose a function that satisfies (t) ≤γ for all t∈ to further expand the group of investors from 𝒰^*_γ to 𝒰^mf_. This will ensure that individuals in 𝒰^*_γ also agree on the ordering of random variables under (1+)-SD, since 𝒰^*_γ is a subset of 𝒰^mf_. Similarly, choosing a function with (t) ≥γ for all t∈ will result in a smaller set of decision makers, 𝒰^mf_⊂𝒰^*_γ, who unanimously agree on the ordering of random variables in the (1+γ)-SD sense. Hence, MFSD can be utilised to further interpolate between FSD and (1+γ)-SD by choosing a function that satisfies (t) ≤γ for all t∈, or between (1+γ)-SD and SSD by choosing a function that satisfies (t) ≥γ for all t∈. In this remark we discuss possible utility classes that might generate multi-fractional stochastic dominance. In order to do this, first, we adjust ourself within the unified approach introduced in <cit.>. Since, Proposition <ref> requires existence of the finite first moments, hence, a natural function b:→ [1,∞) to consider is given by b(s)=| s | +1. This means that the set ℬ_b consists of measurable functions with linear growth and _b is the set of all the probability measures on the real line with finite first moment. Let : → [0,1] be a non-decreasing function and consider the set 𝔉 = 𝒰^mf_∩ℬ_b (for every given arbitrary function , it is always possible to construct a function u ∈𝒰^mf_ which is not of linear growth). For every two F,G ∈_b define F ≼_𝔉 G if ∫ u dF ≤∫ u dG, for all u ∈𝔉. Clearly, 𝔉 is a convex cone containing the constant functions and being closed under the topology of pointwise convergence, hence, the maximal generator ℜ_𝔉 coincides with that of the generator 𝔉. Typically, larger generators (i.e. maximal generator) are favoured for applications. However, when it comes to studying the mathematical properties of stochastic dominance rules, a smaller set that produces the same ordering as the original generator is more desirable. There are two common approaches to construct smaller generators (a) smooth generator and (b) base generators. It is well-known that (see <cit.>) being closed under convolution is a sufficient condition for a stochastic integral dominance to admit a smooth generator. However, later on, we will show that the utility class 𝒰^mf_ is not closed under translations. Furthermore, according to <cit.>, each base (for definition, see Section 2.5, page 75, <cit.>) of 𝔉 also generates stochastic dominance ≼_𝔉. However, at this moment, it is not clear to us how to find such a base for given 𝔉 as above. It is worth to mention that the sets {1_[t,∞):t∈} and {(x-t)_-:t∈} constitute a basis for the first and second stochastic dominances respectfully. This can be demonstrated through their expectations, which can be expressed for each given t∈ as follows: _G[1_[t,∞)]-_F[1_[t,∞)]=F(t)-G(t)≥ 0, ∀ t ∈ and _G[(x-t)_-]-_F[(x-t)_-]= ∫_-∞^t [F(x)-G(x)]dx ≥ 0, ∀ t ∈. 
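Both displayed identities for the basis functions are easy to confirm on concrete data. The Python sketch below uses two hypothetical two-point distributions and the convention (x-t)_- = min(x-t, 0); the atoms, weights and the evaluation points t are arbitrary choices, with t taken at continuity points of F and G to avoid ambiguities at atoms.

```python
import numpy as np

# Hypothetical two-point distributions, used only to illustrate the two identities.
atoms_X, probs_X = np.array([0.2, 0.8]), np.array([0.5, 0.5])   # X ~ F
atoms_Y, probs_Y = np.array([0.5, 1.0]), np.array([0.5, 0.5])   # Y ~ G

def cdf(x, atoms, probs):
    return probs[atoms <= x].sum()

def expect(fun, atoms, probs):
    return sum(p * fun(a) for a, p in zip(atoms, probs))

def lower_partial(t, atoms, probs, lo=0.0, n=100001):
    """int_{-inf}^t F(x) dx; both CDFs vanish below lo = 0 in this example."""
    xs = np.linspace(lo, t, n)
    return np.trapz([cdf(x, atoms, probs) for x in xs], xs)

for t in [0.3, 0.6, 0.9]:            # continuity points of both F and G
    # indicator basis:  E_G[1_{[t,inf)}] - E_F[1_{[t,inf)}]  versus  F(t) - G(t)
    ind = lambda x, t=t: 1.0 if x >= t else 0.0
    d_ind = expect(ind, atoms_Y, probs_Y) - expect(ind, atoms_X, probs_X)

    # "angle" basis (x - t)_- = min(x - t, 0):
    #   E_G[(x-t)_-] - E_F[(x-t)_-]  versus  int_{-inf}^t (F(x) - G(x)) dx
    neg = lambda x, t=t: min(x - t, 0.0)
    d_neg = expect(neg, atoms_Y, probs_Y) - expect(neg, atoms_X, probs_X)

    print(t,
          round(d_ind, 4), round(cdf(t, atoms_X, probs_X) - cdf(t, atoms_Y, probs_Y), 4),
          round(d_neg, 4), round(lower_partial(t, atoms_X, probs_X)
                                 - lower_partial(t, atoms_Y, probs_Y), 4))
```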
§.§ Examples Next, we continue with a few illustrating examples to gain better understanding of the utility class. (a) Assume that (t) =γ is a constant function. Then, we have, 𝒰^Union_ = 𝒰^*_γ ={ u:→ : 0 ≤γu(x_4) -u(x_3)/x_4-x_3≤u(x_2)-u(x_1)/x_2-x_1, ∀ x_1 < x_2 ≤ x_3 <x_4 }. coincides with the utility space 𝒰^*_γ introduced in <cit.>. This is a direct consequence of inclusion 𝒰^t_(t)⊆𝒰^*_γ for every t. (b) Let γ_1 < γ_2 be two real numbers belong to the interval (0,1). Consider the step function (t) = γ_1 t ≤ t_1 γ_2 t > t_1 We have, 𝒰^Union_ = 𝒰^*_γ_2∪𝒰^t_1_γ_1. (c) This time, let γ_1 < γ_2< γ_3 be three real numbers belong to the interval (0,1). Consider the step function (t) = γ_1 t ≤ t_1 γ_2 t_1 < t ≤ t_2 γ_3 t > t_2. Then, we have 𝒰^Union_= 𝒰^*_γ_3∪𝒰^t_2_γ_2∪𝒰^t_1_γ_1. This example highlights the local interpolation phenomenon that we previously discussed in Example <ref>, particularly within the structure of the corresponding utility class 𝒰^mf_. Specifically, for the function given as in (<ref>), the utility class 𝒰^t_1_γ_1 contains the most non-concave functions in 𝒰^Union_ that are captured by the fractional (1+γ_1)-SD over the interval (-∞,t_1]. Similarly, utility functions that belong to 𝒰^t_2_γ_2 display a higher degree of non-concavity than those in 𝒰^*_γ_3. Consequently, while (1+γ_2)-SD over (-∞,t_2] captures u ∈𝒰^t_2_γ_2, those u ∈𝒰^*_γ_3 corresponds to the somewhat weaker order (1+γ_3)-SD over the whole real line (-∞,∞). In general, we have the following: Let γ_1 < γ_2 <⋯ <γ_n be n real numbers belonging to the interval [0,1]. Consider function (t) = γ_1 t ≤ t_1 γ_2 t_1 < t ≤ t_2 ⋮ γ_n-1 t_n-2 < t ≤ t_n-1 γ_n t > t_n-1. Then, we have 𝒰^Union_ = ( ⋃_k=1^n-1𝒰^t_k_(t_k)) ⋃𝒰^+∞_ =( ⋃_k=1^n-1𝒰^t_k_γ_k) ⋃𝒰^*_γ_n. The purpose of this example is to provide yet another capability of multi-fractional stochastic dominance through its utility space. In certain occasions, one may require a given family of utility functions sharing particular features belongs to a single utility space corresponding to a given stochastic dominance rule. Next, we construct a parametrized family 𝒰_Θ of utility functions such that the only single utility space 𝒰^*_γ, containing 𝒰_Θ, associated to the fractional (1+γ) stochastic dominance corresponds to first order stochastic dominance, namely that γ=0 whereas one can capture 𝒰_Θ with a multi-fractional utility space 𝒰^mf_ that is a strict subset of the utility space 𝒰_FSD and hence resulting in a stochastic dominance rule in which a larger set of distribution functions can be ordered. Let θ^* >0 be fixed. Let Θ = (θ^*,∞) and consider a function : Θ→ (0,1] so that (i) it is non-decreasing, (ii) (θ) < θ for every θ∈Θ, and (iii) lim_θ→θ^* (θ) =0. For every θ∈Θ, we define a utility function u_θ via its right derivative as u'_θ (x) = 2 (θ) x ≤ (θ), 1 + (θ) (θ) < x ≤θ, (θ) x > θ. Then, it is simple to observe that u_θ∈𝒰^*_(θ) (see (<ref>) for definition) for each θ∈Θ. Let 𝒰_Θ : ={ u_θ : θ∈Θ}. Now, for a moment assume that for a given constant γ∈ [0,1] the set inclusion 𝒰_Θ⊆𝒰^*_γ holds. Then, property (iii) above dictates that γ =0 must be. In other words, the only possibility to capture all the utility functions u_θ in a single utility space 𝒰^*_γ associated to the fractional (1+γ)-SD corresponds to the first stochastic dominance. On the other hand, define the non-decreasing function (θ)= 0 θ≤θ^*, (θ) θ > θ^*. Now, we claim that 𝒰_Θ⊆𝒰^mf_⊂𝒰_FSD where clearly the second inclusion is strict. 
For each θ∈Θ, define utility functions v_θ and w_θ via their right derivatives as v'_θ (x) = (θ), and w'_θ (x) = (θ) x ≤(θ), 1 (θ) < x ≤θ, 0 x > θ. Then, clearly, v_θ, w_θ∈𝒰^mf_, and therefore, u_θ = v_θ + w_θ∈𝒰^mf_ for each θ in which proves the validity of the first inclusion. To finalize this example through visualization, we shall specify (θ). We set θ^*=1 and choose (θ)=1-e^(1-θ). It is apparent that it is a non-decreasing function, where 1-e^(1-θ)< θ for every θ∈Θ, lim_θ→θ^* 1-e^(1-θ)=0, and lim_θ→ 1 1-e^(1-θ)=1. Now, consider exponential random variable E ∼Exp(1) and let X = E+1 with distribution function given as F_X(θ) = (θ)= 0 θ < 1, 1-e^(1-θ) θ≥ 1. and u'_θ (x) = 2 (1-e^(1-θ)) x ≤ 1-e^(1-θ), 2-e^(1-θ) 1-e^(1-θ) < x ≤θ, 1-e^(1-θ) x > θ. § MFSD: ECONOMICAL ASPECTS Definition <ref> provides us with a rather abstract form for the utility class, making it rather challenging to determine which type of utility functions it encompasses and moreover how it should be understood within the context of utility theory. Therefore, in the current section, we introduce a larger (utility) space containing multi-fractional utility class and is particularly useful in the various decision making applications. §.§ Utility space 𝒰^cv_ First, we introduce a derivative-relation property that it's root goes to the basis utility functions u_t introduced in Lemma <ref>. [D()-property] Let :→[0,1] be a non-decreasing function. We say that a continuous function u:→ satisfies in D() property if either left or right one-sided derivatives exist at any point and, moreover 0 ≤ (y) u'_±(y) ≤ u'_±(x), ∀ x ≤ y. (Clarify that relation (<ref>) is a compact writing form of relations 0 ≤ (y) u'_-(y) ≤ u'_-(x), or 0 ≤ (y) u'_+(y) ≤ u'_+(x) for all x ≤ y depending on whether left or right derivatives are under consideration.) Also, we set 𝒰^cv_ := { u: → : D() property holds}. Before we continue, let us emphasize the following technical point. Given an arbitrary non-decreasing function , according to Proposition <ref> item (a) every function u∈𝒰^t_(t) has left and right derivatives at every point (and hence continuous) as soon as (t) ≠ 0. However, this is not true for all functions in 𝒰^mf_, and in fact, for the utility functions in ⋃_t∈ (-∞,t_0)𝒰^t_(t) where t_0 := sup{ t : (t)=0}, the existence of left and right derivatives at each point is not necessary guaranteed. Therefore, for the practical purposes, throughout this section, we make the convention that each function u∈𝒰^t_(t) with (t)=0 is either left or right differentiable at every point in (although, based on Proposition <ref>, we know that those utility functions are almost everywhere differentiable). While this assumption may not always hold in general, from a practical perspective, it does not affect the validity or usefulness of the forthcoming discussion. 𝒰^mf_⊆𝒰^cv_. Due to the linear structure of the elements in 𝒰^mf_, it is enough to show that 𝒰^t_(t), 𝒰^∞_⊆𝒰^cv_ for every real t. Let u ∈𝒰^t_(t). There are two possibilities: (i) (t)≠ 0. In this case, u admits one-sided derivatives at every point, and moreover, by very definition, we have 0≤ (t) u'_+(y) ≤ u'_+(x) for all x ≤ y. Since is non-decreasing, and u'_+(x)=0 for each x ≥ t, we have 0 ≤(y) u'_+(y) ≤ u'_+(x) for all x ≤ y. (ii) the argument for the case (t)=0 is the same as in (i) under Convention <ref>.The case u ∈𝒰^∞_ is similar. 
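In practice, checking that a given piecewise-linear candidate satisfies the D()-property introduced above reduces to comparing the scaled right derivative at y with the running minimum of the derivative to the left of y, which is easy to test numerically. In the sketch below the step function (written rho in the code, standing for the non-decreasing function of the dominance) and the derivative u_prime are hypothetical choices made only for this check.

```python
import numpy as np

def rho(x):
    """A hypothetical non-decreasing step function with values in [0, 1]."""
    return 0.2 if x <= 0.0 else (0.5 if x <= 1.0 else 0.9)

def u_prime(x):
    """Right derivative of a hypothetical piecewise-linear utility u."""
    if x <= 0.0:
        return 1.0
    if x <= 0.6:
        return 1.8      # a locally convex (steeper) segment
    if x <= 1.0:
        return 1.0
    return 1.1          # a second, milder increase of the slope

# D(rho) property on a grid:  0 <= rho(y) * u'(y) <= u'(x)  for all x <= y,
# i.e. rho(y) * u'(y) must not exceed the running minimum of u' to the left of y.
xs = np.linspace(-1.0, 2.0, 3001)
running_min = np.inf
ok = True
for y in xs:
    running_min = min(running_min, u_prime(y))
    if not (0.0 <= rho(y) * u_prime(y) <= running_min + 1e-12):
        ok = False
        print("D(rho) property violated at y =", y)
        break
print("u satisfies the D(rho) property on the grid:", ok)   # True for this choice
```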
(i) Although, the space 𝒰^cv_ is the natural choice towards extension of the utility space 𝒰^*_γ corresponding to the fractional (1+γ)-SD to the context of the multi-fractional (1+)-SD, however, one has to note that, in general, 𝒰^cv_⊈𝒰^mf_. In fact, we constructed an insightful example (see Section <ref>) of a function u ∈𝒰^cv_ so that u ∉𝒰^mf_. (ii) Always 𝒰^cv_≠∅ is a non-empty set and in fact it includes every increasing globally concave function. Moreover, 𝒰^cv_ is a positive cone but it is not a ring under the composition operator (consider as multiplication). (iii) One advantage of Proposition <ref> is that it provides us with simple tools to check whether a given function u is not a utility function of multi-fractional stochastic dominance, although the question of finding easy-to-check tests to verify a given function is a utility function in the context of multi-fractional stochastic dominance remains valid. Later, in Section <ref> we will discuss and provide economical criteria (through the notion of the local greediness) to verify a given function u belongs to utility space 𝒰^mf_. (iv) In fact, a careful investigation of the basis utility functions u_t and the utility space 𝒰^t_(t) reveal that the desirable utility space consists of functions satisfying in a more restrictive 𝒟() property, namely that 0 ≤ (y) u'_+(y) ≤ u'_+(x), 0 ≤ (y) u'_-(y) ≤ u'_-(x), 0 ≤ (y) u'_-(y) ≤ u'_+(x), 0 ≤ (y) u'_+(y) ≤ u'_-(x), ∀ x < y 0 ≤ (y) u'_+(x) ≤ u'_-(x), ∀ x. However, the utility space consisting of those functions satisfying in relations (<ref>) is rather restrictive in order to pay the way to build a general theory. Hence, we will work within the extended framework of the utility space 𝒰^cv_. [Multi-fractional concave stochastic dominance (MFSD(cv))] Let :→ [0,1] be a non-decreasing function. Assume that X ∼ F and Y ∼ G. We say that G dominates F in the sense of concave multi-fractional (1+)-SD, denote by F ≼^mf-cv_(1+)-SD G if [ u(X)] ≤[ u(Y)], ∀ u ∈𝒰^cv_. (i) It is clear that 𝒰^*_⊆𝒰^mf_⊆𝒰^cv_⊆𝒰^*_. Hence, in general, multi-fractional concave SD is stronger than multi-fractional SD. In the terminologies of Section <ref>, we have the set inclusion E_mf-cv() ⊆E_mf (). (ii) Up to now, whether multi-fractional concave stochastic dominance admits an integral form remains as an open problem, and it is under further investigation. The purpose of this example is to provide a parametrized family 𝒰_Θ so that the only possibility that 𝒰_Θ⊆𝒰^*_γ for some constant γ∈ [0,1] is to have γ=0 (hence, corresponding to the utility class of the FSD). However, one can find a suitable non-decreasing function so that 𝒰_Θ⊆𝒰^cv_≠𝒰_FSD and more importantly 𝒰_Θ⊈𝒰^mf_. Let Θ=(0,1) and consider a function : Θ→ (0,1) with lim_θ→ 0(θ)=0. For every θ∈Θ, we define a utility function u_θ via its right derivative as u'_θ(x)= u_θ,1^'(x) -∞< x < θ, x θ≤ x ≤√(θ) , u_θ,2^'(x) √(θ)< x < ∞, where u_θ,1^' and u_θ,2^' are arbitrary non-negative and non-increasing functions, provided they meet the requirements of u_θ being continuous. Additionally, inf_x∈u_θ,1^'(x)=√(θ) while sup_x∈ u_θ,2^'(x)=θ. Thus, for a given θ, it is straightforward to see that u_θ∈𝒰^*_γ for some constant γ (i.e, we have 0 ≤γ u'_θ(y) ≤ u'_θ (x), ∀ x≤ y) if and only if the condition √(θ)≥γ holds true. Let 𝒰_Θ : ={ u_θ : θ∈Θ}. Now, observe that, for some constant γ, the validity of the set inclusion 𝒰_Θ⊆𝒰^*_γ dictates that γ=0 must be. This conclusion directly follows from (<ref>) when θ→ 0. 
Hence, the only scenario where u_θ for every θ∈Θ included within a single utility space associated with the fractional (1+γ)-SD corresponds to the utility class 𝒰_FSD = 𝒰^*_γ=0. Next, consider the continuos non-decreasing function (x) = 0 x < 0 , x 0 ≤ x ≤ 1, 1 x >1. We first show that 𝒰_Θ⊆𝒰^cv_. In order to do this, note that, for each θ∈Θ, the condition θ≥ u'_θ(y) (y) = y × y , ∀ y ∈ [θ, √(θ)] is both necessary and sufficient for the validity of u_θ∈𝒰^cv_ in which obviously holds true. Lastly, we show that 𝒰_Θ⊈𝒰^mf_. Fix θ∈Θ. First, when y=√(θ) and x=θ< y, it follows from (<ref>) that 1/(y)=1/√(θ)=u_θ'(y)/u_θ'(x). Next, we discuss that u_θ∉𝒰^Union_. Note that u_θ is not globally concave function and hence u_θ∉𝒰^+∞_ =1. Now, by contradiction, assume that there exists t∈ such that u_θ∈𝒰^t_(t). Therefore, based on the equality in (<ref>), this is only possible when t=√(θ). However, this leads to a contradiction, as the derivative of the subsequent concave segment u'_2,θ of u_θ is not necessarily equal to 0 after √(θ). Hence, u_θ can only belong to 𝒰^mf_ if it can be represented as a linear combination of functions that belong to 𝒰^Union_. For a moment, assume that u_θ = λ_1 v_1 + λ_2 v_2 where v_1 ∈𝒰^t = √(θ)_(t) =√(θ) and v_2 ∈𝒰^*_1. Then, from the mediant inequity (<ref>) we obtain u_θ'(√(θ))/u_θ'(θ)=λ_1v_1(√(θ))+λ_2v_2(√(θ))/λ_1v_1(θ)+λ_2v_2(θ)=λ_1√(θ)+λ_2v_2(√(θ))/λ_1θ+λ_2v_2(θ)<1/√(θ) since v_2(√(θ))/v_2(θ)≤ 1 for any v_2∈𝒰^*_1 by definition in which violates (<ref>). The general case can be discussed in a similar way relying on mediant inequality and the facts that any linear combination should contain at least one piece from the utility space 𝒰^√(θ)_√(θ) and another piece of 𝒰^*_1. §.§ Local Greediness In this section, we introduce the novel notion of local greediness of a utility function at an arbitrary given point x∈ that provides a meaning to the function from a decision-making perspective in the multi-fractional stochastic dominance framework. The notion of local greediness (local non-concavity) at the location x∈ characterizes the greediness behaviour of decision-makers over half-open intervals (x,∞) rather than the entire real line as in <cit.>. They define the index of global greediness (non-concavity) for a strictly increasing function u:→ as follows: G_u:=sup _x_1<x_2 ≤ x_3<x_4(u(x_4)-u(x_3)/x_4-x_3 / u(x_2)-u(x_1)/x_2-x_1). It is known that always G_u ≥ 1 and G_u =1 if and only if u is a concave function. Moreover, when u is differentiable, it is true that G_u = sup_x ≤ y u'(y)/ u'(x) (see <cit.>). In this regard, the index G_u globally determines the maximal magnitude of the increase in the slope of the utility function (marginal utility) over its entire domain. Let γ∈ [0,1] be a constant. Consider the utility class 𝒰^*_γ corresponding to fractional (1+γ)-SD. It can be easily seen that the parameter γ bounds the level of greediness (non-concavity) G_u for all u∈𝒰^*_γ through the inequality G_u≤ 1/γ. Inspired from <cit.>, we define a local version of the greediness index as the following. [Discrete local greediness] Let u be a strictly increasing function. The discrete local greediness at location x ∈ is defined as: G^d_loc(u;x):=sup_x < x_1<x_2≤ x_3<x_4(u(x_4)-u(x_3)/x_4-x_3/u(x_2)-u(x_1)/x_2-x_1). (i) In the context of decision theory, G_u serves as a global measure, capturing the maximal proportional increase in a decision maker's valuation of an additional cent as wealth transitions from lower to higher values. 
Hence, G_u takes into account all possible values of wealth. This makes it a natural notion for measuring the global greediness of a decision maker over . Building on the concept of G_u, for a fixed wealth level x∈, the measure G^d_loc(u;x) quantifies the maximum amplification of a decision maker's valuation of an extra cent within a specific region — specifically when their wealth exceeds x, i.e., for (x,∞)⊂. We direct the reader to <cit.> for related concepts of local greediness within the framework of cumulative prospect theory. (ii) The discrete local greediness G^d_loc can be well-defined for functions not necessary being strictly increasing. For example, it fully makes sense to talk about G^d_loc(u;·) for functions u ∈𝒰^cv_ which contains functions which allows for non strictly increasing segments. We will discuss it in a great details later in Remark <ref>. (a) Clearly, G^d_loc(u;·) ≤ G_u where the global index of the greediness G_u is given by (<ref>). (b) Function x ↦ G^d_loc(u;x) is decreasing, i.e., G^d_loc(u;x_2) ≤ G^d_loc(u;x_1) for x_1 ≤ x_2. Moreover, G^d_loc(u;x) → G_u as x → -∞. (c) For every x, G^d_loc(u;x) ≥ 1 and, G^d_loc(u;x)=1 if and only if u is concave on (x,∞). In addition, u is concave if and only if G^d_loc(u;x)=1 for every x. (d) Function x ↦ G^d_loc(u;x) is right continuous. Items (a) and (b) are obvious. Next, we prove (c). Since u is strictly increasing, hence for every given x, there is a point y = y_x ∈ (x ,∞) that u is differentiable at y. Hence, by letting x_1 → y^- and x_4 → y^+ we arrive to the claim. The validity of the second statement is just rely on the basic definition of concave function in terms monotonicity of quotients. Lastly, we show part (d). Fix x_0 ∈. Let x_n → x^+_0. Then, based on parts (b), (c) and moving to a subsequence, we have lim_n→∞ G^d_loc(u;x_n) ≤ G^d_loc(u;x_0). Now, let ε >0 be arbitrary. Then, there exist four points x_0 < x_1 < x_2 ≤ x_3 < x_4 such that G^d_loc (u;x_0 ) - ε < u(x_4)-u(x_3)/x_4-x_3/u(x_2)-u(x_1)/x_2-x_1. On the other hand side, there exists n_0 ∈ such that x_n < x_1 for every n ≥ n_0. Therefore, sup_n ≥ n_0 G^d_loc(u;x_n) ≥u(x_4)-u(x_3)/x_4-x_3/u(x_2)-u(x_1)/x_2-x_1. Hence, by part (b), we obtain lim_n→∞ G^d_loc(u;x_n) > G^d_loc (u;x_0 ) - ε. Letting ε→ we achieve the claim. Although the discrete local greediness G^d_loc provides us with a measure of greediness attached to the location however it is not well suited to work with functions of our interest in the space 𝒰^cv_. This is due to the fact that functions in the latter space are defined via properties of the one-sided derivatives. Hence, we take a different route to define a novel notion of the local nature which is more compatible with the 𝒰^cv_ structure. First, we need the following simple result taken from <cit.> that opens the door whenever continuity assumption is available. Let u is a continuous function with one-sided left derivatives u'_- exist at each point x ∈. Then, u is concave if and only if u_-'(x) ≥ u_-'(y), for all x≤ y. The similar conclusion holds true with the right derivative u'_+. Proposition <ref> propels us to define the following local measure of greediness that provides us with a continuous version of that given in Definition <ref>. [Continuous local greediness] Let u be a strictly increasing function with either left or right derivatives exist at each point x ∈. We define the continuous local greediness at location x as G^c_loc(u;x):= sup_x < y < zu'_±(z)/u'_±(y). 
(i) There is no ambiguity in Definition <ref> in the sense that when function u admits both left and right derivatives at every point, then one can straightforwardly show that the definition of continuous local greediness G^c_loc (u;·) is independent of the choice of one-sided derivatives. (ii) One has to note that the continuous local greediness is invariant under any linear transformation. Now, we are ready to state that two notions of discrete and continuous local greediness coincide in our framework. Let u be a continuous, strictly increasing and either left or right derivatives exist at each point. Then G^d_loc(u,x)=G^c_loc(u,x), for all x ∈. It is clear that G^c_loc(u;x)≤ G^d_loc(u;x). Thus, to complete the proof, it remains to show G^c_loc(u;x)≥ G^d_loc(u;x). We assume that u is right differentiable at any points. The case of left differentiable is the same. Let ε>0. Then, by the definition of G^d_loc(u,x), there exist four points, x ≤ x_1<x_2≤ x_3<x_4 such that G:=(u(x_4)-u(x_3)/x_4-x_3/u(x_2)-u(x_1)/x_2-x_1) > G^d_loc(u;x)-ε. On the other hand, by Theorem <ref>, there exist c_1, c_2∈ (x_1,x_2) and c_3,c_4 ∈ (x_3,x_4) such that u'_+(c_1)≤u(x_2)-u(x_1)/x_2-x_1≤ u'_+(c_2) and u'_+(c_3)≤u(x_4)-u(x_3)/x_4-x_3≤ u'_+(c_4) Hence, G^c_loc(u;x)≥sup_x < y<zu'_+(z)/u'_+(y)≥u'_+(c_4)/u'_+(c_1)≥ G > G^d_loc(u,x)-ε. By letting ε→ 0, we complete the proof. Hereafter, in our framework, Proposition <ref> allows us to use notation G_loc(u;·) interchangeably without any ambiguity. Let u_1,…,u_n be n strictly increasing continuous functions with either left or right derivatives exist at each point x ∈. Then, for non-negative constants λ_1,…, λ_n, the following statements hold: (a) For every x, we have G_loc (λ_1 u_1+ ⋯+λ_n u_n ;x) ≤max_1 ≤ k ≤ n G_loc(u_k;x). (b) For every x, G_loc(·;x) is a sub-linear functional in the sense that G_loc (λ_1 u_1+ ⋯+λ_n u_n;x) ≤∑_k=1^n G_loc(u_k;x). (a) It is a direct application of the generalized mediant inequality, see Lemma <ref>. (b) It is also a direct application of part (a) and the fact that max{ a,b }≤ a+b for every two non-negative real numbers. It is worth to highlight that one can talk about the both notions of local greediness in the context of the (utility) space 𝒰^cv_ where in general the assumption of strict increasing may lost. For example, note that from the set condition 0≤ (y) u_±' (y) ≤ u_±'(x), ∀ x≤ y one can easily see that if u_±'(x)=0 then u_±'(y)=0 for all y>x. Therefore, when considering the ratio u_±'(y)/u_±'(x), x≤ y, the only problematic scenario we may encounter is when we have a 0/0 case. Hence, the convention 0/0=1 removes all the ambiguities. However, this convention does not affect the measurement of non-concavity, as the lower bound, 1 is reached if (only if) the function is concave and constant functions are concave. In addition, due to convention above, Proposition <ref> remains valid for the functions chosen from the utility space 𝒰^cv_. We will make use of this several instance later on in Section <ref>. Let u ∈𝒰^cv_. Then, for every x, G^d_loc(u;x)=G^c_loc(u;x)=: G_loc(u;x) and moreover, G_loc (u;x) ≤1/(x). The bound in (<ref>) follows directly from the set condition (<ref>) and the fact that is a non-decreasing function. (i) In contrast to fractional (1+γ)-SD, the multi-fractional (1+)-SD provides local upper bounds to the steepness of the convex segments depending on the location, let's x. 
Under the assumption that is a non-decreasing, the utility function u ∈𝒰^mf_ (and hence u ∈𝒰^cv_) is allowed to have less steeper convex segments as one moves to the right hand side of the real axis. This is perfectly in line with the local interpolation property of MFSD (see Example <ref>). Essentially, the non-decreasing function enforces a stronger dominance (closer to FSD) on the lower values of the real line, while the dominance progressively weakens (closer to SSD) as we move towards the higher values on the real line. (ii)Analogous to the fact that fractional (1+)-SD controls the global index of greediness (see (<ref>)), in our context, the multi-fractional (1+)-SD controls from above the local greediness according to Corollary <ref>. (iii) The multi-fractional (1+)-SD more informatively (locally) bounds the size of upward jumps in the derivative u^' (of the utility function u ∈𝒰^mf_) at kink points based on the position of the kink. For example, let x^*_1<x^*_2 be two points where u has kinks, then u_+^'(x^*_1)/u_-^'(x^*_1)≤1/(x^*_1) and u_+^'(x^*_2)/u_-^'(x^*_2)≤1/(x^*_2), where we have (x^*_2) ≥(x^*_1). Such a feature is not available in the context of fractional (1+γ)-SD and the ration 1/γ provides an universal bound independent of the location of kinks. Therefore, in summary, we can conclude that the MFSD approach makes use of the additional information about the location of the non-concave segments in the utility function u to carry out a more precise interpolation between first and second-order stochastic dominance rules. This, in turn, results in a more informative decision making rule. We end this section with the introducing the greediness metric on the utility space 𝒰^cv_. We expect that such metric provides us with new machinery to study multi-fractional utility functions. This subject is under further investigation. Let u,v ∈𝒰^cv_. We define the greediness distance d_G as follows: d_G (u,v) : = sup_x ∈| G_loc(u;x) - G_loc(v;x) |. (𝒰^cv_, d_G) is a pseudometric space. Clearly, d_G(u,v) ≥0, d_G(u,u)=0 and d_G(u,v)= d_G(v,u). Also, the triangular inequality d_G(u,v) ≤ d_G(u,w) + d_G(w,v) is a direct consequence of the triangular inequality on the real line. Lastly, one has to clearly note that d_G(u,v)=0 does not necessary imply that u=v (a simple scenario is to take v(x) = u(L(x)) where L is a any linear transformation). §.§ An illustrating (counter) example In this section, we construct a non-decreasing function and a function u∈𝒰^cv_ so that u ∉𝒰^mf_. In addition, we build two distribution functions F and G so that for the constructed function u the non-negativity requirement _G[u] - _F[u] ≥ 0 fails. This observation has a direct consequence that the class 𝒰^cv_ cannot serve (although as one may naturally expect) as a utility class for the multi-fractional (1+)-SD. Consider the following non-decreasing step function (x) = 0 x < 0, 1/4 0≤ x ≤ 2/4, 2/3 2/4 < x ≤ 1, 1 1 < x, and the distribution functions: F (x) = 0 x < 0, 4/10 0≤ x <2/4, 7/10 2/4 ≤ x < 1 , 1 1 ≤ x, and G(x) = 0 x <1/4, 5/10 1/4≤ x <3/4, 1 3/4 ≤ x , The graphical representations of F, G, and are given below: It is straightforward to check that F ≼^mf_ (1+ )-SD G, i.e., the integral condition (<ref>) holds. 
In fact, F and G have crossing points at 1/4, 2/4 and 3/4 and the following integral condition is in place ∫_1/4^2/4( F(x) - G(x) )_- dx = ∫_1/4^2/4 (1/10) dx ≤1/4_(2/4)∫_0^1/4 4/10 dx = 1/4 ∫_0^1/4( F(x) - G(x) )_+dx ∫_0^1( F(x) - G(x) )_- dx = ∫_1/4^2/4 1/10 dx+∫_3/4^1 3/10 dx ≤(1) ∫_0^1( F(x) - G(x) )_+dx = 2/3 {∫_0^1/4 4/10 dx + ∫_2/4^3/4 2/10 dx } . Next consider the following continuous function defined through its right derivative: u^'(x)= c 0≤ x <1/4, 4c 1/4≤ x < 2/4 , c 2/4 ≤ x < 3/4, 3c/2 3/4 ≤ x ≤ 1 where c can be taken any positive real number (its value is immaterial). Note that u satisfies the condition (y) u'(y) ≤ u'(x), for every x≤ y, where hereafter all the derivatives stand for the right derivative. To see this, observe that, (i) If 0≤ x <1/4, then we have u^'(x)/u^'(y)= 1 0≤ y <1/4, 1/4 1/4≤ y < 2/4 , 1 2/4 ≤ y < 3/4, 2/3 3/4 ≤ y ≤ 1 therefore, u^'(x)/u^'(y)≥ 1/4 on [0,2/4) and, u^'(x)/u^'(y)≥ 2/3 on [2/4,1]. (ii) If 1/4≤ x <2/4, then it is straightforward to see that the condition holds since 4c>c and 4c>3c/2. (iii) If 2/4≤ x <3/4, then u^'(x)/u^'(y)= 1 2/4 ≤ y < 3/4, 2/3 3/4 ≤ y ≤ 1 thus, u^'(x)/u^'(y)≥ 2/3 on [2/4,1]. However, we have _G[u]-_F[u]= c[∫_0^1/4 4/10 dx+ ∫_2/4^3/4 2/10 dx-4∫_1/4^2/4 1/10 dx - 3/2 ∫_3/4^1 3/10 dx ] = -c/16 <0. The reason is that although function u is continuous and it is differentiable everywhere except countably many points, it cannot be written as a non-negative linear combination of the elements in 𝒰^Union_. To see this first note that 𝒰^Union_=𝒰^t=2/4_(2/4)=1/4∪𝒰^+∞_ = 𝒰^2/4_1/4∪𝒰_2/3. Then, clearly u ∉𝒰^2/4_1/4 since u^'(x) ≠ 0 for x>2/4. Also, u ∉𝒰_2/3, since u^'(x)/u^'(y)= 1/4 ≱2/3 on [0,2/4). Thus, u∉𝒰^Union_. Next, we show that u ∉𝒰^mf_. For now, let's assume that u = λ_1 u_1 + λ_2 u_2 where (without loss of generality) u_1 ∈𝒰^2/4_1/4 and u_2 ∈𝒰_2/3 and λ_1, λ_2 >0. Then by the linearity of differentiation we have the following system of equations: u^'(x)/u^'(y) =1/4=λ_1 u_1^'(x) + λ_2u_2^'(x)/λ_1 u_1^'(y) + λ_2 u_2^'(y), x <1/4≤ y <2/4 u^'(x)/u^'(y) =4=λ_1 u_1^'(x) + λ_2u_2^'(x)/λ_2 u_2^'(y), 1/4 ≤ x<2/4≤ y<3/4 u^'(x)/u^'(y) =2/3=u_2^'(x)/u_2^'(y), 2/4 ≤ x <3/4 ≤ y. Let x≤ y <2/4. If u_1^'(x)/ u_1^'(y)≠ u_2^'(x)/ u_2^'(y), then by the mediant inequality (see Lemma <ref>), we have λ_1 u_1^'(x) + λ_2u_2^'(x)/λ_1 u_1^'(y) + λ_2 u_2^'(y)>min{ u_1^'(x)/ u_1^'(y), u_2^'(x)/ u_2^'(y)}≥1/4, and otherwise, if u_1^'(x)/ u_1^'(y) = u_2^'(x)/ u_2^'(y), then the mediant inequality yields that u_1^'(x)/ u_1^'(y) = u_2^'(x)/ u_2^'(y) = λ_1 u_1^'(x) + λ_2u_2^'(x)/λ_1 u_1^'(y) + λ_2 u_2^'(y) =1/4 which is a contradiction with the fact that u_2 ∈𝒰_2/3. Later on in Section <ref>, Theorem <ref>, we provide an elegant device that allows one to check whether a given function u∈𝒰^mf_. (Would the continuity assumption on help?) It is natural that one think of that the continuity assumption on the function would eliminate the pathological members of 𝒰^cv_ such as u given in (<ref>) that results to negative expected utilities to achieve the set equality 𝒰^cv_ = 𝒰^mf_. However, the short answer is no. In fact, the properties of only matters on the negative intervals i.e., {x:G(x)≥ F(x) }. Hence, as long as the is kept same on [1/4,2/4] and [3/4,1] replacing with any arbitrary function such as the following, would not be enough to remove (<ref>) from the class 𝒰^cv_. Thus, one can conclude that the space 𝒰^cv_ even cannot serve as a utility class for the multi-fractional (1+)-SD under the extra continuity assumptions on . 
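For completeness, the arithmetic of this counterexample can be confirmed numerically. The Python sketch below reproduces both halves of the argument for the F, G, step function and u specified above: the multi-fractional integral condition holds at every t, while the expected-utility gap equals -c/16. The grid resolution, the tolerance and the name rho (for the non-decreasing step function) are implementation choices only.

```python
import numpy as np

c = 1.0                                 # the positive constant c; its value is immaterial

def F(x):                               # distribution F of the counterexample
    return 0.0 if x < 0 else (0.4 if x < 0.5 else (0.7 if x < 1.0 else 1.0))

def G(x):                               # distribution G of the counterexample
    return 0.0 if x < 0.25 else (0.5 if x < 0.75 else 1.0)

def rho(x):                             # the non-decreasing step function of the example
    return 0.0 if x < 0 else (0.25 if x <= 0.5 else (2 / 3 if x <= 1.0 else 1.0))

def u_prime(x):                         # right derivative of u as defined above
    if x < 0.25: return c
    if x < 0.5:  return 4 * c
    if x < 0.75: return c
    return 1.5 * c

xs = np.linspace(0.0, 1.0, 200001)
diff = np.array([F(x) - G(x) for x in xs])
dx = xs[1] - xs[0]

# (1) the integral condition:  int_0^t (F-G)_- dx  <=  rho(t) * int_0^t (F-G)_+ dx
pos = np.cumsum(np.clip(diff, 0, None)) * dx
neg = np.cumsum(np.clip(-diff, 0, None)) * dx
holds = all(neg[i] <= rho(x) * pos[i] + 1e-4 for i, x in enumerate(xs))
print("integral condition holds for all t:", holds)          # True

# (2) the expected-utility gap:  E_G[u] - E_F[u] = int_0^1 (F-G)(x) u'(x) dx
gap = np.trapz(diff * np.array([u_prime(x) for x in xs]), xs)
print("E_G[u] - E_F[u] =", round(gap, 4))                    # -0.0625 = -c/16
```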
In this remark, we discuss a fruitful observation that possibly every function u ∈𝒰^cv_ can be brought as closely as possible to a function in the utility space 𝒰^mf_ in terms of the local greediness metric. More precisely, for every given ϵ>0, we construct a function w_ϵ∈𝒰^mf_ so that for the function u ∈𝒰^cv_∖𝒰^mf_ given as in (<ref>) we have d_G (u, w_ϵ) < ϵ. To this end, first we compute G_loc (u;x) from (<ref>) as the following, G_loc (u;x)= 4 0≤ x <1/4, 3/2 1/4≤ x < 2/4 , 3/2 2/4 ≤ x < 3/4, 1 3/4 ≤ x ≤ 1. As a building blocks of such w_ϵ∈𝒰^mf_ we introduce the following functions: u_1^'(x)= c 0≤ x <1/4, 4c 1/4≤ x < 2/4 , 0 2/4 ≤ x < 3/4, 0 3/4 ≤ x ≤ 1 and u_2^'(x)= c 0≤ x <1/4, 3c/2 1/4≤ x < 2/4 , c 2/4 ≤ x < 3/4, 3c/2 3/4 ≤ x ≤ 1. Clearly, u_1∈𝒰^2/4_1/4 with _G[u_1]-_F[u_1]=0 and u_2∈𝒰_2/3 with _G[u_2]-_F[u_2]=0. Let w(x)=λ_1 u_1(x)+λ_2 u_2(x) ∈𝒰^mf_, thus by the linearity of differentiation we have w^'(x)=λ_1c+λ_2c 0≤ x <1/4, λ_14c +λ_23c/2 1/4≤ x < 2/4 , c 2/4 ≤ x < 3/4, 3c/2 3/4 ≤ x ≤ 1 and similarly by the linearity of expectation, we have _G[w]-_F[w]=_G[λ_1 u_1+λ_2 u_2]-_F[λ_1 u_1+λ_2 u_2] = 0 ∀λ_1, λ_2 ≥ 0 as expected. To see the differences between w and u, we examine the following values of G_loc (w;x) : (λ_2 ≠ 0) G_loc (w;x)=λ_1 4c + λ_2 3c/2/λ_1 c + λ_2 c 0≤ x <1/4, 3/2 1/4≤ x < 2/4 , 3/2 2/4 ≤ x < 3/4, 1 3/4 ≤ x ≤ 1. The computation above can be easily seen from below. Recalling the values of w^'(x) from (<ref>) we have G_loc (w;x)=max[1, λ_1 4c + λ_2 3c/2/λ_1 c + λ_2 c, c/λ_1 c + λ_2 c, 3c/2/λ_1 c + λ_2 c]=λ_1 4c + λ_2 3c/2/λ_1 c + λ_2 c 0≤ x <1/4, max[3/2, c/λ_1 4c + λ_2 3c/2,3c/2/λ_1 4c + λ_2 3c/2]=3/2 1/4≤ x < 2/4. Now, note that, by simple computation (or by the mediant inequality (<ref>)), for λ_1,λ_2>0 we have G_loc (w;x)=λ_1 4 + λ_2 3/2/λ_1 + λ_2 <4=G_loc (u;x), x∈ [0,1/4) and G_loc (w;x)= G_loc (u;x) for all x∈ [1/4,1]. Since, G_loc(w;x) = 4 λ_1 + 3/2 λ_2 /λ_1 + λ_2 is independent of x ∈ [0,1/4) and it converges to 4 as λ_1 → +∞ (λ_2 >0 being fixed), hence the claim follows. §.§ 𝒰^mf_ and the local greediness In this section, we provide several tools in terms of the local greediness G_loc(u;·) to verify a given function u ∈𝒰^cv_ belongs to the multi-fractional utility space 𝒰^mf_. The main inspiration source is the example given in Section <ref>. Let :→ [0,1] be an arbitrary non-decreasing function. Let u=∑_k=1^nλ_k u_k ∈𝒰^mf_, n∈, λ_k ≥ 0 and u_k ∈𝒰^Union_, k=1,...,n. Assume that there exists x_0 ∈ such that (x_0) ≠ 0 and moreover G_loc(u;x_0) = 1/(x_0). Then, there exists an index 1 ≤ j ≤ n so that G_loc(u;x_0) = G_loc(u_j;x_0) =max_1 ≤ k≤ n G_loc(u_k;x_0) =1/(x_0). Recall that by Proposition <ref> and Corollary <ref> we have G_loc(u;x_0)≤max_1 ≤ k ≤ n G_loc(u_k;x_0) and G_loc(u_k;x_0) ≤1/(x_0) for each k=1,...,n. Now, relation (<ref>) yields that 1/(x_0) = G_loc(u;x_0) ≤max_1 ≤ k ≤ n G_loc(u_k,x_0) ≤1/(x_0). Hence, the result follows. Let :→ [0,1] be an arbitrary non-decreasing function. Assume that u ∈𝒰^mf_ and there exists x_0 ∈ such that (x_0) ≠ 0 and moreover G_loc(u;x_0) = 1/(x_0). Then, there exists an open interval I=I_x_0 = (x_0,x_0+δ) for some δ >0 such that is a constant function on I. If (x_0)=1, then it is obvious that is a constant function over the interval (x_0,∞). Hence, hereafter, we assume that (x_0)<1. Next, let u takes the form u=∑_k=1^nλ_k u_k ∈𝒰^mf_, n∈, λ_k ≥ 0 and u_k ∈𝒰^Union_, k=1,...,n. Now, by Lemma <ref>, there exists an index 1 ≤ j ≤ n so that 1/(x_0) = G_loc(u;x_0) = G_loc(u_j,x_0). 
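The approximation can also be seen numerically. Using a grid version of G_loc(·;x) built from the right derivatives, the sketch below evaluates the greediness distance between the counterexample utility u above and w = λ_1 u_1 + λ_2 u_2, and compares it with 2.5λ_2/(λ_1+λ_2), the value that follows from the displayed values of G_loc. The weights, the grid and the suffix-maximum evaluation of the supremum are ad hoc choices made only for this illustration.

```python
import numpy as np

lam1, lam2, c = 50.0, 1.0, 1.0           # weights of w = lam1*u1 + lam2*u2 (arbitrary choice)

def u_prime(x):                           # right derivative of the counterexample utility u
    if x < 0.25: return c
    if x < 0.5:  return 4 * c
    if x < 0.75: return c
    return 1.5 * c

def w_prime(x):                           # right derivative of w = lam1*u1 + lam2*u2
    u1 = c if x < 0.25 else (4 * c if x < 0.5 else 0.0)
    u2 = c if x < 0.25 else (1.5 * c if x < 0.5 else (c if x < 0.75 else 1.5 * c))
    return lam1 * u1 + lam2 * u2

def local_greediness(deriv, xs):
    """Grid approximation of G_loc(.;x): sup over x <= y <= z of deriv(z) / deriv(y)."""
    d = np.array([deriv(x) for x in xs])
    suffix_max = np.maximum.accumulate(d[::-1])[::-1]    # sup_{z >= y} deriv(z)
    ratio = suffix_max / d                                # sup_{z >= y} deriv(z) / deriv(y)
    return np.maximum.accumulate(ratio[::-1])[::-1]       # sup over y >= x

xs = np.linspace(0.0, 1.0, 4001)
g_u = local_greediness(u_prime, xs)
g_w = local_greediness(w_prime, xs)
d_G = np.max(np.abs(g_u - g_w))
print(d_G, 2.5 * lam2 / (lam1 + lam2))    # both ~ 0.049: d_G(u, w) = 2.5*lam2/(lam1+lam2)
```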
Recall that u_j ∈𝒰^Union_, so there exists t∈ [- ∞,∞] with u_j ∈𝒰^t_. The case t=-∞ obviously not possible. If t=∞, then clearly is a constant function over the interval (x_0,∞). Now, let t ∈. If t ≤ x_0, then we must have G_loc(u_j;x_0)=1 that contradicts our assumption. Therefore, t> x_0 must be. By definition, we have 1/(x_0) = G_loc(u_j;x_0) ≤1/(t)≤1/(x_0). Since, is non-decreasing, therefore must be a constant function over the interval (x_0,t), hence the claims follows. Let :→ [0,1] be an arbitrary non-decreasing function such that there exists an open subset U ⊆ where is strictly increasing on U. Assume that u ∈𝒰^cv_ and there exists x_0 ∈ U such that G_loc(u; x_0)=1/(x_0). Then, u ∉𝒰^mf_. Let :→ [0,1] be an arbitrary non-decreasing function. Assume that u ∈𝒰^cv_ and there exist x_0, x_1 ∈ such that 0 < (x_0) ≠(x_1)<1. Let G_loc(u;x_k) = 1/(x_k), k=0,1. Then, u ∉𝒰^mf_. We proceed by contradiction. Assume that u ∈𝒰^mf_ takes the form, u = ∑_k=1^nλ_k u_k, λ_k ≥ 0, u_k ∈𝒰_^Union , n ∈. Now, Lemma <ref> implies that there at least exist two indexes 1 ≤ j_0, j_1 ≤ n, j_0 ≠ j_1 such that G_loc(u_j_k,x_k)=1/(x_k), k=0,1. Let's assume that x_0 < x_1 (hence, by our assumption (x_0) < (x_1)). Let A_0 = { 1≤ k ≤ n : G_loc (u_k;x_0)= 1/(x_0)}. Note that j_0 ∈ A_0 and let ℰ_0 = {ε >0 : G_loc(u_j_0,x_0)=1/(x_0)> max_ k ∈ A^c_0G_loc(u_k,x_0) + ε}. Next, for ε∈ℰ_0, by definition, there exist four points with x_0 < x^ε_1<x^ε_2≤ x^ε_3<x^ε_4 such that u(x^ε_4)-u(x^ε_3)/x^ε_4-x^ε_3/u(x^ε_2)-u(x^ε_1)/x^ε_2-x^ε_1 =:u[x^ε_4,x^ε_3]/u[x^ε_2,x^ε_1]>G_loc(u,x_0)_=1/(x_0)(By assumption) - ε > max_k ∈ A^c_0G_loc(u_k,x_0)≥u_k[x^ε_4,x^ε_3]/u_k[x^ε_2,x^ε_1] for every k∈ A^c_0. This means that (with x̅^ε = (x^ε_1,x^ε_2,x^ε_3,x^ε_4)), u[x̅^ε]:= u[x^ε_4,x^ε_3]/u[x^ε_2,x^ε_1]=∑_k=1^nλ_k u_k[x^ε_4,x^ε_3]/∑_k=1^nλ_k u_k[x^ε_2,x^ε_1] > u_k[ x̅^ε ], ∀ k∈ A^c_0. On the other hand, by the mediant inequality (Lemma <ref>) we have u_k [x̅^ε] > u [x̅^ε], k ∈ A_0. This leads to the following final inequality that for every index k ∈ A_0, 1/(x_0)≥ u_k [x̅^ε] > u [x̅^ε]> G_loc(u;x_0) - ε, Let ε→ 0 to obtain that lim_ε→ 0 u_k [x̅^ε] = lim_ε→ 0 u [x̅^ε] = 1/(x_0), k ∈ A_0. Let u_A_0 = ∑_k ∈ A_0λ_k u_k, and similarly u_A^c_0 = ∑_k ∈ A^c_0λ_k u_k. First, by the mediant inequality lim_ε→ 0 u_A_0 [x̅^ε] = 1/(x_0). This, implies that, 1/(x_0) = lim_ε→ 0 u_A_0 [x̅^ε] = lim_ε→ 0 u [ x̅^ε]= lim_ε→ 0 u_A_0[x^ε_4,x^ε_3] + u_A^c_0 [x^ε_4,x^ε_3] /u_A_0[x^ε_2,x^ε_1] + u_A^c_0 [x^ε_2,x^ε_1] = lim_ε→ 0[ u_A_0[x̅^ϵ] ( u_A_0[x^ε_2,x^ε_1]/u_A_0[x^ε_2,x^ε_1] + u_A^c_0 [x^ε_2,x^ε_1] ) + u_A^c_0[x̅^ϵ] ( u_A^c_0[x^ε_2,x^ε_1]/u_A_0[x^ε_2,x^ε_1] + u_A^c_0 [x^ε_2,x^ε_1] ) ] First, note that all the four terms in the RHS above are bounded sequences, and hence after passing to a suitable subsequence we arrive into 1/(x_0) = 1/(x_0)α_1 + βα_2 where we know that 0 ≤β < 1/(x_0), α_1, α_2 ∈ [0,1] with α_1 + α_2 =1, and β : = lim_ε→ u_A^c_0 [x̅^ε]. First, we show that β≠ 0. By contradiction, assume that β =0. First, we claim that all the x^ε_4 < x_1 for sufficiently small ε. Recall that we have G_loc(u;x_0) = max_k G_loc(u_k;x_0) = G_loc(u_j_0;x_0). Assume that u_j_0∈𝒰^t_ (t). If t=∞, this immediately implies that (x_0) = that contradict (x_0) < (x_1). So t<∞. Now if t > x_1, then we would have G_loc(u_j_0;x_0) ≤1/(t) < 1/(x_1) < 1/(x_0) that's again provides us with a contradiction. Therefore, t < x_1. If this is the case that x^ε_4 < x_1, we are done otherwise since u_j_0 is a flat function after t one can let x^ε_4 =t. 
If x^ε_3 >t for infinitely many, then this implies that G_loc(u_j_0;x_0)=1 that contradicts. Therefore, we have x^ε_4 ≤ t for sufficiently small ε. Now, assumption β=0 yields that u_j_1[x^ε_4,x^ε_3] → 0. So, G_loc(u_j_1;x_1)=1 that contradicts. Now, in order to have (<ref>), we must have α_2 =0. This immediately implies that u_j_1[x^ε_2,x^ε_1] → 0. Now, since (x^ε_1,x^ε_2) is a bounded sequence, after passing to yet another suitable subsequence, we can assume that (x^ε_1,x^ε_2) → (x^0_1,x^0_2) as ε→ 0. i) If x^0_2=x^0_1=x^0, then lim_ε→ 0u_j_1[x^ε_2,x^ε_1]=(u_j_1)'_+(x^0)=0 ii) If x^0_2>x^0_1 then lim_ε→ 0u_j_1[x^ε_2,x^ε_1]= u_j_1(x^0_2)-u_j_1(x^0_1)=0 In both cases, it implies that u_j_1 is a constant function after x_1. This, in turn, implies that G_loc(u_j_1;x_1)=1 that provides us with contradiction. This concludes the proof. Let :→ [0,1] be a piecewise step function of the form given in (<ref>). Assume that u ∈𝒰^cv_ and there exist x_i ∈ (t_i-1,t_i) and x_j ∈ (t_j-1,t_j), where 1 ≤ i ≠ j ≤ n such that G_loc (u;x_k) = 1/(x_k), k=i,j. Then, u∉𝒰^mf_. §.§ Average risk aversion The 𝒟() property (<ref>) can be interpreted as a local lower bound to the average risk aversion (or upper bound to risk lovingness) behavior of a decision maker. Let consider strictly increasing twice differentiable utility functions u∈𝒰^cv_. The so-called Arrow–Pratt measure of absolute risk aversion (ARA) coefficient is defined via: r_u(x) = r(x): =-u^''(x)/u^'(x), x∈. Then, one can easily deduce a local lower bounds on the average absolute risk aversion coefficient as ∫_x^y -u^''(z)/u^'(z)dz= [-ln(u^'(y))+ln(u^'(x))]=ln(u^'(x)/u^'(y)) ≥ln((y)), ∀ x≤ y. When (y)=0, the above estimate does not provide any insight as the quantity ln ((y)) = -∞. Obviously, when u is fully concave the latter inequality always is in place (one side is negative and the other side is positive). However, in more interesting cases (namely when utility function u allows for some local non-concavity), the relation (<ref>) indicates that function controls (on average) how much function u can deviate of being locally risk-averse. Hence, for lower values of wealth y, the multi-fractional utility space 𝒰^mf_ allows the utility functions to represent more risk loving behaviour due to the non-decreasing property of . This is consistent with the Friedman-Savage <cit.> and Markowitz <cit.> model where investors are more risk averse (buying insurance) when they have more wealth than their present wealth. Looking at it from the viewpoint of decision theory, this implies that MFSD orders distributions such that not only do all risk-averse decision makers agree on the ordering, but also less risk-averse ones, who are willing to take some risks based on their wealth level. Thus, the value of (y) can be interpreted as the smallest average risk aversion degree (or maximum risk lovingness) at y, required to ensure agreement on the ordering between risk-averse and those who are risk loving to some extent. Another interesting observation is that when function is continuous and by a direct application of the mean value theorem, in addition, we can observe that: for every x and every y in a tiny neighbourhood of x that log( (x) ) /y-x⪅ r(x), x ≠ y. The latter heuristic observation interprets that function at each instance controls the non-concavity of utility function u through the ARA coefficient. Similarly, one can discuss the relative absolute risk aversion coefficient R_u(x)=-xu^''(x)/u^'(x) as well. 
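As a small numerical illustration of the bound above, consider a hypothetical smooth utility with u'(x) = 2 + sin x together with a two-step non-decreasing function (written rho in the code, with values 0.2 and 1/3), chosen so that the D()-property holds on the sampled range [-3, 3]. The sketch checks that the average ARA coefficient over [x, y] collapses to log(u'(x)/u'(y)) and never falls below log(rho(y)); all concrete choices here are assumptions made for the illustration.

```python
import numpy as np

def rho(y):
    """Hypothetical non-decreasing step function: 0.2 below 0, 1/3 from 0 onwards."""
    return 0.2 if y < 0.0 else 1.0 / 3.0

u_p  = lambda x: 2.0 + np.sin(x)        # u'(x) > 0, not monotone, so u is not concave
u_pp = lambda x: np.cos(x)              # u''(x)
ara  = lambda x: -u_pp(x) / u_p(x)      # Arrow-Pratt coefficient r(x) = -u''(x)/u'(x)

def avg_ara(x, y, n=20001):
    """Numerical value of  int_x^y r(z) dz."""
    zs = np.linspace(x, y, n)
    return np.trapz(ara(zs), zs)

rng = np.random.default_rng(0)
worst = np.inf
for _ in range(2000):
    x, y = sorted(rng.uniform(-3.0, 3.0, size=2))
    lhs = avg_ara(x, y)
    # the integral collapses to log(u'(x)/u'(y)) ...
    assert abs(lhs - np.log(u_p(x) / u_p(y))) < 1e-6
    # ... and the bound says it is at least log(rho(y))
    worst = min(worst, lhs - np.log(rho(y)))
print("min of  avg ARA - log(rho(y))  over sampled pairs:", worst)   # stays >= 0
```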
§ MFSD: FURTHER MATHEMATICAL ASPECTS In this section, we gather several basic properties of multi-fractional stochastic dominance. First, we investigate a few interesting properties of the utility class u∈𝒰^mf_. Let : ℝ→[0,1] be an arbitrary non-decreasing function. Then utility class 𝒰^mf_ enjoys the following properties: (a) any function u ∈𝒰^mf_ is increasing. In addition, it contains all linear increasing functions. (b) it is invariant under positive translations provided if u∈𝒰^mf_ and c≥ 0 then u_c∈𝒰^mf_ where u_c(x)=u( x+c). It is known that , for any γ∈ [0,1], the (1+γ) fractional stochastic dominance is invariant under translation and positive scaling operations. However, one has note that multi-fractional stochastic dominance utility function space just enjoys positive translation invariance and not being invariant under scaling operation although some utility functions in multi-fractional stochastic dominance may enjoy those invariant properties. For example, if for a given non-decreasing function one picks a utility function u ∈𝒰^+∞_ then, both u_c, u_α∈𝒰^+∞_ for every c and every α >0 where u_α (x) = u(α x). Let : ℝ→[0,1] be a non-decreasing function. Assume that X∼ F and Y∼ G are two arbitrary distributions. Then, the following statements are in order: (a) Let assume that supp(F)=[a_F, b_F] and supp(G)=[a_G,b_G] where a_F, a_G ∈ and b_F,b_G ∈∪{∞}. Then, F ≼^mf_ (1+ )-SD G implies that a_F ≤ a_G. (b) If F ≼^mf_ (1+ )-SD G, then μ_G ≥μ_F, where μ_F = ∫_ x dF(x). (c) Let _1,_2: ℝ→[0,1] be non-decreasing functions such that _1(t)≤_2(t) for all t∈. Then, relation F ≼^mf_ (1+ _1)-SD G implies that F ≼^mf_ (1+ _2)-SD G. (d) If a ≤ b then δ_a≼^mf_ (1+ )-SDδ_b. (e) For all c≥ 0, X≼^mf_ (1+ )-SD X+c. (a) Obvious. (b) If F ≼^mf_ (1+ )-SD G, then by letting as t →∞ the integral condition <ref>, we obtain that ∫_-∞^∞ (F(x)-G(x))_+dx - ∫_-∞^∞ (F(x)-G(x))_-dx ≥ 0. Since ≤ 1, the latter inequality directly implies that ∫_-∞^∞ (F(x)-G(x))_+dx - ∫_-∞^∞ (F(x)-G(x))_-dx ≥ 0. That is to say μ_G - μ_F ≥ 0. (c) Obvious. (d) If a≤ b, we have δ_a≼_FSDδ_b. Thus the claim. (e) For every non-negative constant c, random variable X+c has cumulative distribution function F_c, defined by F_c(t) := F(t-c). Since F is a non-decreasing function, it follows that F(t) ≥ F(t-c) for all t ∈ℝ, hence F ≼_FSD G and obviously F ≼^mf_ (1+ )-SD F_c. It is worth to point out that for any non-decreasing functions _1,_2: ℝ→[0,1] with _1(t)≤_2(t) for every t ∈, we have 𝒰^mf__2 ⊆𝒰^mf__1 E_mf(_2) ⊇E_mf(_1). If in addition assume that _1 and _2 are distribution functions such that _2 ≼_FSD_1, then we have F ≼_(1+_1)-SD G implies that F ≼_(1+_2)-SD G. For an arbitrary non-decreasing function , the multi-fractional (1+)-SD is closed under the following features: (a) positive location transformations: for all c≥ 0, X ≼^mf_ (1+ )-SD Y implies that X+c ≼^mf_ (1+ )-SD Y+c (b) convolutions with non-negative, independent random variables: X ≼^mf_ (1+ )-SD Y implies that X+Z ≼^mf_ (1+ )-SD Y+Z where Z is non-negative and independent of X and Y. (c) mixing: [X|Θ=θ]≼^mf_ (1+ )-SD [Y|Θ=θ], ∀θ∈supp(Θ) implies that X ≼^mf_ (1+ )-SD Y. (d) Let (_n:n≥ 1) be a sequence of non-decreasing functions so that _n → pointwise as n →∞. Then, is non-decreasing and, moreover X ≼^mf_ (1+ _n)-SD Y, for every n ∈ℕ implies that X ≼^mf_ (1+ )-SD Y. (e) Let : ℝ→[0,1] be a non-decreasing function (hence integrable). For each t, let (t):=∫_-∞^t (s) ds (note that is a non-decreasing function as well). Then, X ≼^mf_ (1+ )-SD Y implies that X ≼^mf_ (1+ )-SD Y. 
(a) This is a direct application of part (b) of Proposition <ref>. (b) If X ≼^mf_ (1+ )-SD Y, then by part (a) we have [X+Z|Z=z] ≼^mf_ (1+ )-SD [Y+Z|Z=z] holds for all z ∈supp(Z) ⊆ [0,∞). Hence, [u(X+Z)|Z=z]≤[u(Y+Z)|Z=z], ∀ u∈𝒰^mf_. Then, for any u∈𝒰^mf_ we have [u(X+Z)]=[[u(X+Z)|Z=z]]≤[[u(Y+Z)|Z=z]]=[u(Y+Z)]. (c) Similarly, for any u∈𝒰^mf_ we have 𝔼[u(X)]=𝔼[𝔼[u(X) |Θ]] ≤𝔼[𝔼[u(Y) |Θ]]=𝔼[u(Y)]. (d) This is just a direct application od the integral condition (<ref>). (e) It is classical that every non-decreasing function can be approximated piecewise with an increasing sequence of steps functions. Now, use part (d). Inspired by Theorem 4.8 in <cit.> we provide the following theorem which extends the invariance property to more general transformations. If X ≼^mf_ (1+ ×γ)-SD Y for a non-decreasing function : ℝ→[0,1] and the constant γ∈ [0,1], then we have: (a) ϕ(X) ≼^mf_ (1+ )-SDϕ(Y) for all ϕ∈𝒰^*_γ with ϕ(x)≥ x, (b) ψ(X) ≼_ (1+ γ)-SDψ(Y) for all ψ∈𝒰^mf_. (a) Let v∈𝒰^mf_ and define u(x):=v(ϕ(x)). Then, it is sufficient to show u ∈𝒰^mf_×γ for all v and for all ϕ∈𝒰^*_ with ϕ(x)≥ x. Due to the linear structure of 𝒰^mf_ without loss of generality we can assume v∈𝒰^t_(t). Then, for any x_1<x_2≤ x_3<x_4 [ We assume u(x_4)-u(x_3)/x_4-x_3>0 (=0 case is trivial).] we have ((t)×γ)u(x_4)-u(x_3)/x_4-x_3 =(t)v(ϕ(x_4))-v(ϕ(x_3))/ϕ(x_4)-ϕ(x_3)×γϕ(x_4)-ϕ(x_3)/x_4-x_3 ≤v(ϕ(x_2))-v(ϕ(x_1))/ϕ(x_2)-ϕ(x_2)ϕ(x_2)-ϕ(x_1)/x_2-x_1 =u(x_2)-u(x_1)/x_2-x_1. In addition, for every x ≥ t, we have that u(x) =v (ϕ(x)) ≥ v (x) =v(t). Hence, u∈𝒰^t_(t)×γ⊆𝒰^mf_×γ. (b) Similar to the previous case, it is simple to observe that for every ψ∈𝒰^mf_ and every v ∈𝒰^*_γ, function u = v ∘ψ∈𝒰^mf_γ×. Recall that when γ=1, the utility class 𝒰^*_γ consists of increasing concave functions. Then, in the special case of Theorem <ref> when γ=1 implies that: (i) the multi-fractional (1+)-SD is invariant under any increasing concave transformations ψ satisfying ϕ(x)≥ x, (ii) for every ψ∈𝒰^mf_, the stochastic dominance ψ (X) ≼_SSDψ(Y) takes place. § APPLICATIONS §.§ Local risk aversion curve Given two distribution functions F and G, denote by ℓ_F,ℓ_G ∈∪{-∞} the left end points of the support of F and G. We define ℛ(F, G)(t):=∫_ℓ_F^t(F(x) - G(x) )_- dx/∫_ℓ_F^t(F(x) - G(x) )_+ dx, t∈ [ℓ_F,∞] provided that ∫_ℓ_F^t (F(x) - G(x))_+ dx > 0 holds for all t. Note that in relation (<ref>) both integrals are non-negative, non-decreasing and continuous functions. If F has an infinite left support, meaning that ℓ_F=-∞ then the integral ∫_-∞^t(F(x) - G(x) )_+ dx>0 for all t∈ if and only if there exists a point x^* ∈ such that F(x)>G(x) for all x in (-∞, x^*]. This ensures that ℛ(F, G)(t) is well-defined for any given t∈. However, in general the strict inequality is not required for F ≼^mf_ (1+ )-SD G. For instance, having ℓ_F < ℓ_G ensures the well-definiteness of ℛ(F, G)(t) for every t ∈ℝ. Clearly, these conditions do not place major constraints on the provided distribution functions. This ensures that the function ℛ(F, G) is well-defined for a wide class of distribution pairs. In addition, one has to observe that ℛ(F, G) is a non-negative, continuous function and takes value zero as soon as ℛ(F, G) is well-defined. As it follows from Theorem 2.4 of <cit.>, for given distribution functions F and G, the smallest γ∈ [0,1] where F ≼_(1+γ)-SD G holds can be defined as γ_min(F,G):=sup_t∈ [ℓ_F,∞)ℛ(F, G)(t). The term γ_min(F,G) can be seen as an index of risk aversion that requires a decision maker to prefer G over F in (1+γ)-SD sense. 
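In applications, both ℛ(F,G) and γ_min(F,G) are typically evaluated on a grid. The Python sketch below does this for a hypothetical single-crossing pair — F uniform on [0,1] and G uniform on [0.4, 0.8], which cross once at 2/3 — for which γ_min(F,G) = 1/4. The grid size is arbitrary, and the running supremum computed at the end anticipates the local curve introduced next.

```python
import numpy as np

# Hypothetical single-crossing pair: F ~ Uniform[0, 1], G ~ Uniform[0.4, 0.8];
# F and G cross once at x = 2/3 and mu_G = 0.6 > mu_F = 0.5.
F = lambda x: np.clip(x, 0.0, 1.0)
G = lambda x: np.clip((x - 0.4) / 0.4, 0.0, 1.0)

xs   = np.linspace(0.0, 1.0, 100001)
diff = F(xs) - G(xs)
dx   = xs[1] - xs[0]
pos  = np.cumsum(np.clip(diff, 0, None)) * dx      # int_0^t (F - G)_+ dx
neg  = np.cumsum(np.clip(-diff, 0, None)) * dx     # int_0^t (F - G)_- dx

with np.errstate(divide="ignore", invalid="ignore"):
    R = np.where(pos > 0, neg / pos, 0.0)           # the ratio curve R(F, G)(t)

gamma_min = R.max()                                  # smallest gamma giving (1+gamma)-SD
rho_min   = np.maximum.accumulate(R)                 # running sup: the local curve
print("gamma_min ~", round(gamma_min, 3))            # 0.25 for this pair, attained at t = 1
print("local curve at t = 0.8 ~", round(rho_min[80000], 3))   # ~ 0.1
```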
Next, we give the definition of the local version of γ_min(F,G) that makes it necessary for a decision maker to prefer G to F in (1+)-SD sense. Let F and G be distribution functions which ℛ(F, G) is well-defined. We define the local risk aversion curve associated to F and G via t ↦_min(F, G)(t):= sup_s ≤ tℛ(F, G)(s), t∈ [ℓ_F,∞). Observe that ℛ(F, G) is a continuous function, and therefore, its supremum is also continuous. As a result, _min(F,G) is a non-decreasing, non-negative, and continuous function that always starts taking values from 0. Let F and G be distribution functions which ℛ(F, G) is well-defined. Let be a non-decreasing function. Then, we have the following statement; (a) F ≼_FSD G if and only if _min(F, G)≡ 0. (b) F ≼_SSD G if and only if _min(F, G)≤ 1. (c) F ≼^mf_ (1+ )-SD G if and only if (t)≥_min(F, G)(t) for all t∈. (d) _min(F, G)=1 if and only if _min(F, G)≤ 1 and either F and G have the same lower partial moments at some t^'∈ or μ_F=μ_G. Proof of parts (a)+(b)+(c) are straightforward. Hence, we only prove (d). First suppose that _min(F, G)=sup_t∈_min(F, G)(t)=1 Then, ℛ(F, G)≤ 1 and this implies that either μ_F=μ_G in which leads to lim_s→∞ℛ(F, G)(s)=1 or, if μ_G>μ_F, then there must exists a t^'∈ where F and G have identical lower partial moments, i.e., ∫_-∞^t^' (F(x)-G(x))_+dx=∫_-∞^t^' (F(x)-G(x))_-dx (which means ℛ(F, G)(t')=1). The other direction follows from the fact that _min(F, G) is non-decreasing and bounded above by 1 (by assumption). Hence, if there exists a t’∈ such that _min(F, G)(t')=1, then obviously _min(F, G)=1 , otherwise, i.e., μ_F=μ_G, and hence lim_t→∞ℛ(F, G)(t)=1, so the claim follows. As it can be verified from Proposition <ref>, if (F, G) ∈E_mf() for some suitable , then _min(F,G) ∈ [0,1]. Thus, _min(F,G) is the smallest element in {: → [0,1] : F ≼^mf_ (1+ )-SD G }. In addition, the relation _min(F,G)(t) ≤γ_min(F,G), t ∈ holds true with _min(F,G)=γ_min(F,G). Next, we provide the following examples to illustrate the multi-fractional forms of <cit.> respectively. [Single crossing] Let F and G be arbitrary distribution functions having a single crossing at point t_1∈, i.e., F(x)≥ G(x) for x≤ t_1 and F(x)< G(x) when x>t_1. Then _min(F,G)(t)= 0 t≤ t_1 1∫_t_1^t(G(x) - F(x) ) dx/∫_-∞^t_1(F(x) - G(x) )dx t>t_1. [Multiple crossings] Let F and G be two arbitrary distribution functions. Suppose there exist M ∈ℕ and t_1 ≤ t_2 ≤ ⋯≤ t_M with t_0=-∞ and t_M+1=∞ such that F(x) ≥ G(x) for t_i-1≤ x ≤ t_i if i is odd and F(x) < G(x) for t_i-1 < x < t_i if i is even where i=1, …, M+1. Denote A_i=∫_t_i-1^t_i (F(x)-G(x))_+dx B_i=∫_t_i-1^t_i (G(x)-F(x))_+dx. First, observe that ℛ(F,G) follows a pattern of non-decreasing values over the intervals (t_i-1, t_i) when i is even and becomes non-increasing when i is odd. Following this pattern, _min(F,G) exhibits non-decreasing behavior over intervals (t_i-1, t_i) for even i as well, but as a consequence of Definition <ref>, it remains constant over (t_i-1, t_i) when i is odd. Therefore, in multiple crossings scenario, _min(F,G) does not have a straightforward closed form as in Example <ref>. 
To be more precise, _min(F,G) has the following form: _min(F,G)(t)= 0 t≤ t_1 1∫_t_1^t(G(x) - F(x) ) dx/A_1 t_1<t≤ t_2 1B_2/A_1 t_2<t≤ t^*_3 1B_2+∫_t_3^t (G(x) - F(x) ) dx/A_1+A_3 t^*_3<t≤ t_4 1B_2+B_4/A_1+A_3 t_4<t≤ t^*_5 1B_2+B_4+∫_t_5^t (G(x) - F(x) ) dx/A_1+A_3+A_5 t^*_5<t≤ t_6 ⋮ provided that there exists such t_3≤ t^*_3 ≤ t_4 and t_5≤ t^*_5 ≤ t_6 where B_2+∫_t_3^t^*_3(G(x) - F(x) ) dx/A_1+A_3 =B_2/A_1 B_2+B_4+∫_t_5^t^*_5(G(x) - F(x) ) dx/A_1+A_3+A_5 =B_2+B_4/A_1+A_3. §.§ Maximal feature of local risk aversion In this section, we shortly discuss the concept of local risk aversion curve through the utility class. For given distribution functions F and G, we consider the multi-fractional utility space 𝒰^mf__min(F,G) induced by the local risk aversion curve _min (F,G) introduced in Definition <ref>. For any non-decreasing function such that (t)<_min(F,G) (t), for every t, by Proposition <ref> item (c), we can easily deduce that F ⋠^mf_ (1+ )-SD G. This implies that the class of decision makers represented by 𝒰^mf_ does not unanimously agree on the preference of G over F. However, it is important to note that this result only offers qualitative insights. It does not necessarily suggest that all decision makers within 𝒰^mf_ do not prefer G over F. In fact, the largest set of decision makers who prefer G to F is represented by the utility class 𝒰^mf__min(F,G) which satisfies the following inclusion relations: 𝒰^*_γ_min(F,G)⫋𝒰^mf__min(F,G)⫋𝒰^mf_. On the other hand, for given F and G as above, let denote Γ(F,G):={:→ [0,1]: non-decreasing and _min(F,G)(t)≤(t) for all t ∈}. Observe that the set Γ(F,G) coincides with those non-decreasing functions such that F ≼_ (1+ )-SD G. Note that, we have 𝒰^mf_⊆𝒰^mf__min(F,G), for all ∈Γ(F,G). This implies that every decision maker within 𝒰^mf_ prefers G over F if ∈Γ(F,G)[Section 4 of <cit.> contains a relevant discussion on the lower bound of a decision maker's Arrow–Pratt risk aversion coefficient.] and the utility class 𝒰^mf__min(F,G) provides us with the maximal class of decision makers who prefers G to F. Further investigations of the maximal set 𝒰^mf__min(F,G) offers valuable insights into the risk attitudes of decision makers and the characteristics of the given distributions that influence the preference. In order to illustrate the concept further and highlight the differences between the estimated risk aversion index in the fractional (1+γ)-SD and the local risk aversion curve in the multi-fractional (1+)-SD framework, we provide the following example. Consider a choice between a riskless asset δ_μ∼ F_1 (dirac measure at point μ∈) and risky asset with the mean return μ and the distribution function: F_2(x)= 0 x < μ-ε 1/2 μ-ε≤ x < μ+ε 1 μ+ε≤ x where μ>ε>0. Clearly, F_2 ≼_SSD F_1. As in the language of <cit.>, one can interpret F_2 as the mean-preserving spread of F_1. A classical result states that risk averse decision makers with concave utility functions prefer riskless assets over risky assets that offer the same mean return. This preference is specific and fully captures the risk averse behavior of decision makers. Hence, any risk averse decision maker whose utility belongs to the SSD utility class 𝒰^*_1 would prefer the sure gain from F_1 over the risky option F_2. Moreover, we have _min(F_2,F_1)(t)= 0 μ-ε≤ t ≤μ t-μ/ε μ< t <μ+ε 1 μ+ε≤ t. Note that γ_min(F_2,F_1)=1 and therefore, _min(F_2,F_1)≤γ_min(F_2,F_1)=1. 
Thus, the corresponding utility class 𝒰^*_γ_min(F_2,F_1) =𝒰^*_1 in the fractional stochastic dominance framework aligns with the previous discussion. However, the multi-fractional approach demonstrates that this behavior is not limited to risk averse decision makers. The reason is that the utility class 𝒰^mf__min(F_2,F_1) includes not only risk averse decision makers, but also those who exhibit risk loving behaviour to some extent, precisely those in 𝒰^mf__min(F_2,F_1)∖𝒰^*_γ_min(F_2,F_1). Now, let introduce another risky asset with the distribution function: F_3(x)= 0 x < μ-ν 1/2 μ-ν≤ x < μ+ν 1 μ+ν≤ x where μ>ν>ε>0. As it can be seen, F_3 is more spread out than F_2 provided that ν>ε. Therefore, it follows that F_3≼_SSDF_2 ≼_SSD F_1. Given their greater spread, F_2 and F_3 represent riskier options compared to F_1. Therefore, any risk averse decision maker would strongly prefer the riskless asset F_1 over the riskier options F_2 and F_3. Moreover, we have _min(F_3,F_1)(t)= 0 μ-ν≤ t ≤μ t-μ/ν μ< t <μ+ν 1 μ+ν≤ t and _min(F_3,F_2)(t)= 0 μ-ν≤ t ≤μ + ε t-μ-ε/ν-ε μ+ε< t <μ+ν 1 μ+ν≤ t. It follows that 1=γ_min(F_2,F_1)= γ_min(F_3,F_1) = γ_min(F_3,F_2) and moreover 1≥_min(F_2,F_1)(t)≥_min(F_3,F_1)(t)≥_min(F_3,F_2)(t), ∀ t∈. As a result we have 𝒰^*_1⫋𝒰^mf__min(F_2,F_1)⫋𝒰^mf__min(F_3,F_1)⫋𝒰^mf__min(F_3,F_2). This suggests that in fractional stochastic dominance framework, the utility classes induced by γ_min do not differentiate between the classes of decision makers based on their preferences, regardless of the specific pair distributions being considered. Instead, it yields the smallest estimated class that corresponds to SSD. In contrast, multi-fractional stochastic dominance captures the differences in risk levels among the given pairs due to generating three distinct utility classes. All those are illustrated in Figure <ref> In the following example, we illustrate yet another scenario where the local risk aversion curve in multi-fractional SD framework can be useful. We introduce a new methodology based on _min(F,G) to quantify how strong the dominance of a given distribution G is over F compared to the dominance of another distribution R over F. [Strength of dominance] Suppose that there are two different assets with distribution functions G and R. An investment consultant wishes to advise his clients on which funds to invest in, aiming to select one that outperforms the currently held asset with the distribution function F. While both assets G and R dominate the benchmark asset F in the SSD sense, they cannot be compared to one another using SSD criteria, i.e., G ⋠_SSD R and R ⋠_SSD G. Consequently, G ⋠^mf_ (1+ )-SD R and R ⋠^mf_ (1+ )-SD G for any non-decreasing function. Suppose that _min(F,G)(t) ≤_min(F,R)(t), for all t ∈ with strict inequality at some t ∈. Then, we say G dominates F more than R. We use the term "more" to indicate that (1+ _min(F,G))-SD is stronger than (1+ _min(F,R))-SD. This means that a larger group of investors prefer G over F compared to R since 𝒰^mf__min(F,R)⫋𝒰^mf__min(F,G). Therefore, even though G and R are not comparable in SSD sense, if the investment consultant selects G over F instead of R, this enables him to make a decision that accommodates to a larger class of individuals. That is, he chooses an option that is not only preferred by a subset of his clients, but also potentially by a broader range of investors. 
Hence, utilizing the local risk aversion curve in the MFSD framework can be useful in situations involving decision problems with more than one alternative asset and a benchmark. §.§ Location-scale families This section is motivated by <cit.>. Let distribution functions F and G be from the same location-scale family, namely that F (x) = H (x-μ_F/σ_F), and G(x)= H ( x-μ_G/σ_G) where H is a distribution function with mean μ_H=0 and standard deviation σ_H=1. It is classical that F ≼_FSD G if and only if μ_F < μ_G and σ_G = σ_F and moreover, F ≼_SSD G, if and only if μ_F ≤μ_G and σ_G ≤σ_F with at least one strict inequality. It is important to note that when σ_F = σ_G, the distributions never intercept. Furthermore, if μ_F ≤μ_G and σ_F > σ_G, the distribution functions F and G intercept at most once, occurring at a point denoted as t_1. This intersection point can be determined using the formula: t_1 = Δμ/Δσσ_F + μ_F, where Δμ = μ_G - μ_F, and Δσ = σ_F - σ_G. Note that when μ_F = μ_G, the single crossing point t_1 coincides with μ_F. Let, μ_F ≤μ_G and σ_G <σ_F. As it can be easily verified, we have t-μ_G/σ_G≥t-μ_F/σ_F for all t≥ t_1. Then, the area between graphs of F and G after the crossing point, t_1 to a given t∈ is calculated as ∫_t_1^t( G(x) -F(x) )dx = ∫_t_1^t( 1 -F(x) )dx - ∫_t_1^t( 1 -G(x) )dx = ∫_t_1^t 1-H (x-μ_F/σ_F)dx-∫_t_1^t 1-H (x-μ_G/σ_G)dx = σ_F ∫_Δμ/Δσ^t-μ_F/σ_F (1-H (z))dz-σ_G∫_Δμ/Δσ^t-μ_G/σ_G (1-H (z))dz = Δσ∫_Δμ/Δσ^t-μ_F/σ_F( 1 - H (z) )dz - σ_G ∫_t-μ_F/σ_F^t-μ_G/σ_G( 1 - H (z) )dz and similarly the area between F and G before the crossing point t_1 is calculated as ∫_-∞^t_1( F(x) -G(x) )dx= Δσ∫_-∞^Δμ/Δσ (H(z))dz. Hence, if μ_F ≤μ_G and σ_G <σ_F, then by Example <ref> we have _min(F,G)(t)= 0 t≤ t_1 Δσ∫_Δμ/Δσ^t-μ_F/σ_F( 1 - H (z) )dz - σ_G ∫_t-μ_F/σ_F^t-μ_G/σ_G( 1 - H (z) )dz/Δσ∫_-∞^Δμ/Δσ (H(z))dz t>t_1. Let distribution functions F and G belong to the same location-scale family. Then, the following statements hold: (a) _min(F, G) ≡ 0 if and only if μ_G> μ_F and σ_G = σ_F. (b) _min(F, G)≤ 1 if and only if μ_G≥μ_F and σ_G ≤σ_F with at least one strict inequality. (c) _min(F, G)=1 if and only if μ_G=μ_F and σ_G < σ_F. The proof is direct application of Proposition <ref>. Let F_1,…,F_n be sequence of continuous distribution functions belonging to same location scale family with μ_i=μ for every 1≤ i≤ n. Then, 1 ≥_min(F_2, F_1)(t)≥_min(F_3, F_2)(t)≥…≥_min(F_n, F_n-1)(t), ∀ t ∈ if and only if 0<σ_1<σ_2<…< σ_n. First, we show sufficiency. Assume that (<ref>) holds. Then, we have 1≥_min(F_i, F_i-1) for all 2≤ i ≤ n. Since μ_i = μ_i-1, it follows from Proposition <ref>, item (b), that σ_i-1 < σ_i for every 2 ≤ i ≤ n. For the other direction assume that σ_i-1 < σ_i holds for every 2 ≤ i ≤ n. This, in turn, implies that for every 2≤ i ≤ n, F_i-1 intersects with F_i exactly once at μ since Δμ_i=μ_i-1-μ_i=0 (see Equation (<ref>)). Note that if t≤μ, then _min(F_i, F_i-1)=0 for all 2≤ i ≤ n. Hence, the inequality (<ref>) is satisfied as an equality, given that all terms are zero. Next, fix t≥μ and define A(σ):= σ∫_Δμ/Δσ=0^t-μ/σ (1-H (z))dz. Then, as it follows from (<ref>) the inequality (<ref>) reduces to for all 2 ≤ i ≤ n: _min(F_i, F_i-1)= 1/∫_-∞^0 H(z) dz×A(σ_i)-A(σ_i-1)/σ_i-σ_i-1≥1/∫_-∞^0 H(z) dz×A(σ_i+1)-A(σ_i)/σ_i+1-σ_i=_min(F_i+1, F_i). Hence, we are left to show that function σ↦ A(σ) is concave. To do this, it is enough to show that it has a monotone decreasing first derivative. First, observe that A(σ) is a differentiable function. 
Note that we have d/d σ A(σ)= ∫_0^t-μ/σ (1-H (z))dz- (t-μ)(1-H(t-μ/σ))/σ. Now, let 0<x<y. We want to show that ∫_0^t-μ/y (1-H (z))dz- (t-μ)(1-H(t-μ/y))/y≤∫_0^t-μ/x (1-H (z))dz- (t-μ)(1-H(t-μ/x))/x. Observe that for any t>μ we have t-μ/y< t-μ/x. Next, we subtract the the l.h.s. from the r.h.s. to obtain ∫_t-μ/y^t-μ/x (1-H (z))dz+(t-μ)/y(1-H(t-μ/y))-(t-μ)/x(1-H(t-μ/x)). Since H is a continuous function, by the Mean Value Theorem there exists c∈ (t-μ/y,t-μ/x) such that ∫_t-μ/y^t-μ/x (1-H (z))dz=(1-H (c))(t-μ/x-t-μ/y). Then, by substituting it into the expression above and rearranging the terms, we obtain the following: (H(t-μ/x)-H(c))(t-μ/x)+(H(c)-H(t-μ/y))(t-μ/y) ≥ 0 since H is a non-decreasing function, the claim follows. §.§ Sure gain and the Omega ratio Let X be an arbitrary random variable with distribution function F and c∈supp(F)^∘⊆ be an element belonging to the interior of the support supp(F). Now, consider δ_c the dirac measure at point c with distribution function G. Then, it is clear that F and G intersect at c, and thus, we have [(X-c)_-]=∫_-∞^c (F(x)-G(x))_+dx=∫_-∞^c F(x)dx [(X-c)_+]=∫_c^∞ (F(x)-G(x))_-dx=∫_c^∞ (1-F(x))dx. Consequently, using Example <ref>, this leads to _min(F,G)(t)= 0 t≤ c ∫_c^t (1-F(x))dx/∫_-∞^c F(x)dx= [(X-c)_+]-[(X-t)_+]/[(X-c)_-] t>c. From the equality [X]-c=𝔼[(X-c)_+]-𝔼[(X-c)_-], one can observe that if [X]=μ_F>c, then F ≼^mf_ (1+ )-SD G cannot be holds for any non-decreasing function . This is due to _min(F,G) (t)>1 for some value of t. On the other hand, when μ_F ≤ c, by Proposition <ref> part (b), G dominates F in SSD sense. Therefore, for any non-decreasing function such that (t)≥_min(F,G)(t) for all t ∈, we have F ≼^mf_ (1+ )-SD G that provides us with a stronger dominance rule than SSD. Moreover, as the value of c increases i.e. F(c)→ 1, the dominance of G over F gets closer to FSD. For a given sure gain (benchmark value) c the omega ratio introduced in <cit.> as Ω(c)=𝔼[(X-c)_+]/𝔼[(X-c)_-]. The Omega ratio serves as a metric of the return relative to risk. Here, the return is computed as the anticipated gain that surpasses a specified benchmark, c. Conversely, the risk corresponds to the anticipated loss under the threshold of the same benchmark, c. As it is discussed in very detail <cit.>, for a given constant γ∈ [0,1], the (1+γ) fractional stochastic dominance F ≼_(1+γ)-SD G is valid as soon as γ≥Ω(c). However, in multi-fractional stochastic dominance framework, relying on Proposition <ref> part (c), the validity of the multi-fractional (1+) stochastic dominance F ≼^mf_ (1+ )-SD G requires the following condition: (t)≥Ω_c (t) : = Ω(c)-[(X-t)_+]/[(X-c)_-], ∀ t>c. We highlight that as t approach infinity, condition (<ref>) yields ≥Ω(c) in which resembles the requirement γ≥Ω(c) in the fractional (1+γ)-SD. § APPENDIX Let f:[a,b] → be a continuous function and one-sided right differentiable on (a,b). Then, there exist two points c_1,c_2 ∈ (a,b) such that f'_+ (c_1) ≤f(b) - f(a)/b-a≤ f'_+(c_2). A similar statement also holds true for the one-sided left differentiable functions. Let q_k:=a_k/b_k where a_k are non-negative and b_k are positive real numbers for k=1,...,n. Then, for every positive real numbers w_1,...,w_n, the mediant inequality states that min_1 ≤ k ≤ n q_k ≤w_1a_1 + … + w_n a_n/w_1b_1 +… + w_n b_n≤max_1 ≤ k ≤ n q_k. alpha
http://arxiv.org/abs/2307.01606v1
20230704094436
Exponentially long transient time to synchronization of coupled chaotic circle maps in dense random networks
[ "Hans Muller Mendonca", "Ralf Tönjes", "Tiago Pereira" ]
cond-mat.dis-nn
[ "cond-mat.dis-nn", "nlin.CD" ]
APS/123-QED Instituto de Ciências Matemáticas e Computação, Universidade de São Paulo, São Carlos, São Paulo, Brazil Institute of Physics and Astronomy, Potsdam University, 14476 Potsdam-Golm, Germany Instituto de Ciências Matemáticas e Computação, Universidade de São Paulo, São Carlos, São Paulo, Brazil We study the transition to synchronization in large, dense networks of chaotic circle maps, where an exact solution of the mean-field dynamics in the infinite network and all-to-all coupling limit is known. In dense networks of finite size and link probability of smaller than one, the incoherent state is meta-stable for coupling strengths that are larger than the mean-field critical coupling. We observe chaotic transients with exponentially distributed escape times and study the scaling behavior of the mean time to synchronization. Exponentially long transient time to synchronization of coupled chaotic circle maps in dense random networks Tiago Pereira ============================================================================================================ § INTRODUCTION Complex nonlinear systems often exhibit collective synchronization phenomena which can play an important role for the overall functioning of a system <cit.>. Phase oscillator models can elucidate key aspects of the mechanism that generates the collective motion <cit.>. The Kuramoto model, for instance, is particularly useful in describing groups of weakly coupled oscillators such as Josephson junctions, and they can be analyzed in almost full detail in the thermodynamic limit of infinitely many oscillators. Indeed, Kuramoto himself initially studied the fully connected networks of coupled oscillators with frequency heterogeneity, and obtained the critical value of the coupling strength for the transition from incoherence to synchronized collective oscillations. <cit.>. While such predictions are obtained in the thermodynamic limit, they have been used as fruitful approaches to describe networks with finitely many oscillators <cit.>. However, recent work has shown that finite size fluctuations or sparse connections in the network can significantly impact on the overall dynamics. In fact, in certain models, synchronization cannot, even approximately, be predicted from the mean-field approximation in the thermodynamic limit <cit.>. That is, in these models, a transition to synchronization occurs or is inhibited because of finite size fluctuations <cit.>. The interplay between mean-field predictions and finite-size fluctuations for general models remains elusive and requires further investigation. In this work, we study chaotic phase maps in dense networks where the mean-field dynamics can be analyzed exactly in the thermodynamic limit. For small coupling, due to the chaotic phase dynamics, only incoherence is stable. For a range of coupling strengths, mean-field analysis predicts coexistence between complete chaotic synchronization and incoherence, and for strong coupling, the incoherence becomes unstable. Then, complete synchronization is the globally attracting state in our model. Our results are two-fold: (i) For coupling strengths with a stable coexistence of incoherence and synchronization, although incoherence is locally attracting, finite-size fluctuations can take the system into the basin of attraction of the absorbing state of complete synchronization. 
Starting near incoherence with uniformly distributed random oscillator phases, the distribution of transient times towards synchronization is exponential and scales as a power of the system size. (ii) Above the critical coupling strength, in dense but incomplete networks, although linear stability analysis of the mean-field equations suggests that any nonzero mean field, e.g., finite size fluctuations of the mean field, will grow exponentially fast, we observe an exponentially long chaotic transient in the incoherent state. Such a delayed transition to synchronization has so far not been described in dense networks of coupled phase oscillators or coupled chaotic maps. § MODEL OF COUPLED CHAOTIC MAPS The local phase dynamics in each node is modelled as a Bernoulli map of the circle with time steps t∈ℤ φ(t + 1) = f(φ(t)) = 2φ(t) 2π, or via the abuse of notation on the complex unit circle z = exp(iφ), we write z(t+1) = f(z(t)) = z(t)^2. This map is chaotic and structurally stable <cit.>. That is, the statistical properties of the map persist under small perturbations. Therefore, for small coupling, the maps behave as nearly independent, and no collective dynamics is possible for small coupling. In <cit.>, the global coupling of the phase dynamics is implemented as a Moebius map on the complex unit circle. The Moebius map has been shown to give exact solutions of sinusoidally forced phase dynamics <cit.>, including the Kuramoto model, Winfree-type phase equations, and via a nonlinear transformation, the dynamics of theta neurons <cit.>. It is therefore a meaningful alternative to the sine coupling in the standard circle map. Here, we use a composition of (<ref>) and a Moebius map (see Figure <ref>) z(t+1) = M(f(z(t)),Φ(t),τ(t)), where M(w,Φ,τ) = e^iΦτ+w/1+e^-iΦτ w for a coupling intensity -1<τ<1, an angle of contraction Φ∈𝕊^1, and a point w ∈𝔻={ z∈ℂ:|z|<1 } on the open complex unit disc. The family of Moebius maps is a group of biholomorphic automorphisms of 𝔻, and  via analytic continuation, these transformations map the boundary of 𝔻 bijectively onto itself. The effect of (<ref>) on the unit disc is a contraction of almost all points towards exp(iΦ) on the boundary where lim_τ→± 1M(w,Φ,τ)=±exp(iΦ) and lim_τ→ 0M(w,Φ,τ)=w. The parameter τ characterizes the strength of the contraction. For τ→ 0, the map (<ref>) approaches the uncoupled dynamics (<ref>). Moreover, the family of wrapped Cauchy distributions p(φ) = 1/2π1-R^2/|1-Re^i(φ-Θ)|^2 which includes incoherence as the uniform distribution when R→ 0 and a delta distribution at φ=Θ when R→ 1, is invariant under (<ref>) and (<ref>) <cit.>. This family of continuous phase measures, in the context of phase synchronization, is known as the Ott-Antonsen manifold, and assuming this form of phase distribution is equivalent to the so called Ott-Antonsen ansatz <cit.>. The Ott-Antonsen manifold is parameterized using the mean-field amplitude R and the mean-field angle Θ Z = Re^iΘ = ∫_0^2πe^iφ p(φ) dφ . The mean-field amplitude R is the Kuramoto order parameter <cit.>, which is zero for incoherence, i.e., a uniform phase distribution, and R=1 for complete synchronization φ_n=Θ (a.s.). Furthermore, the higher circular moments Z_q on the Ott-Antonsen manifold with q∈ℤ are integer powers of the mean field Z_q = ∫_0^2π e^iqφ p(φ) dφ= Z^q. As a consequence, phase doubling maps the circular moments as f(Z_q(t))=Z_2q(t) = Z_2^q(t) = f(Z_1(t))^q, leaving the Ott-Antonsen manifold invariant and mapping the mean-field amplitude and phase as R→ R^2 and Θ→ 2Θ. 
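The invariance of the wrapped Cauchy family under (<ref>) and (<ref>) can be probed empirically. The sketch below, with illustrative values of R, Θ, Φ and τ, samples phases from a wrapped Cauchy distribution with mean field Z = Re^{iΘ}, applies one update z ↦ M(z^2, Φ, τ), and compares the empirical mean field with M(Z^2, Φ, τ); the discrepancy should be of order 1/√N.

```python
import numpy as np
from scipy.stats import wrapcauchy

# Finite-sample check: for phases drawn from a wrapped Cauchy distribution with
# mean field Z = R e^{i Theta}, one update z -> M(z^2, Phi, tau) yields an
# empirical mean field close to M(Z^2, Phi, tau).
def moebius(w, phi, tau):
    return (np.exp(1j * phi) * tau + w) / (1.0 + np.exp(-1j * phi) * tau * w)

rng = np.random.default_rng(1)
R, Theta = 0.4, 0.7                     # example mean field on the Ott-Antonsen manifold
Phi, tau = 2 * Theta, 0.3               # example forcing parameters

phi_n = wrapcauchy.rvs(c=R, loc=Theta, size=200_000, random_state=rng)
z = np.exp(1j * phi_n)

Z_empirical = moebius(z ** 2, Phi, tau).mean()
Z_theory = moebius((R * np.exp(1j * Theta)) ** 2, Phi, tau)
print(abs(Z_empirical - Z_theory))      # should be small, O(1/sqrt(N))
```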
To couple the dynamics of the Bernoulli maps (<ref>), the parameters Φ(t) and τ(t) in (<ref>) should be defined as functions of the ensemble mean field. Following <cit.>, we define the contraction angle Φ(t) and the coupling intensity τ(t) as Z(t) = 1/N∑_n=1^N z_n(t) = R(t)e^iΘ(t) Φ(t) = 2Θ(t) τ(t) = tanh(ε/2R(t) ) , where ε is a coupling strength. For τ=1, when ε R→∞, the phases are contracted to a single point exp(2iΘ) on the unit circle. For small values of ε R, we can expand (<ref>) to the linear order and obtain the more familiar form of mean-field coupled circle maps with phase doubling φ_n(t+1) = 2φ_n(t) + ε R(t) sin(2Θ(t)-2φ_n(t)) + O(ε^2R^2(t)). The crucial observation is that on the Ott-Antonsen manifold, the mean-field Z=Rexp(iΘ) transforms exactly the same way via (<ref>),(<ref>) as each element z=exp(iφ) on the unit circle <cit.>; that is, Z(t+1) = M(Z^2(t),Φ(t),τ(t)). It is highly unusual that a closed analytic expression for the dynamics of the mean field can be derived and thus analyzed in coupled nonlinear dynamical systems. The reduction in infinitely dimensional microscopic dynamics to the low-dimensional dynamics of the mean-field <cit.> has been tremendously successful in the analysis of synchronization phenomena over the last decade, while the effects of the finite system size N remain difficult to . We note that the point measure of a finite ensemble of phases is never actually on the Ott-Antonsen manifold, but can, in some sense, be arbitrarily close to the so-called thermodynamic limit, i.e., the limit of the infinite system size N→∞. Applying the Ott-Antonsen ansatz to networks of phase oscillators is possible if the network structure allows for the partitioning of the vertices into a few classes of equivalent vertices. Assuming that all vertices of a class are subjected to the same sinusoidal forcing, the dynamics of the phases in the network can be reduced to the dynamics of coupled mean fields on the Ott-Antonsen manifold for each vertex class <cit.>. Additionally, heterogeneity in the oscillators and fluctuations in the forces can be incorporated into the mean field dynamics if they follow Cauchy distributions <cit.>. §.§ Mean-Field Analysis The mean-field dynamics (<ref>) can be written in terms of the polar representation Θ(t+1) = f(Θ(t)) R(t+1) =τ(t)+R^2(t)/1+τ(t) R^2(t). This means that the dynamics of the phase Θ decouples from the amplitude and will evolve chaotically. Using Equations (<ref>) and (<ref>), we obtain the amplitude dynamics R(t+1) = tanh(1/2ε R(t)) +R^2(t)/1+tanh(1/2ε R(t)) R^2(t) which describes the exact evolution of the order parameter R in a closed form. We can readily determine the fixed points of the mean-field amplitude R(t) and their linear stability. Both the complete synchronization R=1 and the complete desynchronization R=0 are fixed points of (<ref>), and change stability at unique critical points ε_1=ln(2)≈ 0.69 and ε_0=2, respectively, as determined by the eigenvalues of Jacobian of Equation (<ref>) at these fixed points. These critical points are connected by an unstable fixed point branch (ε(R_u),R_u), where ε(R_u) = 1/R_ulog((1+R_u)^2/1+R_u^2). This expression is derived from (<ref>) by setting R(t+1)=R(t)=R_u and resolving the equation for ε. This means that this system of all-to-all coupled, identical chaotic phase maps will always evolve to complete synchronization or complete desynchronization, with a small region ln(2)< ε<2 of bistability (Figure <ref>a). 
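A quick way to visualize the bistability window is to iterate the amplitude recursion (<ref>) directly. The sketch below uses example coupling strengths below, inside, and above the window ln(2) < ε < 2, with two initial amplitudes on either side of the unstable branch.

```python
import numpy as np

# Iterate R -> (tanh(eps R / 2) + R^2) / (1 + tanh(eps R / 2) R^2) from
# different initial conditions to illustrate the bistability window.
def next_R(R, eps):
    tau = np.tanh(0.5 * eps * R)
    return (tau + R ** 2) / (1.0 + tau * R ** 2)

for eps in [0.6, 1.2, 2.5]:             # below, inside, above the window ln(2) < eps < 2
    for R0 in [0.05, 0.6]:              # small vs. large initial order parameter
        R = R0
        for _ in range(500):
            R = next_R(R, eps)
        print(f"eps={eps}, R0={R0}: R -> {R:.3f}")
```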
§.§ Extension to Networks Next, we have studied the same phase dynamics on a random network of N maps which are coupled to exactly k different, random neighbors. Here, each phase φ_n couples to a local mean field Q_n = R_n e^iΘ_n = 1/k∑_n=1^N A_nmz_m where A_nm are the entries of the adjacency matrix, i.e., equal to one if there is a link from vertex m to vertex n, but zero otherwise, and k = ∑_m=1^N A_nm is the in-degree of node n, which, for computational simplicity, we assume to be identical for all nodes. Thus, with τ_n = tanh(ε/2R_n), the dynamics of the phases coupled through a network are z_n(t+1) = e^2iΘ_n(t)τ_n(t)+z_n^2(t)/1+e^-2iΘ_n(t)τ_n(t) z_n^2(t). A class of networks is dense if lim_N→∞⟨ k⟩/N = p > 0, where ⟨ k⟩ is the mean node degree. Therefore, p is the fraction of nodes, in relation to the system size N, that an oscillator is coupled to. Since dense networks are defined in the limit of N→∞, there is no sharp distinction between sparse and dense networks of finite size. We refer to a finite network as dense if two nodes share more than one neighbor on average, i.e., ⟨ k⟩ ^2/N=p^2N>1. In large dense networks, the local mean fields of the oscillators in the neighborhood of each node (<ref>) are equal to the global mean field, with a deviation of O(1/√(k)), where k is the size of the neighborhood, i.e., the in-degree of the node. Therefore, mean-field theory should be exact for dense networks in the thermodynamic limit where ⟨ k ⟩→∞. The network model First, we wish to compare the simulation results directly with our mean-field analysis. For large random networks with a link density p=k/N and 0<p<1, the numerical simulations are time-consuming since the N local mean fields at each node in the network need to be computed in each time step. To simplify these computations, we use a random network where each node couples to exactly k different random neighbors. This model with a unique in-degree of k for each node is slightly different from the Erdös Renyi model, with a Poissonian in-degree distribution of small relative width std(k)/⟨ k ⟩∼ 1/√(k). For large k, the results of the simulations in our random network model and other random networks with uncorrelated node degrees and a vanishing relative width of the degree distribution are expected to be identical. § RESULTS §.§ Distributions of Transient Times We perform a large number M of simulations m=1… M from independent, uniformly distributed random initial phases over a maximum of T steps and record in each simulation the first time step t_m when R≥ 0.5, i.e., the transition time from an incoherent state to complete synchronization. Finite-size scaling for such a discontinuous transition is challenging <cit.>. The exponential distribution of the times t_m, according to some characteristic transition rate, can be checked in a rank plot of time points t_m, which gives the sample complementary cumulative distribution C(t)=prob(t ≥ t_m)=rank(t_m)/M <ref>a,d). An exponential tail distribution C(t) up to observation time T indicates an exponential distribution of transient times. Since the simulation time is finite, transition times t_m≥ T are not observed, which represents a problem when we are interested in the average time to synchronization. However, assuming a discrete exponential, i.e., geometric distribution, a maximum likelihood estimation of the average transition time is possible up to values considerably exceeding the observation time T (see Appendix <ref>). 
Denoting the number of simulations that synchronize at times t_m<T as M_T, and defining the observable values l_m=min(t_m,T), the maximum likelihood estimation of the expected value T_esc=E[t_m] for the geometric distribution is T_esc = ⟨ l_m⟩ M/M_T. with the sample mean ⟨ l_m⟩. If the transition to synchronization is observed in all simulations, i.e., M_T=M, the estimator is simply the sample mean of t_m, which is an estimator of T_esc for arbitrary transient time distributions. However, when most runs do not synchronize within the finite simulation time T, the ratio M/M_T contains additional information, and the estimated mean escape time can be much larger than the observation time. §.§ During Coexistence: Escape over the Unstable Branch In <cit.>, it was reported that the transition from incoherence to collective dynamics in sparse networks of coupled logistic maps is of the mean-field type. The analysis in <cit.> predicts a shift in the critical coupling strength in random networks of Kuramoto phase oscillators of the order ⟨ k⟩^2/⟨ k^2⟩ due to degree inhomogeneity, and 1/⟨ k⟩ due to finite size fluctuations of the local mean fields. That is, in dense, homogeneous networks with ⟨ k⟩^2/⟨ k^2⟩→ 1 and ⟨ k⟩→∞, the critical coupling strength does not change. We expected to find similar behaviors for network-coupled Bernoulli maps. In complete or almost complete networks k/N=p≈ 1 for ε < 2, there is a small probability that finite size fluctuations bring the order parameter R above the unstable branch, leading to a spontaneous transition to complete synchronization, as shown in Figure <ref>a. We first observe the scaling of the transient time in fully connected networks with p=1. For values of ε<ε_0=2.0, the transition rate to synchronization scales strongly with the size N of the system . However, for values ε>ε_0, the average transition time depends very weakly on N, as the system grows exponentially fast from a state of incoherence, with R≈ 1/√(N). We estimate a finite size scaling exponent β below the transition threshold by collapsing the curves T_esc(ε,N) using the ansatz T_esc(ε,N) = T_esc((ε-ε_0)N^β). The data are consistent with an ad hoc exponent of β = 1/3 (Figure <ref>c). §.§ Above the Critical Coupling Strength: Long Chaotic Transient Above the critical coupling strength ε>ε_0=2, we expected finite size fluctuations to grow exponentially fast and independently of N, as predicted by linear stability analysis of the mean-field equations (<ref>). Instead, for small connection probabilities 0<p<1, we have observed a chaotic transient with seemingly stationary finite size fluctuations O(1/√(N)) of the mean field (Figure <ref>). In the large N limit, the distribution of the transient times depends on the link density p with increasingly long transients as p is decreased, but it is otherwise independent of N. A coupling strength for which a transition to complete synchronization could still be observed within the simulation time was considerably larger than the mean-field critical coupling ε_0=2. That is, even in dense networks and above the mean-field critical coupling, finite size fluctuations will not necessarily result in the nucleation and exponential growth of a collective mode. Such a delayed transition to synchronization <cit.> has so far not been described in systems of coupled phase oscillators <cit.> or coupled logistic maps <cit.>. In Figure <ref>f, we plot T_esc over (ε-ε_0)p to demonstrate that the average transition time is roughly scaling as 1/p. 
We do not look for higher-order corrections such as a weak dependence of ε_0 on p, although the curves do not collapse perfectly. Note that the escape time is largely independent of the network size (Figure <ref>e,f). For p=0.1, 0.05, and 0.025 we have performed simulations with N=10^4 (circles) and with N=5× 10^4 (crosses) for comparison. For p=0.01, we compare network sizes N=10^4 (circles) with very time-consuming simulations in networks with N=10^5 (crosses). §.§ Discussion of Finite Size Scaling Mean field theory assumes a phase distribution on the Ott-Antonsen manifold. The characteristic function of a wrapped Cauchy distribution is the geometric sequence Z_q=Z^q of circular moments (<ref>). However, in the incoherent state with N independent uniformly distributed phases φ_n the circular moments of an ensemble Z_q = 1/N∑_n=1^N e^iqφ_n are almost independent complex numbers with a Gaussian distribution of mean zero and a variance of 1/N by virtue of the central limit theorem. The action of the Bernoulli map on the circular moments is the shift Z_q→ Z_2q, that is, it is achieved by discarding all odd circular moments. The exponential growth of the order parameter in accordance to mean field theory is expected after the distribution comes close to the Ott-Antonsen manifold, i.e., when the first few circular moments align by chance sufficiently under the mapping (<ref>); in particular, Z_2(t)≈ Z_1^2(t). Unless the directions of Z_2 and Z_1^2 align by chance, as they would on the Ott-Antonsen manifold, the subsequent contraction of strength ε R in the direction of Z_1^2 after the phase doubling may even decrease the amplitude of the order parameter. In addition, for coupling strengths ε below the critical value, R=|Z_1| must be above the unstable branch R>R_u(ε)∼ (ε_0-ε). The rate of such a random event should depend on the ratio between R_u(ε) and the standard deviation 1/√(N) of the Gaussian distribution of the complex mean field. Based on this scaling argument, the expected time to synchronize should scale asT_esc=T_esc((ε-ε_0)√(N)) below the critical coupling. The best collapse of the estimated escape times in fully connected networks of coupled Bernoulli maps was observed by scaling the distance to ε_0 with N^1/3 (Figure <ref>c), i.e., the exponential divergence of the escape time approaches ε_0 slower than 1/√(N) in the thermodynamic limit. One possibility for this discrepancy is that the scaling argument only considers the chance of R>R_u and not the alignment process of the higher-order circular moments. Above the critical coupling strength, there is only the condition of the alignment of circular moments with the Ott-Antonsen manifold for the initiation of exponential growth. Since in the incoherent state, all circular moments are random Gaussian with identical variance, the alignment process (<ref>) is strictly independent of the system size N. Once exponential growth in the direction of the Ott-Antonsen manifold occurs, the time to synchronization is logarithmic, that is, it is weakly dependent on N. However, it appears that the alignment with the Ott-Antonsen manifold needs to be stronger for networks with link densities of p<1. For small link densities, the divergence of the escape time occurs at larger values ε>ε_0. This is reminiscent of stabilization by noise <cit.>, where a system is driven away from a low-dimensional unstable manifold of a fixed point into stronger attracting stable directions. 
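The claim that the circular moments of an incoherent finite ensemble are approximately independent complex Gaussians of variance 1/N is easy to probe numerically; the following short sketch (with example values of N and the number of trials) estimates the variance of Z_1 and the cross-correlation between Z_1 and Z_2.

```python
import numpy as np

# Circular moments Z_q of N independent uniform phases: variance ~ 1/N,
# and different moments are (nearly) uncorrelated.
rng = np.random.default_rng(3)
N, trials = 1_000, 5_000
phases = rng.uniform(0, 2 * np.pi, size=(trials, N))
Z1 = np.exp(1j * phases).mean(axis=1)
Z2 = np.exp(2j * phases).mean(axis=1)

print("E|Z_1|^2 ~", np.mean(np.abs(Z1) ** 2), " (expected ~", 1 / N, ")")
print("|E[Z_1 conj(Z_2)]| ~", abs(np.mean(Z1 * np.conj(Z2))))   # ~ 0
```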
In simulations of dense random networks of coupled Bernoulli maps, we could see the independence of the mean escape time from the network size and the scaling of the escape time with roughly ∼ 1/p (Figure <ref>f). To explain this scaling, we argue that mean field theory might be extended to dense networks, where each node couples to a finite neighborhood of pN nodes in the network, and for every two nodes, these neighborhoods overlap on a set of size p^2 N (Figure <ref>b). The local mean fields are Gaussian random forces of mean value Z, variance 1/k=1/pN, and a pairwise correlation of p, which is the relative size of the overlap. The decrease in correlation between the local mean fields in networks with link densities p<1 can be interpreted as individual, finite size noise on the maps, which couple to the global mean field, plus some uncorrelated random deviation. Therefore, the contractions of the phases do not occur in the same direction for different nodes in the network. The strength of the contraction in the direction of the mean field is effectively reduced by the factor p, i.e., τ = tanh(1/2ε R) p ≈1/2ε p R shifting the coupling strength dependence of the transition time (above ε_0) by a factor of 1/p. § CONCLUSIONS We have investigated the synchronization of coupled chaotic maps in dense random networks, utilizing mean-field equations and examining network configurations with different link probabilities. Firstly, we noticed the existence of chaotic transients to synchronization within these networks. This means that the incoherent state can persist for extended periods before transitioning into synchronization. This finding led us to study the statistics of transient times and their scaling behaviors in the process of synchronization. The transition times follow exponential distributions, indicating spontaneous transitions at a constant rate. It is noteworthy that the transition from incoherence to complete synchronization only occurs spontaneously in networks of finite size. Additionally, we have observed a remarkable dependence of the transient times to synchronization on the link probability p, represented by the ratio of the in-degree to the total number of nodes, at coupling strengths where an immediate transition to synchrony would be expected from mean field theory. Whether such a delayed transition is due to the specifics of our model or is typical for a more general class of dynamics remains an open question. This research was funded by the FAPESP CEMEAI 391, Grant No. 2013/07375-0, Serrapilheira Institute (Grant No.Serra-392 1709-16124), Newton Advanced Fellow of the Royal Society (393 NAF\R1\180236), CAPES and CNPq, Grant No 166191/2018-3. § APENDIX Here, we calculate the maximum likelihood estimation for the mean value of a geometric distribution P(t;α)=(1-α)α^t for discrete values t=0,1,… of time steps when only times t<T can be observed. The expected value for the geometric distribution is E[ t ] = (1-α)∑_t=0^∞ t α^t = α/1-α. Since the times t_m, m=1… M are only observable up to step T-1, we define l_m = min(t_m,T). The probabilities for the possible values of l_m are P(l_m=T;α) = 1-(1-α)∑_t=0^T-1α^t = α^T P(l_m=t<T;α) = (1-α)α^t. The derivative of the log-likelihood of M independent observations l_m with respect to the parameter α is ∂_α P(l_1,l_2,…,l_M;α)/P(l_1,l_2,…,l_M;α) = ∑_m=1^M ∂_α P(l_m;α)/P(l_m;α). For the probabilities (<ref>),(<ref>), the derivatives are ∂_α P(l_m=T,α)/P(l_m=T,α) = T/α ∂_α P(l_m=t<T,α)/P(l_m=t<T,α) = t/α-1/1-α. 
At a maximum of the log-likelihood for the observed values l_m, the sum in (<ref>) must vanish. Inserting the term (<ref>) M-M_T times, once for each observation with l_m=T, and the term (<ref>) M_T times, once for each observation l_m=t<T, we obtain (M-M_T)T/α + ∑_l_m<Tl_m/α - M_T1/1-α = 0. With ⟨ l_m⟩ = 1/M∑_m=1^M l_m = 1/M((M-M_T)T + ∑_l_m<Tl_m) we can divide (<ref>) by the number M of observations and rearrange the equation to obtain ⟨ l_m ⟩ M/M_T = α/1-α. The right-hand side is exactly the expected value E[ t ] of time steps for the full geometric distribution (<ref>), so the maximum likelihood estimate of the mean transition time is T_esc = ⟨ l_m⟩ M/M_T, as stated in the main text.
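As a sanity check of the censored-geometric estimator derived above, the following Monte Carlo sketch (with example values for α, M, and T, where T lies far below the true mean) recovers the mean transition time from observations truncated at T.

```python
import numpy as np

# Monte Carlo check of T_esc = <l_m> M / M_T for geometric transition times
# observed only up to the horizon T.
rng = np.random.default_rng(2)
alpha = 0.999                      # example parameter, true mean alpha/(1-alpha) = 999
M, T = 10_000, 500                 # number of runs and observation horizon (T < true mean)

t_m = rng.geometric(1 - alpha, size=M) - 1     # geometric on {0, 1, 2, ...}
l_m = np.minimum(t_m, T)
M_T = np.sum(t_m < T)

T_esc = l_m.mean() * M / M_T
print("true mean:", alpha / (1 - alpha), " estimated:", round(T_esc, 1))
```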
http://arxiv.org/abs/2307.00971v2
20230703124400
Fishing For Better Constants: The Prophet Secretary Via Poissonization
[ "Elfarouk Harb" ]
cs.DS
[ "cs.DS" ]
Given n random variables X_1, …, X_n taken from known distributions, a gambler observes their realizations in this order, and needs to select one of them, immediately after it is observed, so that its value is as high as possible. The classical prophet inequality shows a strategy that guarantees a value at least half (in expectation) of that of an omniscient prophet that picks the maximum, and this ratio is tight. Esfandiari, Hajiaghayi, Liaghat, and Monemizadeh introduced a variant of the prophet inequality, the prophet secretary problem, in <cit.>. The difference is that the realizations arrive in a random permutation order, and not an adversarial order. Esfandiari et al. gave a simple 1-1/e≈ 0.632 competitive algorithm for the problem. This was later improved in a surprising result by Azar, Chiplunkar and Kaplan <cit.> into a 1-1/e+1/400≈ 0.634 competitive algorithm. Their analysis was a non-trivial case-by-case analysis. In a subsequent result, Correa, Saona, and Ziliotto <cit.> took a systematic approach, introducing blind strategies, and gave an improved 0.669 competitive algorithm. The analysis of blind strategies is also non-trivial. Since then, there have been no improvements on the lower bounds. Meanwhile, current upper bounds show that no algorithm can achieve a competitive ratio better than 0.7235 <cit.>. In this paper, we give a 0.6724-competitive algorithm for the prophet secretary problem. The algorithm follows the blind strategies introduced by <cit.> but has a technical difference. We do this by reinterpreting the blind strategies, framing them as Poissonization strategies. We break the non-i.i.d. random variables into shards and argue about the competitive ratio of the algorithm in terms of events on shards. This gives significantly simpler and more direct proofs, in addition to a tighter analysis of the competitive ratio. The analysis might be of independent interest for similar problems such as the prophet inequality with order-selection. § INTRODUCTION AND RELATED WORK The field of optimal stopping theory concerns optimization settings where one makes decisions in a sequential manner, given imperfect information about the future, in a bid to maximize a reward or minimize a cost. The classical problem in the field is known as the prophet inequality problem <cit.>. In the problem, a gambler is presented n non-negative independent random variables X_1, …, X_n with known distributions in this order. In iteration t, a random realization x_t is drawn from X_t and presented to the gambler. The gambler can then choose to either accept x_t, ending the game, or irrevocably reject x_t and continue to iteration t+1. Note that the random variable ordering is chosen adversarially by an almighty adversary that knows the gambler's algorithm. The goal of the gambler is to maximize their expected reward, where the expectation is taken across all possible realizations of X_1, …, X_n. The gambler is compared to a prophet who is allowed to make their decision after seeing all realizations (i.e., they can always get max(x_1, …, x_n)) regardless of what realizations occur. In other words, the prophet gets a value P with expectation E[P] = E[max(X_1, …, X_n)]. An algorithm is α-competitive, for α∈ [0,1], if the gambler's expected reward is at least α· E[P] = α· E[max_i X_i], and α is called the competitive ratio.
The prophet inequality problem has a 1/2-competitive algorithm. The first algorithm to give the 1/2 analysis is due to Krengel, Sucheston and Garling <cit.>. Later, Samuel Cahn <cit.> gave a simple algorithm that sets a single threshold τ as the median of the distribution of Z=max_i X_i, and accepts the first value (if any) above τ. She showed that the algorithm is 1/2 competitive and, moreover, this is tight. Kleinberg and Weinberg <cit.> also showed that setting τ = max_i X_i/2 also gives a 1/2-competitive algorithm. Note that the above discussion assumes nothing about the distributions of X_1, …, X_n but independence. If X_1, … , X_n are [Independent and identically distributed] random variables, then Hill and Kertz <cit.> initially gave a (1 - 1/e)-competitive algorithm. This was improved by Abolhassani, Ehsani, Esfandiari, Hajiaghayi, and Kleinberg <cit.> into a ≈ 0.738 competitive algorithm, and finally improved to ≈ 0.745 in a result due to Correa, Foncea, Hoeksma, Oosterwijk, and Vredeveld <cit.>. This constant is tight due to a matching upper bound, and hence the special case is also resolved. Several variations on the prophet inequality problem have been introduced. In order of hardness, the following two variants are known: Order-Selection: The problem is the same as the prophet inequality problem, but the gambler gets to choose the order that the random variables are presented to them. Random-Order: The problem is the same as the prophet inequality problem, but the random variables realizations arrive at a random-order that is drawn uniformly from 𝕊_n. This is also known as the prophet secretary problem. In terms of the random-order problem, Esfandiari, Hajiaghayi, Liaghat, and Monemizadeh initially gave a 1-1/e≈ 0.632 competitive algorithm. This was later improved in a surprising result by Azar, Chiplunkar and Kaplan <cit.> into a 1-1/e+1/400≈ 0.63 4 competitive algorithm. While the improvement is small, the case-by-case analysis introduced was non-trivial, and shed a lot of light on the intricacies of the problem. In a subsequent elegant result, Correa, Saona, and Ziliotto <cit.> improved this to a 0.6 69 competitive algorithm by introducing the notion of blind strategies. This was a more systematic approach of bounding the competitive ratio, and did not need as much case-by-case analysis. Since then, there has been no improvements on the lower bounds. Meanwhile, current impossibility results show that no algorithm can achieve a competitive ratio better than 0.7235 <cit.>. The order-selection problem has had more progress than the random-order. Specifically, since a random-order is a valid order for the order-selection problem, then the result of Correa <cit.> of ≈ 0.669 remained the best we can do for order-selection. This was improved very recently in FOCS 2022 to a 0.7251-competitive algorithm by Peng and Tang <cit.>. This result was interesting for two reasons. First, it created a separation between the random-order and order-selection problems: Recall that no algorithm can do better than 0.7235 for random-order, and so for the first time, there was an advantage of order-selection over random-order (i.e the optimal order-selection strategy is not a random permutation). In addition, the methods developed were of interest for similar variations of the problem. 
Bubna and Chiplunkar <cit.> followed by quickly and showed that the analysis of Peng method cannot be improved, and gave an improved 0.725 8 (i.e improvement in 4th digit) competitive algorithm for the order-selection using a different approach. Note, that the separation result was also established independently by Giambartolomei, Mallmann-Trenn and Saona in <cit.> around the same time. While the order-selection problem has had more luck, the current best competitive ratio for random-order has not changed since the Correa <cit.> paper from ≈ 0.669. The upper bound has also not improved from 0.723. In this paper, we make the first improvement over the work of Correa and give a 0.6 724 competitive algorithm for the random-order model. The algorithm is almost the same as Correa 's algorithm, with a shift in prespective. We do this by reinterpretting the blind strategies as poissonization strategies. This not only gives a simpler analysis of their results, but also a tighter analysis that achieves a competitive ratio. There exists an algorithm for the prophet secretary problem that achieves a competitive ratio of at least . With many results in optimal stopping theory, while the improvements over the competitive ratio might be small (say in third or even fourth decimal), often times the ideas and analysis that drive these improvements are non-trivial. Our new analysis sheds intuition on why blind strategies work well, significantly simplifies existing proofs, and improves the lower bound for the prophet secretary model. The crux of the analysis is that we break the non-random variables into shards; these are random variables with CDF F_i^1/K where F_i is the CDF of X_i. By using a poissonization argument, we are able to get a closed form formula for the number of shards in any (simple) region. Finally, we argue about the competitive ratio of the algorithm in terms of events on the shards, instead of on the individual variables themselves. Our analysis might be of independent interest for similar problems such as the prophet inequality with order-selection. We sketch some ideas for achieving that in the conclusion. Organization recap introduces notation, the problem statement, and recaps the blind strategies introduced in the work of Correa <cit.>. poissonization introduces the idea of Poissonization via coupling, the main ingredient for our analysis. warmupiid is a warmup section that uses the ideas of Poissonization on the version of the problem to showcase its use, and finally main provides the improved analysis for the Non-case. § NOTATION, PROBLEM STATEMENT, AND RECAP OF BLIND STRATEGIES recap §.§ Notation When the dimension k is clear, we let e_i be the i-th basis vector of ^k (i.e all zeros except i coordinate 1). We use 𝕊_n to denote the permutation group over n elements. We use [n] to denote the set {1, …, n}. §.§ Formal Problem Definition and Assumptions Let X_1, ..., X_n be independent non-negative random variables. n realizations x_1, ..., x_n are drawn from X_1, ..., X_n respectively. Next, a random permutation σ∈𝕊_n is drawn uniformly at random, and the values are presented to a gambler in the order x_π(1), ..., x_π(n). At iteration t, the gambler can either accept the value x_π(t) ending the game, or they can irrevocably reject x_π(t) and continue on to round t+1. If by round n the gambler has not chosen a value, they get 0 reward. 
We note that the gambler only has access to X_1, ..., X_n beforehand, but does not know which random variable x_π(t) was drawn from or even the realization values until they are presented to them. In other words, they have no information on the random permutation chosen and the realizations before starting. Throughout the paper, we will denote Z=max(X_1, ..., X_n) as the max of the n random variables. We use as a random variable denoting the value that the algorithm gets, but also abuse notation occasionally to refer to the algorithm itself. Throughout the paper, we will assume the following assumption: We assume without loss of generality that X_1, ..., X_n are continuous. See <cit.> for justification on why this assumptions loses no generality. Intuitively, we can “approximate” the discrete CDF with a continious counterpart that behave the same almost surely in the context of this problem. There is an alternative “folklore” set up for the prophet secretary problem that is known in the community, yet the authors could not find relevant citation[If the reader is familiar with a relevant citation, the authors would appreciate learning about it.]. We will work with this view throughout the paper, so we include it here for the sake of completion. The prophet secretary problem can be thought of as each random variable X_i drawing a realization x_i, then choosing a time of arrival t_i uniformly at random from [0,1]. Then the realization arrive in the order (x_(1), t_(1)), …, (x_(n), t_(n)) where t_(1)≤…≤ t_(n) (i.e. in order of their time of arrival). Since the probability that any random permutation on the order of arrival of X_1, …, X_n happens with probability 1/n!, then this is equivalent to sampling a random permutation. One minor point however, is that the algorithm does not know the time of arrival chosen in this set up. Recall, the standard problem only provides the gambler the value of the realization, not the random permutation. However, it can be simulated by any algorithm with the following process. The algorithm generates n random time of arrivals t_1, …, t_n ∼(0,1) independently. Let t_(1)≤…≤ t_(n) be the sorted time of arrivals. The algorithm assigns the i-th realization it recieves to time of arrival t_(i). We claim this is the same as if each random variable had independently chosen a random time of arrival. folklore pairwise_indep For any variable X_i, let t_i be the time of arrival using the first process, and T_i be the time of arrival of the second process. For any x∈ [0,1], we have t_i≤ x=T_i ≤ x=x. In addition, {T_i} are independent. For x∈ [0,1], the process that independently chooses a time t_i uniformly at random from [0,1] has t_i ≤ x=x. For the second process, let σ be the random permutation drawn from 𝕊_n. For x∈ [0,1], T_i ≤ x = ∑_j=1^n t_(j)≤ x σ(i)=j Where t_(j) is the j-th order statistic of t_1, …, t_n generated by the algorithm. But then T_i ≤ x = ∑_j=1^n t_(j)≤ x σ(i)=j = ∑_j=1^n 1/n∑_β=j^n n β x^β (1-x)^n-β = 1/n∑_β=1^n n β x^β (1-x)^n-ββ = 1/n n x = x To show independence, we have for a,b ∈ [n] such that a≠ b, and x,y∈ [0,1] such that x≤ y T_a≤ x, T_b ≤ y = ∑_i=1^n ∑_j=i+1^n t_(i)≤ x, t_(j)≤ yσ(a)=i, σ(b)=j = 1/n(n-1)∑_i=1^n ∑_j=i+1^n t_(i)≤ x, t_(j)≤ y = 1/n(n-1)∑_i=1^n ∑_j=i+1^n n!/(i-1)!(j-i-1)!(n-j)!∫_0^x ∫_0^y u^i-1(v-u)^j-i-1(1-v)^n-j dv du = 1/n(n-1)∫_0^x ∫_0^y ∑_i=1^n ∑_j=i+1^n n!/(i-1)!(j-i-1)!(n-j)! u^i-1(v-u)^j-i-1(1-v)^n-j dvdu = 1/n(n-1)∫_0^x ∫_0^y n!/(n-2)! 
dvdu = xy = T_a≤ xT_b≤ y Where the interchange of summation and integral follows by Fubini's theorem. Higher order independence follows similarly as above. We assume without loss of generality that the algorithm has access to the time of arrival of a realization drawn uniformly and independently at random from the interval [0,1]. §.§ Types of thresholds Threshold-based algorithms are algorithms that set thresholds τ_1, …, τ_n (that are often decreasing) and accept realization x_i if and only if x_i ≥τ_i and x_1<τ_1, …, x_i-1 < τ_i-1 (i.e x_i is the first realization above its threshold). In the literature, there are two main types of threshold types used. The first is maximum based thresholding. Letting Z=max_i X_i, maximum based thresholding sets τ_i such that τ_i is the q_i-quantile of the distribution of Z. More formally, Z ≤τ_i = q_i for appropriately chosen q_i that are often non-increasing. The first work to pioneer this technique is the result by Samuel Cahn <cit.> for the standard prophet inequality that sets a single threshold τ=τ_1 = … = τ_n such that Z≤τ=1/2 (i.e the median of Z). Since then, several results have used variations of this idea, including the result of Correa on blind strategies <cit.>. Summation based thresholding on the other hand set a threshold τ such that we have ∑_i=1^n X_i ≥τ=s_i (i.e on expectation, there are s_i realizations that appear above τ). One paper that uses a variation of this idea is the work of <cit.>. One of the key contributions of this paper is relating these two kinds of thresholding techniques via Poissonization. In practice, these are not necessarily the only two types of threshold setting techniques that can work. For example, one can certainly set thresholds such that (say) ∑_i=1^n X_i ≥τ^2 = q_i. However, theoretical analysis of such techniques are highly non-trivial as one often needs to bound both Z≥τ and the probability that an algorithm gets a value above τ. With maximum based thresholding, often the bound on Z≥τ is trivial, because we choose τ as a quantile of the maximum, but bounding ≥τ is more cumbersome. On the other hand, summation based thresholding typically have simpler analysis for ≥τ, but bounding Z≥τ is harder and is distribution specific. §.§ Standard majorization argument Given a thresholding algorithm that uses thresholds τ_1, …, τ_n for the prophet secretary problem, how do we lower bound its competitive ratio? One standard idea is to use majorization that is discussed briefly in this subsection. Recall that = ∫_0^∞≥ x dx Z = ∫_0^∞Z≥ xdx Letting τ_0=0 and τ_n+1=∞, if we can guarantee that there exists c_i ∈ [0,1] such that ∀ν∈ [τ_i-1, τ_i], we have ≥ν≥ c_i Z≥ν, then we would get = ∑_i=1^n+1∫_τ_i-1^τ_i≥ν dν≥∑_i=1^n+1 c_i ∫_τ_i-1^τ_iZ ≥ν dν≥min(c_1, …, c_n+1)Z And hence c=min(c_1, …, c_n+1) would be a lower bound on the competitive ratio of . This argument is used in several results on prophet inequalities (including our result) and is often refered to as majorizing with Z <cit.>. It is useful because it allows one to only worry about comparing ≥ℓ vs Z≥ℓ in a bounded region, rather than handling the expectation in one go. §.§ Recap of Blind Strategies The blind strategies introduced by Correa <cit.> is a maximum based thresholding. Before starting, the algorithm defines a decreasing curve α:[0,1]→ [0,1]. Letting Zq be the threshold with Z≤Zq=q, the algorithm accepts the first realization x_i with x_i ≥Zα(i/n) (i.e if x_i is in the top α(i/n) percentile of Z). 
Letting T be a random variable for the time that a realization is selected, <cit.> get the following crucial inequality for any k∈ [n]: 1/n∑_i=1^k 1-α(i/n)≤T≤ k≤ 1-∏_i=1^kα(i/n)^1/n Their proof is non-trivial, applying ideas from Schur-convexity an infinte number of times for the upper bound, and n times for the lower bound. Later on, we give an elementary and direct proof of the above inequalities, and even tighter inequalities. Next, they use the above bounds for T≤ k to get a lower bound on ≥Zα(i/n). Combined with the trivial Z≥Zα(i/n) = 1-α(i/n), they are able to majorize blind strategies with Z to get a lower bound on the competitive ratio with respect to α. Maximizing across α curves, they get the ≈ 0.669 competitive ratio. See <cit.> for more details. § POISSONIZATION VIA COUPLING poissonization Variational Distance Consider a measurable space (Ω, ℱ) and associated probability measures P,Q. The total variational distance between P,Q is defined as d(P, Q) = 1/2|P-Q|_1 = sup_A∈ℱP(A)-Q(A). Categorical Random Variable A random variable X ∈^k is categorical and parameterized by success probabilities p∈^k if X∈{ 0, e_1, ..., e_k} with X=e_i=p_i for i=1,…, k and X= 0=1-∑_i p_i. Poisson Distribution A poisson distribution is parameterized by a rate λ, denoted (λ). A variable X∼(λ) with X∈ℕ_≥ 0 with X=k=e^-λλ^k/k! Multinomial Poisson Distribution A multinomial Poisson distribution is parameterized by k rates λ_1, … , λ_k and denoted by (λ_1, …, λ_k). Intuitively, it is a k dimensional random variable where each coordinate is an independent poisson random variable. More formally, if X∼(λ_1, …, λ_k) with X∈ℕ_≥ 0^k, then X=(n_1, …, n_k) = ∏_i=1^k e^-λ_iλ_i^n_i/n_i!. Poissonization via Coupling Coupling is a powerful proof technique in probability theory that is useful in bounding the variational distance between two (unrelated) random variables. On a high level, to bound the variational distance of variables X,Y, it is enough to find a random vector W whose marginal distributions correspond to X and Y respectively. The first result we need is a coupling result for multi-dimensional random variables. The single dimension version is known as Le Cam's theorem <cit.>, and the needed higher dimension generalizations appears in <cit.>. The proof is standard in coupling literature <cit.>. We reword it below in the form we need. <cit.> distance Let Y_1, … Y_n be n independent categorical random variables parametrized by p_1, …, p_n ∈^k. Define S_n = ∑_i=1^n Y_i with λ=∑_i p_i. Let T_n ∼(λ_1, …, λ_k). Denoting p̂_i=∑_j=1^k p_i,j, Then d(S_n, T_n) ≤ 2∑_i=1^n p̂_i^2 § WARMUP: THE CASE warmupiid In this section, we will restrict our attention to the case when X_1, …, X_n are . This will be helpful to build the intuition later on when dealing with the general case. We will also assume n→ +∞ (i.e. n is sufficiently large). This assumption will not be needed in the non-case, but will simplify the exposition in this section. §.§ Canonical boxes Since the variables are continuous, then for any q∈ [0,n], there exists a threshold τ such that ∑_i=1^n X_i ≥τ = q by the intermediate value theorem. We use q to denote such threshold throughout the paper (i.e the threshold such that on expectation, q realizations are above it). In the coming discussion, think of k→∞ and q=O(1). We fix a threshold q and break “arrival time” into a continuous space with k segments, the i-th between i-1/k and i/k. In addition, we define k+1 thresholds τ_0, τ_1, …, τ_k such that τ_i = q· i/k (with 0=+∞). 
The level k canonical-boxes of q are defined as the k^2 boxes □_i,j = {(x,y) | i-1/k≤ x ≤i/k and τ_j-1≤ y ≤τ_j }. See k7poissonization. Here, the indexing follows typical row (top to bottom) then column (left to right) indexing. Suppose the arrival times of the realizations are {t_i}. We say a realization x_i arrives in □_i,j if (t_i, x_i) ∈□_i,j. We would like a clean form for S ∈^k× k, where S_i,j is the number of realizations that arrive in □_i,j. We will do this by coupling the distribution with a multinomial Poisson distribution T∈^k× k that behaves identically to S as n,k→∞ (i.e |S-T|_1 → 0 as n,k→∞). We will refer to this approach as Poissonization. replace:poisson Fix q=O(1) and consider the level-k canonical boxes of q. Let S_n ∈^k× k count the number of realizations in the canonical boxes {□_i,j}. Let T_n ∈^k× k be a multinomial Poisson random variable with each coordinate rate being q/k^2. Then d(S_n, T_n) ≤2q^2/n In particular, as k,n→∞, then for any (simple) region ⊚⊆ [0,1]× [q, +∞], the probability we have r realizations in ⊚ is e^-⊚⊚^r/r! where ⊚=∑_i=1^n X_i arrives in ⊚ Consider the categorical random variable Y_r ∈^k× k for which canonical box (if any) realization r arrives in. Hence, it is a categorical random variable parametrized by p_i ∈^k× k. We have that p̂_i=X_i≥τ_k. But recall that ∑_i=1^n X_i ≥τ_k = q and so by symetry and continuity, we have p̂_i = q/n. Hence, by distance d(S_n, T_n) ≤∑_i=1^n 2q^2/n^2 = 2q^2/n. The final remark follows by the additivity of Poisson distributions (i.e. if X∼(λ_1), Y∼(λ_2), then X+Y∼(λ_1+λ_2)). Taking k,n →∞, then the variational distance is 0, and the number of realizations that falls into ⊚ is the sum of the realizations in the canonical boxes inside ⊚ (that are coupled with the Poisson variables). The proof of replace:poisson can be repeated for non-random variables assuming each X_i ≥τ_k is “small”. This is a standard idea in proofs of coupling results (say Le Cam's theorem). For example, the reader should verify that if X_i ≥τ_k≤ 1/K for some K→ +∞, then the variational distance is also 0. The proof follows almost verbatim as above. §.§ Plan of attack Using replace:poisson, and taking k,n→∞ then for any region ⊚ above q, the probability we get j realizations is e^-⊚⊚^j/j! where ⊚ is the area (read measure) of ⊚. This simplification allows us to express the competitive ratio of an algorithm as an integral as we will see shortly. We consider algorithms described by an increasing curve C:[0,1]→_≥ 0 with C(1)≤ q = O(1). At time t_i, we accept realization (t_i, x_i) if and only if x_i ≥C(t_i) = τ_C(t_i) (i.e. the threshold τ_C(x) at time x is such that the expected number of arrivals above it is C(x)). Now given a curve C, how do we determine the competitive ratio of an algorithm that follows τ_C? iid:monstrosity The competitive ratio c of the algorithm that follows curve C:[0,1]→_≥ 0 satisfies c ≥min1-e^-∫_0^1 C(x) dx, min_0≤ℓ' ≤ C(1)1-e^-∫_0^C^-1(ℓ') C(x) dx + ∫_C^-1(ℓ')^1 ℓ' e^- ∫_0^x C(y) dy dx/1-e^-ℓ'monstrosity Throughout the proof, see fig:poissinization-iid-1. Recall that C is an increasing curve. We abuse notation and set C^-1(ℓ')=1 for ℓ' > C(1) and C^-1(ℓ')=0 for ℓ' < C(0). Let be the value returned by the strategy following C. For ℓ∈ [0, C(1)], we will trivially upper bound Z≥ℓ≤ 1. 
Letting U={(x,y)|0≤ x ≤ 1, τ_C(x) ≤ y ≤ +∞}, then U = ∑_i=1^n X_i arrives in U = ∑_i=1^n ∫_0^1 X_i ≥τ_C(x)dx = ∫_0^1 ∑_i=1^n X_i ≥τ_C(x)dx = ∫_0^1 C(x) dx Hence, ≥ℓ = 1-U has no arrivals = 1-e^-∫_0^1 C(x)dx For ℓ∈ [C(1), +∞], letting U={(x,y)|0≤ x≤ 1, ℓ≤ y < +∞}, and ℓ' = ∑_i=1^n X_i ≥ℓ = U, we have similarly that Z ≥ℓ = 1-U has no arrivals= 1-e^-ℓ' On the other hand, we have ≥ℓ = 1-e^-∫_0^C^-1(ℓ') C(x) dx + ∫_C^-1(ℓ')^1 ℓ' e^- ∫_0^x C(y) dy dx The above equality requires unpacking, see the second row of fig:poissinization-iid-1. First, if the region A={(x,y)|0≤ x ≤τ_C^-1(ℓ), τ_C(x) ≤ y < +∞} is non-empty (i.e. contains a realization), then the algorithm returns a value at least ℓ. We have that A =∑_i=1^n X_i falls in A = ∑_i=1^n ∫_0^τ_C^-1(ℓ)X_i ≥τ_C(x)dx = ∫_0^C^-1(∑_i=1^n X_i ≥ℓ)∑_i=1^n X_i ≥τ_C(x)dx = ∫_0^C^-1(ℓ') C(x) dx Otherwise, for time x∈ [C^-1(ℓ'), 1], if the area from time 0 to time x under curve C is empty, the area from x to x+dx has a realization above ℓ, then the Algorithm returns a value above ℓ. This happens with probability ∫_C^-1(ℓ')^1 ℓ' e^- ∫_0^x C(y) dy dx. Hence, by the majorization technique discussed earlier, the competitive ratio can be lower bounded by c ≥min1-e^-∫_0^1 C(x) dx/1, min_0≤ℓ' ≤ C(1)1-e^-∫_0^C^-1(ℓ') C(x) dx + ∫_C^-1(ℓ')^1 ℓ' e^- ∫_0^x C(y) dy dx/1-e^-ℓ' Simple curves evaluate well for monstrosity. Recall, the optimal n threshold algorithm for the case attains a competitive ratio ≈ 0.745. By considering curves of the form C(x)=a_0+a_1x (i.e linear) and optimizing the expression for a_0, a_1, we are able to get α≥ 0.7 05. The ratio in iid:monstrosity can efficiently be computed for polynomials of the form C(x)=∑_i=0^d a_i x^i with a_i ≥ 0 because the integral of e^-x^i can be evaluated efficiently with the Gamma function. In particular, for a degree 11 polynomial, we can achieve a 0.721 competitive ratio. As d→∞, the algorithm competitive ratio improves. This shows that algorithms that are based on thresholds of the form ∑_i X_i ≥τ =q_i are comparable to algorithm that choose their thresholds based on the maximum distribution (i.e. quantiles of Z), at least for the case. § GENERAL CASE - REINTERPRETING BLIND STRATEGIES main We now go back to the non case. In <cit.>, Correa, Saona, and Ziliotto used Schur-convexity to study a class of algorithms known as blind algorithms. The algorithm is characterized by a decreasing threshold function α: [0, 1]→ [0,1]. Letting Zq denote the q-th quantile of the maximum distribution (i.e. Z ≤Zq=q), the algorithm accepts realization x_i if x_i ≥Zα(i/n) (i.e. if it is in the top α(i/n) quantile of Z). They characterized the competitive ratio c of an algorithm that follows threshold function α (as n→∞) as c ≥min 1-∫_0^1 α(x)dx , min_x∈ [0,1]∫_0^x 1-α(y)/1-α(x) dy + ∫_x^1 e^∫_0^y logα(w) dw dyRaimundo Looking at Raimundo, the reader might already see many parallels with monstrosity, even though one is based on quantiles of the maximum, and the other is based on summation thresholds. Correa resorted to numerically solving a stiff, nontrivial optimal integro-differential equation. They find an α function such that c ≥ 0.665 (and then resorted to other similar techniques to show the main 0.669 result). They also showed than no blind algorithm can achieve a competitve ratio above 0.675. Ideally, one would like to have algorithms that depend on summation thresholds like we did for the case. If each X_i≥τ_k is small, as is the case for the case, then we can use Poissonization. 
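Before turning to the non-iid obstruction, we note that the iid bound of the Corollary above is straightforward to evaluate numerically. The sketch below does so for linear curves C(x) = a_0 + a_1 x with a crude grid search; the grids and the resulting constants are illustrative only and are not claimed to be the optimizers reported above.

import numpy as np

def ratio_lower_bound(a0, a1, xs=np.linspace(0.0, 1.0, 2001)):
    # evaluate the right-hand side of the Corollary for the linear curve C(x) = a0 + a1*x
    intC = a0 * xs + 0.5 * a1 * xs ** 2              # integral of C from 0 to x, on a grid
    expI = np.exp(-intC)
    worst = 1.0 - np.exp(-intC[-1])                  # the first term: 1 - e^{-int_0^1 C}
    for l in np.linspace(1e-6, a0 + a1, 300):        # l plays the role of l' in [0, C(1)]
        x0 = float(np.clip((l - a0) / a1, 0.0, 1.0)) # the clipped inverse C^{-1}(l)
        mask = xs >= x0
        tail = l * np.trapz(expI[mask], xs[mask])
        head = 1.0 - np.exp(-(a0 * x0 + 0.5 * a1 * x0 ** 2))
        worst = min(worst, (head + tail) / (1.0 - np.exp(-l)))
    return worst

# crude grid search over (a0, a1); purely illustrative, no claim of optimality
best = max(((ratio_lower_bound(a0, a1), a0, a1)
            for a0 in np.linspace(0.0, 1.5, 16)
            for a1 in np.linspace(0.1, 4.0, 40)), key=lambda t: t[0])
print("bound %.4f with a0=%.3f, a1=%.3f" % best)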
Unfortunately, we can have “superstars” in the non-case with “large” X_i ≥τ_k that mess up the error term in the coupling argument: indeed, it is no longer sufficient to use a Poisson distribution to count the number of arrivals in a region because of the non-nature of the random variables. What can we do then? The main idea is to think about “breaking” each random variable X_i with CDF F_i into K shards. More formally, we consider the iid random variables Y_i,1, ..., Y_i,K with CDF F_i^1/K[The reader should verify this is indeed a valid probability CDF.]. This is an idea that was implicitly used in <cit.>. Each shard chooses a random time of arrival uniformly from 0 to 1 independently. One can easily see that the distribution of max(Y_i,1, …, Y_i,K) is the same as X_i, and so the event of sampling from X_i and choosing a random time of arrival is the same as sampling from the shards, and taking the shard with the maximum value (and its time of arrival) as the value and time of arrival for X_i. The good thing about shards is that they have a small probability of being above a threshold as K→∞ because 1-F^1/K(τ) goes to 0, and hence the coupling argument for the case also works. Indeed, the reader can repeat the argument from replace:poisson and get a similar bound on the variational distance that is 0 as K→ +∞ (without any assumptions on n). However, the relationship between summation based thresholds on the shards {Y_i,j} and maximum-based thresholds for the actual realizations {X_i} is not clear. Consider a summation based threshold on the shards that chooses threshold τ such that ∑_i=1^n ∑_j=1^K Y_i,j≥τ = q. Because Y_i,j are iid for fixed i, then we have: ∑_i=1^n KY_i,1≥τ =q However, recall that Y_i,1≥τ=1-X_i ≤τ^1/K. Hence, we are choosing a threshold such that ∑_i=1^n K(1-X_i ≤τ^1/K) =q What happens when we take K→∞? The limit of K(1-x^1/K) as K→∞ is -log x. And so we have that for K→∞: ∑_i=1^n -logX_i ≤τ = q Or simply -logZ≤τ = q. In other words, we chose a threshold such that Z≤τ = e^-q Hence, we retrieve blind strategies, but with a twist: we now have an alternative view in terms of shards. Specifically, if we choose a thresholds τ_j such that Z≤τ_j=α_j (as in the case of blind strategies), then the number of shards above τ_j follows a Poisson distribution with rate log1/α_j. This is only possible because the probability of each shard being above τ_j is small (i.e → 0 as K→∞). To signify the importance of this view, we re-prove the following result that was proven in <cit.> via a nontrivial argument that applies a Schur-convexity inequality an infinite number of times. The short proof below establishes the same result via the new shards point of view. <cit.> simplified-upper Let T∈ [n] be a random variable for the time that the algorithm following thresholds τ_1 ≥…≥τ_n selects a value (if any) with Z≤τ_j=α_j. Then for any k∈ [n] T>k≥∏_j=1^k α_j ^1/n See simplified-upper throughout this proof. Note that T>k if and only if there are no realizations (in terms of X_i) above τ_1, ..., τ_k. Consider the event ξ of there being no shards above τ_1, ..., τ_k. Then this implies that there are no realizations (in terms of X_is) above τ_1, …, τ_k and hence T>k≥ξ[If event A implies event B, then B≥A]. Now consider the area U above τ_1, …, τ_k between time 0 and k/n. 
Letting α_0=1, the measure for the region is a telescoping sum: U = ∑_i=1^k k-i+1/nlog(1/α_i) - log(1/α_i-1) = ∑_i=1^k 1/nlog(1/α_j) = 1/nlog1/∏_i=1^k α_i So ξ = e^-U = ∏_i=1^k α_i^1/n <cit.> also prove the inequality T≤ k≥1/n∑_j=1^k 1-α_j. We can also prove the same inequality via an event on the shards that implies T≤ k and whose probability is the RHS. We leave this as a fun immediate exercise.[Hint: We prove it in the next section] §.§ Analysis for the Non-Case mainanalysis With the shards machinery we have built so far, we can present the new analysis. Our algorithm will be a simple m=16 threshold algorithm following a threshold function τ:[0,1]→_≥ 0. From time i-1/m to time i/m for 1≤ i ≤ m, τ is defined to be equal to τ_i with τ_1 > … > τ_m (i.e τ is a step function). We accept the first realization (t_i, x_i) with x_i ≥τ(t_i). We would like to compare Z≥ℓ vs ≥ℓ as before. For this, we again break the analysis on where ℓ lies. §.§.§ For ℓ∈ [0, τ_m) See mainproofidea:fig. We use the trivial upper bound Z≥ℓ≤ 1. On the other hand, consider when ≥ℓ. We will define an event on the shards that implies ≥ℓ. Formally, consider the event ξ where for some 1≤ j ≤ m, the region A_j={(x,y)|0≤ x ≤ 1, τ_j-1≤ y < ∞} is empty, and the region B_j={(x,y)|0≤ x ≤ 1, τ_j≤ y ≤τ_j-1} is non-empty, and the maximum value shard in B_j arrives from t=(j-1)/m to t=1. See lower-bound. Informally, this is the event where the region from [τ_1, +∞) has a shard (i.e is non empty), and the maximum shard amongst them lies from time t=0 to t=1, or the the region [τ_1, +∞) is empty, the region [τ_2, τ_1] has shards, and the maximum shard in the region lies from time t=1/m to t=1, or the region [τ_2, +∞) is empty, the region from [τ_3, τ_2] has shards, and the maximum shard in that region arrives from time t=2/m to t=1, and so on. This event implies ≥ℓ since at least one realization would exists above τ_1, ..., τ_m. The probability of this event is ξ = ∑_i=0^m-1 e^-s_i(1-e^-r_i)m-i/m r_i = log (1/α_i+1)-log(1/α_i) s_i = ∑_j=0^i-1 r_j Here, r_i denotes the shards Poisson rate between τ_i and τ_i+1, and s_i represents the area (measure) from τ_i+1 to τ_0=+∞. Simplifying via telescoping sums, we have s_i=log1/α_i, and hence we have ξ = ∑_i=0^m-1α_i(1-α_i+1/α_i)m-i/m = ∑_i=0^m-1 (α_i-α_i+1)m-i/m = 1/m∑_i=0^m 1-α_i+1 = 1/m∑_i=1^m 1-α_i If this reminds the reader of a bound, it is precisely the same bound <cit.> proved[Hint: This is the idea for the proof of T≤ k≥1/m∑_i=1^k 1-α_i]. §.§.§ For ℓ∈ [τ_j, τ_j-1] Again, see mainproofidea:fig. Let α_t = Z≤ℓ. We know α_j≤α_t ≤α_j-1. We also know Z> ℓ=1-α_t. Now, we want to compute the probability that ≥ℓ. We again give an event ξ on the shards that implies ≥ℓ. We first express the probability, and then explain what the event is: ξ =f_j(α, α_t) =1/m∑_k=1^j-11-α_k + ∑_k=j^m ∏_ν=1^k-1α_ν^1/m w_k q_t, k w_k = ∑_ν=0^j-1 e^-s_ν (1-e^-r_ν)1/m-(k-1)+ν r_ν =m-(k-1)+ν/mlogα̂_ν/α̂_ν+1 α̂_̂ν̂ = α_ν if ν≤ j-1 and α_t if ν=j s_ν = ∑_β=0^ν-1 r_β q_t, k = ∑_β=0^+∞ e^-1/mlogα_t/α_k1/mlogα_t/α_k^β1/β!1/β+1 = 1- α_k/α_t^1/m/1/mlogα_t/α_kunstable Explaining ξ will take the entirety of this subection. We recommend looking at mainproofidea:fig throughout the explanation. Formally, ξ consists of a disjoint union of m-j+2 events. The first event χ is the event that T≤ j-1. 
The next m-j+1 events η_k, j ≤ k ≤ m is such that event η_k is that there are no shards above τ_1, ..., τ_k-1 (the pink region in mainproofidea:fig), that there are shards above ℓ (the yellow region in mainproofidea:fig), and that the maximum value shard v from the yellow shards arrives between time t=(k-1)/m to t=k/m, and that v's time of arrival is before all the shards that appear from t=(k-1)/m to t=k/m between τ_k and ℓ (the green region in mainproofidea:fig). As we saw in the last section, the probability of χ happening is at least 1/m∑_k=1^j-1 1-α_k. This would also correctly imply ≥ℓ. First, on why {η_k} events imply ≥ℓ. If there are no shards above τ_1, ..., τ_k-1 (pink region) then this implies T≥ k. If there are shards above ℓ (in the yellow region) and the maximum shard falls from t=(k-1)/m to t=k/m then there is at least one realization v (from X_is) that is between t=(k-1)/m to t=k/m. If v arrives before all shards between τ_k and ℓ between time (k-1)/m and k/m (i.e green region), then the realization corresponding to v would be chosen by the Algorithm before any potential realization corresponding to the shards between τ_k and ℓ. Hence, ≥ℓ. Breaking the RHS further, for fixed k, the first term ∏_ℓ=1^k-1α_ℓ^1/m term computes the probability that there are no shards in the first k-1 thresholds as seen in the last section. The second term w_k computes the probability that the shard with maximum value (in the yellow region) falls between t=(k-1)/m to t=k/m, and multiplies that by q_t, k which is the probability that this maximum shard appears before any shards between t=(k-1)/m and t=k/m and with value between τ_k and ℓ (the green region). One last remark is on using α̂_̂î vs α_i. There is a corner case in the summations where we should use α_t instead of α_j, and so represent this conditional usage using α̂_̂î. §.§ Combining everything Combining all of this, we can now express the competitive ratio of a blind strategy using α_1, ..., α_m using the following expression. The competitive ratio c satisfies c ≥min_1≤ j ≤ m+1min_α_j≤α_t ≤α_j-1f_j(α , α_t)/1-α_t This expression can be maximized for α satisfying α_0=1 > α_1 > … > α_m > α_m+1=0. We used Python to optimize the expression and report m=16 alpha values in the appendix with c≥. We also provide our Python code in the appendix as a tool to help the reader verify our claims. A lot of effort was spent to make sure the naming and indexing used in the paper match the code identically to help a skeptic reader verify the claim. All computations were done with doubles using a precision of 500 bits (instead of the default 64). The function 1- α_k/α_t^1/m/1/mlogα_t/α_k in unstable is numerically unstable for close values of α_k, α_t. To resolve this, we lower bound it by truncating the summation on the LHS to 30 terms (instead of ∞) and use that as a lower bound on q_t,k. This is refered to as “stable_qtk” in the code. We finally get the main result. There exists an m=16 threshold blind strategy for the prophet secretary problem that achieves a competitive ratio of at least . § CONCLUSION The main ingredient in our analysis is breaking the non-random variables into shards, and arguing about the competitive ratio of the algorithm using events on the shards, rather on the random variables directly. This is possible due to our application of Poissonization technique. This analysis gives significantly simpler proofs of known results, but also tighter competitive ratios. 
A conjecture in the field is that the optimal competitive ratio for the non-prophet inequality with order-selection is the same as the optimal prophet-inequality ratio for random variables (i.e ≈ 0.745). One possible way of achieving this is choosing a different time of arrival distribution for each random variable. This is an idea that was employed in the recent result by Peng . Together with the shards point of view, it might be possible to argue that the behavior of the shards (with different time of arrival distributions) can mimic the realizations more closely than otherwise using a uniform time of arrival, allowing the results for the case to go through. We leave this as a potential future direction. unsrt § CODE Requires libraries: * numpy (Tested with version 1.21.5) * scipy (Tested with version 1.7.3) * mpmath (Tested with version 1.2.1) import numpy as np from scipy.optimize import minimize import mpmath as mp from mpmath import mpf import scipy m = 16 #m parameter from paper mp.dps = 500 #This will force mpmath to use a precision of #500 bits/double, just as a sanity check def stable_qtk(x): #The function (1-e^(-x))/x is highly unstable for small x, so we will # Lower bound it using the summation in Equation 13 in the paper ans = 0 for beta in range(30): ans += mp.exp(-x) * x**beta / mp.factorial(beta) * 1/(beta+1) return ans #Computes f_j(alphas, alphat) in time O(m^2) def fj(j, alphas, alphat): part1 = 0 for k in range(1, j): #Goes from 1 to j-1 as in paper part1 += 1/m * (1-alphas[k]) #alphas_hat[nu]=alphas[nu] if nu<=j-1 and alphat if nu==j alphas_hat = [alphas[nu] for nu in range(j)] + [alphat] part2 = 0 for k in range(j, m+1): #Goes from j to m as in paper product = 1 for nu in range(1, k): #Goes from 1 to k-1 product *= (alphas[nu]**(1/m)) wk = 0 s_nu = 0 for nu in range(j): #from 0 to j-1 r_nu = (m-(k-1)+nu)/m * mp.log(alphas_hat[nu]/alphas_hat[nu+1]) wk += mp.exp(-s_nu)*(1-mp.exp(-r_nu)) * 1/(m-(k-1)+nu) s_nu += r_nu q_t_k = stable_qtk( 1/m * mp.log(alphat/alphas[k]) ) part2 += product * wk * q_t_k return part1 + part2 def evaluate_competitive_ratio(alphas): assert np.isclose(np.float64(alphas[0]), 1) #first should be 1 assert np.isclose(np.float64(alphas[-1]), 0) #Last should be 0 assert len(alphas)==(m+2) competitive_ratio = 1 for j in range(1, m+2): #Goes from 1 to m+1 as in paper #Avoid precision errors when alphat  1, subtract 1e-8 alphat_bounds = [(np.float64(alphas[j]),np.float64(min(alphas[j-1], 1-1e-8)))] x0 = [np.float64((alphas[j]+alphas[j-1])/2)] res = minimize(lambda alphat: fj(j, alphas, alphat[0])/(1-alphat[0]), x0=x0, bounds=alphat_bounds) """ As a sanity check, we will evaluate fj(alphas, x)/(1-x) for x in alphat_bounds and assert that res.fun (the minimum value we got) is <= fj(alphas, x)/(1-x). 
This is just a sanity check to increase the confidence that the minimizer actually got the right minimum """ trials = np.linspace(alphat_bounds[0][0], alphat_bounds[0][1], 30) #30 breaks min_in_trials = min([ fj(j, alphas, x)/(1-x) for x in trials ]) assert res.fun <= min_in_trials """ End of sanity check """ competitive_ratio = min(competitive_ratio, res.fun) return competitive_ratio alphas = [mpf('1.0'), mpf('0.66758603836404173'), mpf('0.62053145929311715'), mpf('0.57324846512425975'), mpf('0.52577742556626594'), mpf('0.47816906417879007'), mpf('0.43049233470891257'), mpf('0.38283722646593055'), mpf('0.33533950489086961'), mpf('0.28831226925828957'), mpf('0.23273108361807243'), mpf('0.19315610994691487'), mpf('0.16547915613363387'), mpf('0.13558301500280728'), mpf('0.10412501367635961'), mpf('0.071479537771643828'), mpf('0.036291830527618585'), mpf('0.0')] c = evaluate_competitive_ratio(alphas) print(c)
http://arxiv.org/abs/2307.01462v2
20230704034942
Practical Collaborative Perception: A Framework for Asynchronous and Multi-Agent 3D Object Detection
[ "Minh-Quan Dao", "Julie Stephany Berrio", "Vincent Frémont", "Mao Shan", "Elwan Héry", "Stewart Worrall" ]
cs.RO
[ "cs.RO", "cs.CV" ]
Practical Collaborative Perception: A Framework for Asynchronous and Multi-Agent 3D Object Detection Minh-Quan Dao^1, Julie Stephany Berrio^2, Vincent Frémont^1, Mao Shan^2, Elwan Héry^1, and Stewart Worrall^2 This work has been supported by the French-Australian Science and Innovation Collaboration (FASIC) and the Australian Centre for Robotics (ACFR). The authors are with Ecole centrale de Nantes^1 (France) E-Mails: {minh-quan.dao, vincent.fremont, elwan.hery}@ls2n.fr and the ACFR at the University of Sydney^2 (Australia). E-mails: ^2 {j.berrio, m.shan, s.worrall}@acfr.usyd.edu.au ==================================================================================================== Occlusion is a major challenge for LiDAR-based object detection methods as it renders regions of interest unobservable to the ego vehicle. This challenge becomes safety-critical in urban traffic, such as intersections, where the ego vehicle must have reliable object detection to avoid collision while its field of view is severely reduced due to the obstruction posed by a large number of road users it tries to avoid colliding with. Collaborative perception via Vehicle-to-Everything (V2X) communication, which leverages the diverse perspective thanks to the presence at multiple locations of connected agents to form a complete scene representation, is an appealing solution. The major challenge of V2X collaboration is the performance-bandwidth tradeoff which presents two questions (i) which information should be exchanged over the V2X network, and (ii) how the exchanged information is fused. The current state-of-the-art resolves to the mid-collaboration approach where the Bird-Eye View (BEV) images of point clouds are communicated so that the bandwidth consumption is lower than communicating point clouds as in early collaboration, and the detection performance is higher than late collaboration, which fuses agents' output, thanks to a deeper interaction among connected agents. While achieving strong performance, the real-world deployment of most mid-collaboration approaches is hindered by their overly complicated architectures, involving learnable collaboration graphs and autoencoder-based compressor/ decompressor, and unrealistic assumptions about inter-agent synchronization. In this work, we devise a simple yet effective collaboration method that achieves a better bandwidth-performance tradeoff than prior state-of-the-art methods while minimizing changes made to the single-vehicle detection models and relaxing unrealistic assumptions on inter-agent synchronization. Experiments on the V2X-Sim dataset show that our collaboration method achieves 98% of the performance of an early-collaboration method, while only consuming the equivalent bandwidth of a late-collaboration method. Code will be released at https://github.com/quan-dao/practical-collab-perception
§ INTRODUCTION The detection and localization of objects as 3D bounding boxes is a fundamental module of an autonomous driving software stack. It provides input to downstream tasks such as object tracking, motion prediction, and navigation. Due to the requirement of localizing objects accurately in 3D, state-of-the-art methods use LiDARs as a primary sensing modality. While significant advancements have been made, occlusion remains troublesome for LiDAR-based detection models, which makes regions of interest invisible to the ego vehicle. The issue of occlusion becomes particularly critical when navigating complex traffic such as intersections as autonomous vehicles try to make good detection to avoid collision using a field of view being severely reduced due to a large number of road users. The need for addressing the occlusion at intersections is highlighted in a report published by Waymo <cit.> stating that two out of eight severe accidents involving their autonomous vehicles are due to occlusion in the intersections. Vehicle-to-Everything (V2X) collaborative perception is a promising solution to the occlusion challenge. Its core idea is to form a complete scene representation using measurements collected from multiple perspectives by leveraging the communication among multiple connected agents presenting at different locations. Connected agents can be either autonomous vehicles or Intelligent Road-Side Units (IRSU) which are advanced sensing systems strategically placed at elevated locations to have maximal coverage of regions where complex traffic takes place. This enhanced perception capacity, thanks to V2X, comes with several new technical challenges; the most notorious among them being the performance-bandwidth tradeoff which presents two questions; (i) which information should be broadcast, and (ii) how the exchanged information should be fused. This tradeoff establishes a spectrum of solutions ranging from early to late collaboration. Raw measurements, which are point clouds in the context of this paper, are exchanged in the framework of early collaboration to maximally reduce the impact of occlusion, thus achieving the highest performance at the expense of spending an astronomical amount of bandwidth. On the other extreme, late collaboration exchanges high-level outputs (e.g., object detection as 3D bounding boxes) to minimize bandwidth usage while limiting the performance gain thanks to collaboration. In an attempt to balance the two mutually excluding design targets, research on V2X collaboration frameworks <cit.> are drawn toward the middle of this spectrum, thus the category's name of mid-collaboration, where intermediate representations such as bird-eye view (BEV) images of agents' surrounding environment are chosen for broadcasting. While the motivation is just, most mid-collaboration methods require making substantial changes to the architecture of single-agent perception models to accommodate the fusion module where the combination of exchanged representations takes place. More importantly, these methods make strong assumptions about data synchronization among connected agents. For example, DiscoNet <cit.> considers a perfectly synced setting where agents share the same clock, collect and process point clouds at the same rate and the transmission/ receiving of BEV images experience zero latency. V2VNet <cit.> and ViT-V2X <cit.> account for latency by postulating a global misalignment between exchanged BEV images and that of the ego vehicle. 
The cause of such misalignment is the movement of the ego vehicle between the time step it queries the V2X network and the time when the exchanged BEV images are received. This implicitly assumes that agents in the V2X network obtain point clouds synchronously. Another drawback of the current state-of-the-art of V2X collaborative perception is that only one point cloud per agent is used. Given that objects obscured in one frame may become visible in subsequent frames due to their movement or the motion of connected agents, and sparse regions might become dense as they draw nearer, harnessing sequences of point clouds is a compelling strategy to improve the performance of collaborative perception. In fact, the rich literature on multi-frame methods for single-vehicle object detection <cit.> has confirmed the effectiveness of point cloud sequences as a simple concatenation of point clouds in a common frame can boost detection accuracy by approximate 30% <cit.>. Aware of the aforementioned drawbacks of previous works on V2X collaborative perception, we seek a practical collaboration framework that emphasizes minimal: * Bandwidth consumption * Changes made to single-agent models * Inter-agent synchronization assumptions We aim our design at minimal bandwidth consumption, which can only be achieved by exchanging information about the objects detected by each agent. This design choice naturally satisfies the second design target as it dismisses the need for complex mid-representation fusion modules. We achieve the final target by only assuming that connected agents share a common time reference which is practically achievable using GPS time. The challenge posed by our relaxed inter-agent synchronization assumption is that information (i.e., detected objects) broadcast by agents in the V2X network may never have the same timestamp as the query made by the ego vehicle. In other words, the detections made by other agents that are available on the V2X network always have an older timestamp compared to the current timestamp of the ego vehicle. This timestamp mismatch results in a misalignment between exchanged detected objects and their associated ground truths (if the detections are true positives), thus risking the overall performance. Our solution to this issue lies in the information that prior works have neglected - point cloud sequences. We reason that objects detected in the past can be propagated to the present if their velocities are available. The prediction of objects' velocities pertains to motion prediction <cit.> or object tracking <cit.> which can negate our minimal architecture changes design target. Since we assume agents produce predictions at least at their rate acquiring point clouds, we only need short-term (e.g., 0.2 seconds if point clouds are collected at 10Hz) velocity prediction, which can be computed using the scene flow, rather than long-term prediction (e.g., 3 seconds <cit.>) offered by motion prediction or tracking. As a result, we use the scene flow estimation plug-in module developed by our previous work <cit.>. This choice effectively makes our V2X collaboration framework a multi-frame method, thus enabling each connected agent, as an individual, to enjoy a boost in detection accuracy as single-vehicle multi-frame methods do. 
This paper makes the following contributions: * Deriving a practical framework for V2X collaborative perception that consumes minimal bandwidth while achieving competitive performance compared to the early collaboration in the perfect synchronization setting * Demonstrate the benefit of point cloud sequences in V2X collaborative perception * Extending our previous work <cit.> to further boost single-vehicle object detection accuracy and achieves more accurate scene flow prediction * Performing extensive evaluations on NuScenes <cit.>, KITTI <cit.>, and V2X-Sim <cit.> datasets to verify our method § RELATED WORKS As described in the previous section, the main challenge of V2X cooperative perception is the performance-bandwidth tradeoff which establishes a solution spectrum ranging from early to late collaboration. In the early collaboration approach, as depicted in Fig.<ref>, agents exchange their raw measurements - point clouds. At every timestep, the ego vehicle concatenates its own point cloud with those obtained by other agents to form the input for its perception model. This combined point cloud offers a comprehensive view of the scene with minimal occlusion and sparsity thanks to the diverse perspectives of connected agents, thus being regarded as the upper bound of the performance of the V2X collaborative perception <cit.>. However, due to the significant amount of bandwidth required to transmit raw point clouds (the order of 10 MB), the early collaboration strategy is not feasible for real-world deployments. On the other extreme of the performance-bandwidth tradeoff, the late collaboration approach focuses on minimizing bandwidth usage by exchanging only the agents' output, specifically the detection results in the form of 3D bounding boxes. As illustrated in Fig.<ref>, each agent independently detects objects using its own point cloud. Subsequently, the agent merges its own predictions with those made by others to generate the final output. While this strategy is more feasible for real-world deployments thanks to its minimal bandwidth consumption, it exhibits significantly lower performance gains compared to early fusion. The limited interaction among agents in the late collaboration approach contributes to this inferior performance. In noisy environments where latency is a factor, the late collaboration strategy even underperforms single-vehicle perception <cit.>. The mid-collaboration strategy aims to find a balance between performance and bandwidth consumption by exchanging intermediate scene representations generated by the backbone of the agents' perception model. The motivation behind this approach is that the intermediate scene representation contains more contextual information compared to the final output (i.e., 3D bounding boxes), enabling deeper interaction among agents. Moreover, this representation is more compact than raw point clouds since it has been reduced in size through a series of convolution layers in the backbone and can be further compressed using an autoencoder to minimize bandwidth usage. While the idea is elegant, implementing the intermediate collaboration strategy requires a range of modules, shown in Figure <ref>, including compressor, decompressor, and representation fusion, among others, to match the performance of early collaboration. The fusion, in particular, is quite intricate as it involves learnable collaboration graphs using techniques such as Graph Neural Networks <cit.> or Transformers <cit.> to effectively fuse exchanged representations. 
In addition, dedicated modules are required to account for different practical challenges. For example, <cit.> use the Spatial Transformer <cit.> to resolve the global misalignment between the ego vehicle's representation and others' caused by the ego vehicle's motion between when it makes the query and when it receives exchanged information. In <cit.>, the Fused Axial Attention <cit.> is used to bridge the domain gap between representations made by different detection models (e.g., PointPillar <cit.> and VoxelNet<cit.>) used by different agents in the V2X network. Finally, most mid-collaboration methods make strong assumptions about inter-agent synchronization which is either (i) perfect synchronization where exchanged representations always share the same timestamp <cit.> or (ii) synchronized point cloud acquisitions, meaning all agents obtain and process point clouds at the same rate and at the same time <cit.>. Because of these complexities, the real-world deployment of intermediate fusion remains challenging. In this work, we aim to resolve the aforementioned complexities of mid-collaboration to obtain a practical framework for V2X collaborative perception. Our design is grounded in minimal bandwidth consumption as this is nonnegotiable in real-world deployment. To achieve this goal, we choose object detection, in the form of 3D bounding boxes, as the information to be exchanged, which is similar to the late collaboration strategy. We further relax the assumption on inter-agent synchronization to agents sharing a common time reference (e.g., GPS time) and acknowledge that agents produce detections at different rates. As a result, exchanged detections always have older timestamps compared to the timestamp of the query made by the ego vehicle, thus risking a spatial misalignment between exchanged detections and their associated ground truth (if detections are true positive). We resolve this misalignment by predicting objects' velocity simultaneously with their locations by pooling point-wise scene flow which can be produced by integrating our Aligner module <cit.> to any off-the-shelf BEV-based object detectors. Finally, we avoid the inferior performance of late collaboration by devising a new collaboration strategy that fuses exchanged detections with the ego vehicle's raw point cloud for subsequent processing by its detection model rather than fusing exchanged detections with detections made by the ego vehicle. An illustration of our approach is presented in Fig.<ref>. The innovation of our collaboration strategy lies in our recognition of the similarity between object detection using point cloud sequences and collaborative detection. In both cases, there is a need to fuse information obtained from diverse perspectives. Point cloud sequences involve capturing the motion of the ego vehicle, which results in varying viewpoints, while collaborative detection entails incorporating insights from other agents present in the environment. By drawing this parallel, we leverage the shared principle of fusing information from multiple perspectives to enhance the accuracy and robustness of both object detection approaches § METHODOLOGY Since the scene flow plays a prominent role in our V2X collaboration framework which is to propagate past detections to the present, this section first recalls our previous work <cit.> and presents the extension we make to improve the accuracy of its scene flow prediction. 
Next, our strategy for V2X collaborative perception that enables using asynchronous exchanged information is derived. §.§ Learning Scene Flow As mentioned in Sec.<ref>, the accuracy of single-vehicle detection can be significantly improved by using the concatenation point cloud sequences as inputs. However, such concatenation is done by the Ego Motion Compensation method which only accounts for the motion of the ego vehicle resulting in the shadow effect, as can be seen Fig.<ref>, in the concatenated point cloud. This effect, which is a misalignment in the 3D space between objects' points and their locations, leads to a misalignment in feature spaces (e.g., BEV representations), thus degrading detection accuracy for dynamic objects. In this section, we recall the approach we use in <cit.> to handle this effect and present the extensions we make to it. §.§.§ The Cause of The Shadow Effect Let 𝒫^t = {𝐩_j^t = [x, y, z, r, Δ]   |   j = 1, …, N } denote a point cloud collected by the ego vehicle at time step t. Here, each point 𝐩_j has two features namely reflectant r and time-lag Δ with respect to a predefined time step. The point cloud is expressed in the ego vehicle frame ℰ(t) measured with respect to a global frame 𝒢. To concatenate a point cloud sequence 𝒮 = {𝒫^t - K, 𝒫^t - K + 1, …, 𝒫^t} of length K + 1, each point of point cloud 𝐏^t - Δ t is transformed to the global frame using the ego vehicle pose at the same time step as in (<ref>), hence the method's name Ego Motion Compensation (EMC). ^𝒢𝐩_i^t - Δ t =  ^𝒢𝐓_ℰ(t - Δ t)· 𝐩_i^t - Δ t Here, ^d𝐓_s∈𝐒𝐄(3) represents the rigid transformation that maps points in frame s to frame d. EMC undoes the motion of the ego vehicle; however, it fails to account for objects' motion. As a result, the appearance of a dynamic object in the concatenated point cloud comprises several instances, each corresponding to the object's poses in the global frame at a particular time step, as in Fig.<ref>. Such distortion can be rectified by first transforming an object's points collected at different time steps to its body frame, and then from to the global frame at the desired time step. This two-step transformation is illustrated in (<ref>). ^𝒢𝐩̂^t - Δ t =  ^𝒢𝐓_𝒪(t)·  ^𝒪(t - Δ t)𝐓_𝒢·  ^𝒢𝐩^t - Δ t Here, 𝒪(t) represents the object frame at time t. The point's subscript is omitted for clarity. Equation (<ref>) implies that ^𝒪(t)𝐓_𝒪(t - Δ t) is the identity transformation, which is the case for rigid objects. More importantly, the computation of the rectification transformation, ^𝒢𝐓_𝒪(t)·  ^𝒪(t - Δ t)𝐓_𝒢, requires object poses which are not available at test time. Our previous work <cit.> develops a model that estimates such transformation directly from the concatenation of point cloud sequences. The following section briefly recalls the method used in <cit.> and presents our extension. §.§.§ Learning to Rectify the Shadow Effect In <cit.>, we propose a drop-in module, named Aligner, for 3D object detection models that create a Bird-Eye View (BEV) representation, 𝐁∈ℝ^C × H × W, of the scene as an intermediate output. Our module first computes the feature of 𝐟_i ∈ℝ^C of a point 𝐩_i by bilinear interpolating on 𝐁 using its projection to the BEV. Each point feature is then decoded into the point's scene flow 𝐝_i ∈ℝ^3 representing how much this point needs to offset to rectify the shadow effect. After being rectified, the new point cloud is used to scatter the set of point features {𝐟_i} to the BEV to obtain a new BEV image 𝐁'. 
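To make the description above concrete, the following rough PyTorch-style sketch mirrors the Aligner computation; the layer sizes, the BEV tensor layout, and the last-write-wins scatter are simplifying assumptions of this sketch, not the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignerSketch(nn.Module):
    # Rough sketch of the drop-in Aligner idea; layer sizes are placeholders.
    def __init__(self, bev_channels=64):
        super().__init__()
        self.flow_head = nn.Sequential(
            nn.Linear(bev_channels, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, bev, points, pc_range, voxel_size):
        # bev: (1, C, H, W) backbone BEV image B; points: (N, 3) xyz of the EMC-concatenated cloud
        # pc_range: [xmin, ymin, zmin, xmax, ymax, zmax]; voxel_size: (dx, dy) of a BEV cell
        x = 2 * (points[:, 0] - pc_range[0]) / (pc_range[3] - pc_range[0]) - 1
        y = 2 * (points[:, 1] - pc_range[1]) / (pc_range[4] - pc_range[1]) - 1
        grid = torch.stack([x, y], dim=-1).view(1, 1, -1, 2)        # normalized BEV projections
        feats = F.grid_sample(bev, grid, mode='bilinear', align_corners=False)
        feats = feats.view(bev.shape[1], -1).t()                    # (N, C) interpolated point features f_i
        flow = self.flow_head(feats)                                # (N, 3) per-point offsets d_i
        rectified = points + flow                                   # rectify the shadow effect
        # scatter the point features at the rectified locations to form the new BEV image B'
        H, W = bev.shape[-2:]
        ix = ((rectified[:, 0] - pc_range[0]) / voxel_size[0]).long().clamp(0, W - 1)
        iy = ((rectified[:, 1] - pc_range[1]) / voxel_size[1]).long().clamp(0, H - 1)
        new_bev = torch.zeros_like(bev)
        new_bev[0, :, iy, ix] = feats.t()                           # last write wins; a real model would pool
        return new_bev, flow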
While undergoing minimized impact of the shadow effect thanks to the rectification, 𝐁' is sparse as only occupied pillars have features, which has a negative effect on detection accuracy <cit.>. On the other hand, 𝐁 is semi-dense thanks to the occupancy leakage caused by the convolutions in the detection models' backbone but possesses feature misalignment due to the shadow effect. To utilize the best of the two representations, they are fused before being fed to the Region Proposal Networks (RPN) for 3D bounding boxes. The complete pipeline is summarized in Fig.<ref>. The learning target 𝐝^* for the scene flow 𝐝 of a point 𝐩^t - Δ t, which has the timestamp (t - Δ t) (i.e., collected at time step t - Δ t), is the difference between its location and its rectified location computed by (<ref>). 𝐝^* =  ^𝒢𝐩̂^t - Δ t -   ^𝒢𝐩^t - Δ t A 3D bounding box is parameterized by a seven-vector comprised of its center coordinate [c_x, c_y, c_z], its size [l, w, h], and its heading θ. The learning target is defined according to the framework which the RPN follows (e.g., anchor-based <cit.> or center-based <cit.>). §.§.§ Aligner++ To improve the accuracy of <cit.> in estimating scene flow, thus ultimately improving object detection, we introduce two extensions (i) incorporating HD Map and (ii) distilling a model trained on the concatenation of point cloud sequences by the ground truth trajectories. Previous works in integrating HD Maps into 3D object detection models <cit.> opt for a mid-fusion approach that concatenates rasterized maps with backbone-made BEV representations. However, this approach is incompatible with copy-paste data augmentation <cit.>, which randomly samples ground truth objects from a database and pastes them to each point cloud used for training, as pasted objects do not necessarily adhere to the semantics and geometry of the map (e.g., cars appear inside a building). Moreover, they only utilize the map's binary channels such as drivable areas, sidewalks, or car parks while omitting the lane direction which is potentially helpful for estimating scene flow and objects' heading direction. Aware of these two limitations, we propose to extract the map feature for each 3D point and use it to augment the point's raw features prior to further processing (e.g., building a ground truth database or computing a BEV representation for object detection). The map feature extraction process is illustrated in Fig.<ref> where points' coordinate in the map's BEV is used for nearest neighbor interpolation. This attachment of the map features to points results in an early fusion between HD Maps and LiDARs, rendering the concatenation of maps' channels to backbone-made BEV representations unnecessary. As a result, HD Maps are made compatible with copy-paste data augmentation. To improve the quality of the fused BEV representation, 𝐁^f in Fig.<ref>, in terms of minimizing the feature misalignment caused by the shadow effect and reducing the sparsity, we use the teacher-student framework shown in Fig.<ref>. In detail, we use ground truth objects' trajectories to rectify the concatenation of point cloud sequences and use the shadow-effect-free results, illustrated in Fig.<ref>, to train a teacher model. After the teacher converged, its BEV representation 𝐁^t is used to guide the student's fused BEV representation 𝐁^f by optimizing the students' weight so that the difference measured by the 𝐿_2 loss between these two representations is minimized. 
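Returning to the early fusion between HD Maps and LiDAR points described above, the per-point map-feature extraction reduces to a nearest-cell read of the rasterized map; a minimal sketch (with illustrative map channels, resolution, and array layout) is given below.

import numpy as np

def attach_map_features(points_xy, raster_map, map_origin_xy, resolution):
    # Nearest-cell lookup of rasterized HD-Map channels (e.g. drivable area, sidewalk,
    # car park, lane direction) for each point; shapes and units are assumptions of this sketch.
    # points_xy: (N, 2) in the map frame; raster_map: (C, H, W); resolution: metres per cell.
    cols = np.clip(np.round((points_xy[:, 0] - map_origin_xy[0]) / resolution).astype(int), 0, raster_map.shape[2] - 1)
    rows = np.clip(np.round((points_xy[:, 1] - map_origin_xy[1]) / resolution).astype(int), 0, raster_map.shape[1] - 1)
    return raster_map[:, rows, cols].T        # (N, C) per-point map features

# toy usage: raw point features [x, y, z, reflectance, time-lag] are augmented before any further
# processing (ground-truth database, BEV computation), which keeps copy-paste augmentation valid
points = np.random.rand(100, 5) * [20.0, 20.0, 3.0, 1.0, 0.5]
hd_map = np.random.rand(4, 200, 200)          # hypothetical 4-channel raster with 0.1 m cells
augmented = np.hstack([points, attach_map_features(points[:, :2], hd_map, np.zeros(2), 0.1)])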
§.§ V2X Collaborative Perception Framework The V2X setting that we target in this paper comprises multiple CAVs and IRSUs which are collectively referred to as agents. An agent 𝒜_i is equipped with a LiDAR to localize in a common global frame 𝒢 and detect objects in its surrounding environment. The detection is based on the processing of point cloud sequences using a detection model made of the integration of the Aligner++ presented in the Sec.<ref> into an off-the-shelf single-frame object detector such as PointPillar <cit.>. At a time step t_i, agent 𝒜_i uses a K-point-cloud sequence 𝒮_i = {𝒫^t_i - K + 1_i, …, 𝒫^t_i_i} as an input to its detection model. An object b_i, j detected by agent 𝒜_i is parameterized by a nine-vector [x, y, z, w, l, h, θ, s, c]. The first seven numbers localize the object by its center location [x, y, z], size [w, l, h], heading direction θ. The last two numbers s and c respectively denote confidence score s and the predicted class c. Upon receiving a query having timestamp t, agent 𝒜_i will communicate its detection ℬ_i^t_i = {𝐛_i,j}_j=1^M_i and metadata produced timestamp t_i that is prior to and closest to t. The agent's metadata produced at a timestamp t_i is made of the timestamp itself and the agent's pose ℰ_i(t_i) at this timestamp. Given this setting, we aim to enhance the ego vehicle's capacity of detecting objects by fusing its point cloud sequence 𝒮_e = {𝒫^t - K + 1_e, …, 𝒫^t_e} with the MoDAR interpretation of predictions ℬ_i^t_i made by other agents. The following section provides an overview of the origin of MoDAR and presents in detail how we adapt this concept to collaborative perception via the V2X context. §.§.§ MoDAR for Object Detection on Point Cloud Sequences MoDAR is created to enable detecting objects in extremely long point cloud sequences (hundreds of frames). In the MoDAR framework, a sequence is divided into several short sequences where objects are detected by a single-frame detector, tracked by a simple multi-object tracktor (e.g., <cit.>). Then, the data-driven motion forecasting model MultiPath++<cit.> predicts objects' future poses based on objects' trajectories established by the tracktor. Using prediction about objects' future poses, detected objects in each short subsequence are propagated to the desired time step (e.g., the present). Next, each propagated object, which is represented as an up-right 3D bounding box (parameterized by the location of its center, size, and heading) with a confidence score and a class, is interpreted into a 3D point that takes the box's center as its coordinate and the box's size, heading, confidence score and class as features. These points interpreted from 3D bounding boxes are referred to as MoDAR points. They enable packing an entire subsequence into a small number of points, thus enabling an efficient fusion of extremely long point cloud sequences. §.§.§ V2X Collaboration using MoDAR Points We draw the following similarity between single-vehicle object detection on point cloud sequences and V2X collaborative detection: these two tasks share the common challenge of finding an effective method for fusing information obtained from different perspectives caused by the motion of the ego vehicle in the case of point cloud sequences and the presence of other agents in the case of collaboration via V2X. Based on this observation, we use MoDAR points as the medium for conveying information among agents in the V2X network. 
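Under this setting, the information exchanged per agent is small. One possible message layout is sketched below; the field names are ours rather than a standardized V2X payload, and shipping the pooled per-box scene flow (instead of propagating the boxes on the sender side) is one of two equivalent choices.

from dataclasses import dataclass
import numpy as np

@dataclass
class V2XMessage:
    # One agent's reply to a query at time t; field names are assumptions of this sketch.
    agent_id: int
    stamp: float            # t_i: timestamp of the newest detections preceding the query
    pose: np.ndarray        # 4x4 pose of the agent in the shared global frame at t_i
    boxes: np.ndarray       # (M_i, 9) detections: x, y, z, w, l, h, theta, score, class
    box_flows: np.ndarray   # (M_i, 3) scene flow pooled per box from the agent's point flows

def answer_query(query_stamp, cached_results):
    # return the newest cached result whose timestamp precedes the query (no interpolation)
    older = [msg for msg in cached_results if msg.stamp <= query_stamp]
    return max(older, key=lambda m: m.stamp) if older else None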
Specifically, we interpret an object detected by agent 𝒜_i as a 3D bounding box 𝐛_i,j = [x, y, z, w, l, h, θ, s, c] to a MoDAR point 𝐦_i,j by assigning * [x, y, z] to the coordinate of 𝐦_i,j * [ w, l, h, θ, s, c] to 𝐦_i,j's features The challenge in our V2X setting is that different agents detect objects at different rates, thus forcing the ego vehicle to utilize MoDAR points made by other agents at passed time steps. This timestamp mismatch results in a spatial misalignment between exchanged MoDAR points and ground truth dynamic objects, which can diminish the benefit of collaboration or even decrease the ego vehicle's accuracy. This challenge is encountered in the context of single-vehicle detection on point cloud sequences as well because dynamic objects change their poses from one subsequence to another. <cit.> resolves this by predicting objects' pose using a multi-object tracktor and the motion forecasting model MultiPath++. While this is feasible in the V2X context, the implementation of add-on modules for future pose prediction does not align with our design target of minimal architecture. Instead, we use the scene flow to propagate MoDAR points from a passed timestep to the timestep queried by the ego vehicle. Since MoDAR points are virtual, their scene flow is not estimated directly from a point cloud sequence but is aggregated from the scene flow of points residing in the box they represent. To be concrete, let 𝐦_i, j be a MoDAR point representing a 3D bounding box B_i, j detected by agent 𝒜_i at timestep t_i. 𝐩_i, h and 𝐟_i, h respectively denote a real 3D point in the concatenation of agent 𝒜_i's point cloud sequence 𝒮_i and its predicted scene flow. The scene flow 𝐟^𝐦_i, j of 𝐦_i, j is computed by 𝐟^𝐦_i, j = mean{𝐟_i, h  |  𝐩_i, h∈B_i, j} Once its scene flow is obtained, the MoDAR point 𝐦_i, j is propagated to the timestep t queried by the ego vehicle as following [𝐦̂_i, j]_x,y,z = [𝐦_i, j]_x,y,z + t - t_i/|𝒮_i| 𝐟^𝐦_i, j Here, |𝒮_i| denotes the length of the point cloud sequence 𝒮_i measured in seconds. [·]_x,y,z is the operator that extracts 3D coordinate of a MoDAR point. Finally, propagated MoDAR point is transformed from the agent 𝒜_i's pose ℰ_i(t_i) at time step t_i to the ego vehicle pose ℰ_e(t) at time step t using the localization in the common global frame 𝒢 of the two agents ^ℰ_e(t)[𝐦̂_i, j]_x,y,z =  ^𝒢𝐓^-1_ℰ_e(t)  ^𝒢𝐓_ℰ_i(t_i)  [𝐦̂_i, j]_x,y,z The concatenation between the set of MoDAR points received from other agents and the ego vehicle's raw point cloud (resulting from concatenating its own point cloud sequence) is done straightforwardly by padding * points in the ego vehicle's raw point cloud with null vectors representing features of MoDAR points, which are boxes' size, heading, score, and class * MoDAR points with null vectors representing features of points in the ego vehicle's raw point cloud which are points' intensity and time-lag. Once exchanged MoDAR points and the point cloud of the ego vehicle are merged, the result can be processed by the single-vehicle model developed in the Sec.<ref> without changing its architecture. § EXPERIMENTS AND RESULTS §.§ Single-Vehicle Perception §.§.§ Datasets and Metrics As point clouds' resolution has a large impact on the performance of LiDAR-based models, we test our model on three resolutions: 16, 32, and 64. 
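The ego-side handling of a received message, as described in the preceding paragraphs, amounts to a few operations: shift the MoDAR points by the pooled flow scaled by the time lag, map them into the ego frame at the query time, and pad both point sets into a common feature space. The sketch below is illustrative only (hypothetical field names; the heading feature is left untouched, mirroring the coordinate-only transformation above).

import numpy as np

def modar_from_message(msg, query_stamp, ego_pose, seq_duration=0.5):
    # msg: a V2XMessage-like object (see the sketch above); ego_pose: 4x4 ego pose in the
    # shared global frame at the query time; seq_duration: |S_i| in seconds.
    modar = msg.boxes.astype(np.float64).copy()            # (M, 9): centre + w, l, h, theta, score, class
    # shift the centres by the pooled scene flow, scaled by the time lag (t - t_i) / |S_i|
    modar[:, :3] += (query_stamp - msg.stamp) / seq_duration * msg.box_flows
    # map the propagated centres from the sender's pose at t_i into the ego frame at t
    rel = np.linalg.inv(ego_pose) @ msg.pose
    homo = np.hstack([modar[:, :3], np.ones((len(modar), 1))])
    modar[:, :3] = (homo @ rel.T)[:, :3]                   # heading feature left untouched
    return modar

def merge_with_ego_cloud(ego_points, modar_points):
    # pad each side with null features so that both live in a common feature space
    ego = np.hstack([ego_points, np.zeros((len(ego_points), modar_points.shape[1] - 3))])
    mod = np.hstack([modar_points[:, :3],
                     np.zeros((len(modar_points), ego_points.shape[1] - 3)),
                     modar_points[:, 3:]])
    return np.vstack([ego, mod])                           # input to the unchanged detector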
While 32-channel and 64-channel point clouds are readily available in NuScenes dataset <cit.> and KITTI dataset <cit.> respectively, to the best of our knowledge, there are no publicly available datasets containing 16-channel point clouds. As a result, we synthesize a 16-channel dataset from the NuScenes dataset using the downsampling approach of <cit.>. The NuScenes dataset is made of 850 20-second scenes split into 700 scenes for training and 150 scenes for validation. Each scene comprises data samples collected by a multimodal sensor suite including a 32-beam LiDAR operating at 20 Hz. In NuScenes' convention, a keyframe is established once all sensors are in sync which happens every half a second. For each keyframe, objects are annotated as 3D bounding boxes. In addition, each object is assigned a unique ID that is kept consistent throughout a scene which enables us to generate the ground truth for the rectification transformation and scene flow. The downsampling of each point cloud in the NuScenes dataset to synthesize a 16-channel one is done by first identifying its points' beam index via K-mean (K is 32 in the case of NuScenes) clustering on their azimuth coordinate, then assigning an equivalent beam index with respect to 16-channel LiDAR. More details can be found in <cit.>. Hereon, we refer to this synthesized dataset as NuScenes-16. The KITTI dataset's 3D Object Detection partition contains 7481 and 7581 samples for training and testing. Each sample comprises sensory measurements collected by a 64-beam LiDAR operating at 10Hz and several cameras. A common practice when working with KITTI is to split the original training data into 3712 training samples and 3769 validation samples for experimental studies. A challenge we encounter when using KITTI is that its 3D Object Detection partition contains temporally disjointed samples, thus being not straightforward to obtain input (point cloud sequences) and ground truth (scene flow) for our model. We resolve this challenge by matching samples in the 3D Object Detection partition with KITTI's raw sequences. Since there are a few raw sequences that do not have tracklet annotation, meaning objects are not tracked, we only retain data samples whose associated raw sequences have tracklets (i.e., trajectories of objects) in the training set and validation set. Metrics We use mean Average Precision (mAP) to measure the performance of our model on the 3D object detection task. A prediction is matched with the closest ground truth, measured by an affinity. A match is considered valid if the affinity is below a predefined threshold. For each threshold, the average precision is obtained by integrating the recall-precision curve for recall and precision above 0.1. The mAP is the mean of the average precision of the threshold set. For NuScenes, the affinity is the Euclidean distance on the ground plane between centers of predictions and ground truth. This distance has four thresholds: 0.5, 1.0, 2.0, and 4.0. For KITTI, the affinity is the Intersection-over-Union (IoU) in the BEV plane. Unlike NuScenes, KITTI uses only one affinity threshold for each class which is 0.7 for cars and 0.5 for pedestrians. The evaluation of scene flow prediction is based on the set of standard metrics proposed by <cit.> which includes End-Point Error (EPE), strict/ relaxed accuracy (AccS/ AccR), and outlier (ROutliers). The EPE is the Euclidean distance between the predicted scene flow and their ground truth average over the total number of points. 
The AccS/AccR is the percentage of points having either EPE < 0.05/0.10 meters or relative error < 0.05/0.10. The ROutliers is the percentage of points whose EPE > 0.30 meters and relative error > 0.30. §.§.§ Experiments and Results The implementation of the single-vehicle perception model follows our previous work <cit.>, which uses the concatenation of 0.5-second point cloud sequences by EMC as the input. Since NuScenes and KITTI respectively obtain point clouds at 20 and 10 Hz, a 0.5-second sequence contains 10 and 5 point clouds. We use the same architecture, which uses PointPillar <cit.> as its backbone and CenterHead <cit.> as its RPN, for every experiment. In experiments on NuScenes, the detection range is limited to [-51.2, 51.2] on the XY plane and [-5.0, 3.0] along the Z axis. For experiments on KITTI, point clouds are first cropped to the left-color camera's field of view, as annotations are only provided for objects that lie within the left-color camera's image. The detection range on the KITTI dataset is [0, 69.12] × [-39.68, 39.68] × [-3, 1] along the X, Y, and Z axes. Our implementation is based on OpenPCDet <cit.>. The details on the model's hyperparameters can be found in our code release. The evaluation of the Aligner++ on the scene flow estimation task is done using the NuScenes dataset. The result shown in Tab.<ref> indicates a significant improvement compared to our previous work <cit.>, reaching the state of the art on the accuracy-related metrics, namely AccS, AccR, and ROutliers, while maintaining a low inference time. However, we obtain a relatively high EPE because the evaluation is done for every point in the point cloud. This means the evaluation is carried out both for ground truth foreground points, which have an associated ground truth scene flow (nonzero or zero depending on whether the object is dynamic), and for ground truth background points, to which we assign the null vector as ground truth scene flow. Since the classification module of the Object Head inevitably makes false positive/ false negative foreground predictions, a number of background/ foreground points are predicted to have nonzero/ zero scene flow. Even though the portion of false predictions is small, as indicated by the accuracy (AccS, AccR) and outlier metrics, the magnitude of their error is large, due to the fact that either the prediction or the ground truth is zero, thus resulting in a large EPE. The better scene flow estimation made by the Aligner++ results in a higher detection accuracy on the NuScenes dataset, which can be seen in Tab.<ref>. The integration of Aligner++, made of the HD Map and distillation using the teacher-student framework, improves the mAP averaged over 10 classes of objects of a plain PointPillar by 6 points, which is almost double the gain brought by the Aligner (3.4 points). Interestingly, the distillation is responsible for most (5.6 out of 6 points) of the success of the Aligner++. The robustness of our Aligner++ is demonstrated by experiments on the KITTI and the synthetic NuScenes-16 datasets. As can be seen in Tab.<ref> and Tab.<ref>, the integration of Aligner++ consistently improves the accuracy of detecting objects in point cloud sequences. A common point between these two tables is that the performance gain attributable to the Aligner++ is larger for vehicle-like classes (e.g., car, truck, or trailer). This is because the scene flow ground truth is generated using the rectification transformation (<ref>), which is established based on the rigid motion assumption.
This assumption holds for vehicle-like classes while being an oversimplification for pedestrians. As a result, the estimation of scene flow for vehicle-like classes is more accurate, which leads to higher detection accuracy. Another important result that can be drawn from Tab.<ref> and Tab.<ref> is that the effectiveness, in terms of detection accuracy, of using point cloud sequences over single point clouds persists across various LiDAR resolutions. §.§ V2X Collaborative Perception - Experiments §.§.§ Dataset and Metric To evaluate our collaboration framework, we use the V2X-Sim 2.0 dataset <cit.>, which is made using CARLA <cit.> and the traffic simulator SUMO <cit.>. This dataset is made of 100 100-frame sequences of traffic taking place at intersections of three CARLA towns, namely Town 3, Town 4, and Town 5. Each sequence contains data samples recorded at 5 Hz. Each data sample comprises raw sensory measurements made by the ego vehicle, one to four CAVs, and an IRSU, which is placed at an elevated position that has a large, minimally occluded field of view of the intersection. Every vehicle and the IRSU are equipped with a 32-channel LiDAR. All agents are in sync, so the data they collect share the same timestamps. The V2X-Sim 2.0 dataset provides object annotations for each data sample. The official training, validation, and testing splits are made of temporally disjoint data samples from the three towns, chosen such that there is no overlap in terms of intersections. Since we need point cloud sequences as input to our models, we can't use the official splits. Instead, we use sequences in Town 4 and Town 5 as the training set, and those in Town 3 as the validation set, thus ensuring there is no intersection overlap. This choice results in an 8900-data-sample training set and a 1100-data-sample validation set. Since this dataset follows the format of NuScenes, we use NuScenes' implementation of mean Average Precision (mAP) (details in Sec.<ref>) to measure the performance of our framework and baselines. §.§.§ Implementation, Experiments and Results In the convention of the V2X-Sim dataset, the IRSU and the ego vehicle are assigned the ids 0 and 1, respectively, while other CAVs get ids ranging from 2 to 5. To test our collaboration method's ability to handle asynchronously exchanged information, we set the time lag between the timestamp t of the ego vehicle's query and the timestamp t_i of the detection ℬ_i = {𝐛_i, j} made by agent 𝒜_i (i ∈{0, 2, 3, 4, 5}) to the time gap between two consecutive data samples of V2X-Sim, which is 0.2 seconds. We benchmark our approach to collaborative perception against two extremes of the performance-bandwidth spectrum, which are Late and Early Collaboration. Late Collaboration achieves the minimal bandwidth usage by fusing the detection ℬ_1 = {𝐛_1, j}_j=1^|ℬ_1| made by the ego vehicle with the detections made by other agents {ℬ_i | i ∈{0, 2, 3, 4, 5}} using Non-Max Suppression. We evaluate this baseline under three settings:
* asynchronous exchange, where the gap between t_i and t is 0.2 seconds as described above
* asynchronous exchange with ℬ_i propagated from t_i to t using scene flow by the procedure described in section <ref>
* synchronous exchange, where there is no gap between t and t_i
The second baseline, Early Collaboration, reaches high performance by exchanging the entire raw point cloud sequences 𝒮_i = {𝒫^t_i - K + 1_i, …, 𝒫^t_i_i} collected by each agent 𝒜_i.
To explore the upper bound of V2X collaborative perception, we only apply the synchronous exchange to this baseline. In the implementation of our V2X collaboration framework, every agent uses the single-vehicle detection model that is developed in Sec.<ref>, without the teacher module. The architecture and hyper-parameters are kept the same as in the single-vehicle experiments of Sec.<ref>. While previous works using the V2X-Sim dataset <cit.> set the detection range to [-32, 32] meters along the X and Y axes, centered on the ego vehicle, we extend this range to [-51.2, 51.2] to better demonstrate the performance gain brought by collaborative perception via V2X. To verify the ability of V2X collaboration to enhance single-vehicle perception, we evaluate our approach and baselines in the setting where ground truths are made of objects visible to the ego vehicle, meaning their bounding boxes contain at least one point of the point cloud 𝒫_e^t, which the ego vehicle obtained at the time step of query t. The result of this evaluation is summarized in Tab.<ref>, which shows that late collaboration in all three settings is not beneficial, as the best late collaboration is 4.42 mAP behind the single-agent perception. This is because True Positive (TP) detections made by other agents are counted as false positives if they are not visible to the ego vehicle. In addition, ill-localized but overly confident detections made by other agents can suppress good detections made by the ego vehicle, thus further reducing the overall performance. Remarkably, our collaboration approach using MoDAR points outperforms the early collaboration by 3.35 mAP while consuming the same amount of bandwidth as late collaboration. This reaffirms our design philosophy that a good multi-agent collaborative perception framework can be made on the foundation of a good single-agent perception model and a simple collaboration method. Besides enabling the ego vehicle to more accurately detect objects that are visible to itself, another great benefit of collaborative perception is to help the ego vehicle see invisible objects - those that do not contain any points of its point cloud 𝒫_e^t - thus overcoming the challenge of occlusion and sparsity. To demonstrate this, we relax the eligibility criterion of ground truth objects such that they only need to contain at least one point emitted by any agent in the V2X network. The performance shown in Tab.<ref> indicates that on this larger set of ground truths, Late Collaboration does improve performance compared to single-vehicle (no collaboration). The improvement is significant (at least 8.35 mAP) even in the poorest setting, where the set of exchanged detections {ℬ_i | i ∈{0, 2, 3, 4, 5}} is 0.2 seconds behind the time of query, thus resulting in a spatial misalignment between exchanged detected objects and their associated ground truths if the underlying objects are dynamic. This can be explained by a significant number of static and slow-moving objects present in an intersection whose past detections remain true positives at the present. When detections of dynamic objects are better accounted for, as in the sync setting where agents exchange their detections at the same timestamp as the query, the performance is largely improved, by 9.29 mAP, compared to the async setting. However, this sync setting is unrealistic because different agents have different detection rates. Interestingly, propagating detections using scene flow as in the async prop.
setting can reach 96.20% of the performance of the sync setting (2.68 mAP behind). This demonstrates the effectiveness of the scene flow estimated by our Aligner++. In this setting, our collaboration method using MoDAR no longer outperforms the early collaboration baseline, as some distant objects (with respect to the ego vehicle) are inevitably missed by connected agents, thus remaining invisible to the ego vehicle and impossible to detect. However, our method still maintains a favorable performance of 98.2% of the early collaboration's while attaining bandwidth usage as low as late collaboration's.
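As a concrete companion to the scene flow evaluation protocol defined above (EPE, AccS/AccR, ROutliers), the following is a minimal NumPy sketch of how those metrics can be computed from predicted and ground truth flow arrays. The function and array names are our own illustration, not code from the released implementation.

```python
import numpy as np

def scene_flow_metrics(pred_flow, gt_flow):
    """pred_flow, gt_flow: (N, 3) arrays of per-point flow vectors in meters."""
    # End-Point Error: Euclidean distance between prediction and ground truth,
    # averaged over the total number of points.
    epe_per_point = np.linalg.norm(pred_flow - gt_flow, axis=1)
    relative_err = epe_per_point / np.maximum(np.linalg.norm(gt_flow, axis=1), 1e-6)
    epe = epe_per_point.mean()
    # AccS / AccR: fraction of points with EPE < 0.05 / 0.10 m OR relative error < 0.05 / 0.10.
    acc_s = np.mean((epe_per_point < 0.05) | (relative_err < 0.05))
    acc_r = np.mean((epe_per_point < 0.10) | (relative_err < 0.10))
    # ROutliers: fraction of points with EPE > 0.30 m AND relative error > 0.30.
    r_outliers = np.mean((epe_per_point > 0.30) & (relative_err > 0.30))
    return epe, acc_s, acc_r, r_outliers
```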
http://arxiv.org/abs/2307.00474v1
20230702050338
Moments, Random Walks, and Limits for Spectrum Approximation
[ "Yujia Jin", "Christopher Musco", "Aaron Sidford", "Apoorv Vikram Singh" ]
cs.DS
[ "cs.DS", "cs.LG" ]
Query-Efficient Decision-based Black-Box Patch Attack Zhaoyu Chen, Bo Li, Shuang Wu, Shouhong Ding, Wenqiang Zhang Corresponding authors are Bo Li and Wenqiang Zhang. Zhaoyu Chen and Wenqiang Zhang are with Academy for Engineering and Technology, Fudan University, Shanghai, China, and also with Yiwu Research Institute of Fudan University, Yiwu, China. The emails of these authors are: [email protected], [email protected], [email protected]. =================================================================================================================================================================================================================================================================================================================================================================================================================================== We study lower bounds for the problem of approximating a one dimensional distribution given (noisy) measurements of its moments. We show that there are distributions on [-1,1] that cannot be approximated to accuracy ϵ in Wasserstein-1 distance even if we know all of their moments to multiplicative accuracy (1±2^-Ω(1/ϵ)); this result matches an upper bound of Kong and Valiant [Annals of Statistics, 2017]. To obtain our result, we provide a hard instance involving distributions induced by the eigenvalue spectra of carefully constructed graph adjacency matrices. Efficiently approximating such spectra in Wasserstein-1 distance is a well-studied algorithmic problem, and a recent result of Cohen-Steiner et al. [KDD 2018] gives a method based on accurately approximating spectral moments using 2^O(1/ϵ) random walks initiated at uniformly random nodes in the graph. As a strengthening of our main result, we show that improving the dependence on 1/ϵ in this result would require a new algorithmic approach. Specifically, no algorithm can compute an ϵ-accurate approximation to the spectrum of a normalized graph adjacency matrix with constant probability, even when given the transcript of 2^Ω(1/ϵ) random walks of length 2^Ω(1/ϵ) started at random nodes. spectral density estimation, moment methods, random walks, sublinear algorithm § INTRODUCTION A fundamental problem in linear algebra is to approximate the full list of eigenvalues, λ_1 ≤…≤λ_n ∈, of a symmetric matrix A ∈^n × n, ideally in less time than it takes to compute a full eigendecomposition.[All eigenvalues can be computed to precision ϵ in O(n^ω + η(nϵ)) time, where ω≈ 2.373 is the matrix multiplication constant <cit.>. Methods typically used in practice run in time O(n^3 + n^2log(1ϵ)) <cit.>.] We focus on the particular problem of spectral density estimation where given ϵ∈ (0, 1) and the assumption that A_2 ≤ 1, the goal is find approximate eigenvalues λ'_1≤…≤λ'_n such that their average absolute error is bounded by ϵ, i.e., 1/n∑_i=1^n |λ_i - λ'_i| ≤ϵ. This problem is equivalent to that of computing an ϵ-approximation in Wasserstein-1 distance to the distribution on [-1,1] induced by the spectral density (function) of A, i.e. p(x) 1/n∑_i=1^n δ(x-λ_i) for indicator function δ (see sec:prelims for notation). Spectral density estimation is distinct from and in many ways more challenging than related problems like low-rank approximation, where we only seek to approximate the largest magnitude eigenvalues of A. Nevertheless, efficient randomized algorithms for spectral density estimation were developed in the early 1990s and have been applied widely in computational physics and chemistry <cit.>. 
These algorithms, which include the kernel polynomial and stochastic Lanczos quadrature methods, achieve ϵ accuracy with high probability in roughly O(n^2/ϵ) time, improving on the Ω(n^ω) cost of a full eigendecomposition for moderate values of ϵ <cit.>. More recently, there has been a resurgence of interest in spectral density estimation within the machine learning and data science communities. Research activity in this area has been fueled by emerging applications in analyzing and understanding deep neural networks <cit.>, in optimization <cit.>, and in network science <cit.>. §.§ Spectral Density Estimation for Graphs Interestingly, when A is the normalized adjacency matrix[ If à is the unnormalized adjacency matrix of G and D is its diagonal degree matrix, we can equivalently consider the asymmetric matrix, D^-1à or the symmetric one, D^-1/2ÃD^-1/2, as they have the same eigenvalues.] of an undirected graph G, there are faster spectral density estimation algorithms than for general matrices. Specifically, assume that we can randomly sample a node from G and, given a node, randomly sample a neighbor, both in O(1) time. This is possible, for example, in the word RAM model when given arrays containing the neighbors for each node in G, and is also a commonly assumed access for computing on extremely large implicit networks <cit.>. It was recently shown that the O(n^2/ϵ) runtime of general purpose algorithms like stochastic Lanczos quadrature can be improved to Õ(n/(ϵ)) <cit.>.[We use Õ(m) to denote O(mlog m). The runtime in <cit.> can be improved by a logarithmic factor to O(n/(ϵ)) if we have access to a precomputed list of the degrees of nodes in G.] This runtime is sublinear in the size of A, e.g., when the matrix has Ω(n^2) non-zero entries. Perhaps even more surprisingly, it is possible to solve spectral density estimation for normalized adjacency matrices without any dependence on n. Suppose that we are given a weighted graph G, and again that we can randomly sample a node from G in O(1) time. Also assume that, for any given node, we can randomly sample a neighbor with probability proportional to its edge weight in O(1) time. In other words, we can initialize and take steps of an edge-weighted random walk in G in O(1) time.[To be more concrete, if a node x is connected to neighbors y_1, …, y_d with edge weights w_1, …, w_d, then the walk steps from x to y_i with probability w_i/∑_j w_j.] Then <cit.> gives an algorithm for any weighted undirected graph that solves the spectral density estimation problem with high probabilty in 2^O(1/ϵ) time[Note that <cit.> output a list of approximation eigenvalues λ'_1, …, λ'_n with only O(1/ϵ) distinct values that can be stored and returned in time independent of n.]. While completely independent of the graph size, the poor dependence on ϵ in the result of <cit.> unfortunately makes the algorithm impractical for any reasonable level of accuracy. As such, an interesting question is whether the exponential dependence on ϵ can be improved (maybe even to polynomial), while still avoiding any dependence on the graph size n. Can we solve the spectral density estimation problem for a normalized adjacency matrix A given access to 2^o(1/ϵ) steps of random walks in the associated graph? Central to this question is the connection between spectral density estimation and the problem of learning a one dimensional distribution p given noisy measurements of p's (raw) moments. 
In this work, we consider distributions supported on the interval [-1,1], in which case these moments are: ∫_-1^1 x p(x) dx, ∫_-1^1 x^2 p(x) dx, ∫_-1^1 x^3 p(x) dx, … Recent work of <cit.> shows that, for a fixed constant c, if the first ℓ = c/ϵ moments of any two distributions p and q supported on [-1,1] match exactly, then the Wasserstein-1 distance between those distributions is at most ϵ. Given that the left hand side of (<ref>) exactly equals the Wasserstein-1 distance W_1(p,q) between the discrete distributions p(x) = 1/n∑_i=1^n δ(x-λ_i) and q(x) = 1/n∑_i=1^n δ(x-λ_i'), the approach in <cit.> is to approximate the first ℓ moments of p, and then to find a set of approximate eigenvalues and eigenvalue multiplicities that correspond to a discrete distribution q with the same moments. Given the approximate moments, finding q can be done in poly(ℓ) time using linear programming algorithms. Computing the estimates of p's moments is more challenging. <cit.> take advantage of the fact that for any j ≤ℓ, the j^th moment of p is equal to 1/n∑_i=1^n λ_i^j = 1/n tr(A^j). This trace can in turn be estimated by random walks of length j in A: if we start a random walk at a random node v, the probability that we return to v at the j^th step is exactly equal to 1/n tr(A^j). So, we can obtain an unbiased estimate for the j^th moment by simply running random walks from random starting nodes and calculating the empirical frequency that we return to our starting point. This approach leads to the remarkably simple algorithm of <cit.>. So where does the 2^O(1/ϵ) runtime dependence come from? The issue is that the result of <cit.> is brittle to noise. In particular, if the sum of squared distances between p's moments and q's moments differs by Δ, the bound from <cit.> weakens, only showing that the Wasserstein-1 distance is bounded by O(1/ℓ + Δ· 3^ℓ). To obtain accuracy ϵ, it is necessary to set ℓ = O(1/ϵ) and thus Δ equal to 2^-O(1/ϵ). By standard concentration inequalities, to obtain such an accurate estimate of p's moments, we need to run an exponential number of random walks of length 1, …, ℓ. Accordingly, an important step towards answering ques:question1 is to understand if such extremely accurate estimates of the moments are necessary for spectral density estimation. Note that many other spectral density estimation algorithms for general matrices are also based on moment-matching. A common approach is to use randomized trace estimation methods <cit.> to estimate moments of the form ∫_-1^1 T_j(x)p(x)dx = 1/n tr(T_j(A)), where T_j(x) is a degree j polynomial, not equal to x^j. If T_j is the j^th Chebyshev or Legendre polynomial, then it can be shown that only poly(ϵ) accurate estimates of the first ℓ = c/ϵ moments are needed to approximate the spectral density to ϵ error in Wasserstein-1 distance <cit.>. A natural question then is, can these general polynomial moments be estimated using random walks in time independent of n for graph adjacency matrices? Unfortunately, it is not known how to do so: the challenge is that the ℓ^th Legendre polynomial or Chebyshev polynomial has coefficients exponentially large in ℓ, so tr(T_j(A)) cannot be effectively approximated given a routine for approximating tr(A^j) for different powers j. §.§ Our Contributions In this paper, we answer ques:question1 negatively. First, we show that exponentially accurate moments are necessary for estimating a distribution in Wasserstein-1 distance, even in the special case of distributions that arise as the spectral density of a graph adjacency matrix.
theoremthmmomlb For any ϵ∈ (0, 1/4], there exist weighted graphs G_1 and G_2 (see def:mom) with spectral densities p_1 and p_2, such that:
* The densities are far in Wasserstein-1 distance: W_1(p_1,p_2) ≥ϵ.
* For all positive integers j, the moments m_j(p_1) = ∫_-1^1 x^j p_1(x) dx and m_j(p_2) = ∫_-1^1 x^j p_2(x) dx are exponentially close: (1-δ)m_j(p_1) ≤ m_j(p_2) ≤ (1+δ)m_j(p_1) for some δ≤ 16· 2^-1/(4ϵ).
thm:mom_lb shows that <cit.>'s requirement that each moment be estimated to accuracy 2^-O(1/ϵ) cannot be avoided if we want an ϵ accurate approximation in Wasserstein distance. It thus rules out a direct improvement to the analysis of the spectral density estimation algorithm of <cit.>. In particular, even if we had a procedure that returned exponentially accurate multiplicative estimates of the moments of a graph's spectral density,[When run for O(1/δ^2) steps, the random walk method of <cit.> actually achieves a weaker moment approximation with additive error δ. This is always greater than δ m_ℓ(p_1) because all of p_1's moments are upper bounded by 1 since it is supported on [-1,1].] and even if it returned such estimates for all of the moments (not just the first O(1/ϵ)), then we would not be able to distinguish between G_1 and G_2. Our proof of thm:mom_lb is based on a hard instance built using cycle graphs. It is not hard to show that the spectral densities of two disjoint cycles of length 1/ϵ and of one cycle of length 2/ϵ differ by ϵ in Wasserstein-1 distance. Additionally, it can be shown that the first c/ϵ moments of these graphs are exponentially close. This example would thus prove thm:mom_lb if we restricted our attention to moments of degree j ≤ c/ϵ. However, for the cycle graph, higher moments can be more informative: for example, the j^th moment for j=O(1/ϵ^2) can be shown to distinguish the cycles of different lengths, even when only estimated to polynomial additive accuracy. To see why this is the case, note that, since a random walk of length O(1/ϵ^2) mixes on the cycle, the probability of it returning in the shorter cycle is roughly twice that in the longer cycle. To avoid this issue, we modify the cycle graph to diminish the value of higher degree moments. In particular, we force all high moments close to zero by creating a graph that consists of many disjoint cycles, either of length 1/ϵ or 2/ϵ, joined by a lightweight complete graph on all nodes. If weighted correctly, then any walk of length Ω(1/ϵ) will exit the cycle it starts in (via the complete graph) with high probability, and the chance of returning to its starting point can be made extremely low by making the graph large enough. At the same time, the lower moments are not affected significantly, so we can show that the graphs remain far in Wasserstein-1 distance. thm:mom_lb has potentially interesting implications beyond showing a limitation for graph spectrum estimation. For example, related to the discussion about generalized moment methods above, it immediately implies that for any ℓ, the ℓ^th Chebyshev polynomial cannot be approximated to accuracy 1/poly(ℓ) with a polynomial (of any degree!) whose maximum coefficient is ≤ 2^ℓ. If it could, we could use less than exponentially accurate measures of the raw moments to approximate the Chebyshev moments, and then use these moments to approximate the spectral density, following <cit.>. However, by thm:mom_lb, this is impossible.
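For intuition about the object this lower bound targets, here is a minimal Python sketch of the return-probability moment estimator discussed above: the j^th spectral moment 1/n tr(A^j) of a normalized adjacency matrix is estimated by the empirical frequency with which a length-j random walk returns to its uniformly random starting node. The oracle interface (sample_node, step) is our own assumption for illustration; as discussed above, driving these estimates to the accuracy needed for ϵ error in Wasserstein distance forces an exponential number of walks.

```python
def estimate_moment(sample_node, step, j, num_walks):
    """Unbiased estimate of the j-th spectral moment (1/n) * tr(A^j), via the
    frequency with which a length-j random walk returns to its starting node.
    sample_node(): returns a uniformly random node; step(v): returns a random
    neighbor of v, chosen with probability proportional to edge weight."""
    returns = 0
    for _ in range(num_walks):
        start = sample_node()
        v = start
        for _ in range(j):
            v = step(v)
        returns += int(v == start)   # did the walk return to its starting node?
    return returns / num_walks
```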
While thm:mom_lb rules out direct improvements to the moment-based method of <cit.>, it does not rule out the possibility of some other algorithm that can estimate the spectral density to ϵ accuracy using fewer random walk steps. For example, we could consider methods that use more information about each random walk than checking whether or not the last step returns to the starting node. However, our next theorem shows that, in fact, no such algorithm can beat the exponential dependence on 1/ϵ; we show that, information theoretically, 2^Ω(1/ϵ) samples from random walks started from random nodes are necessary to estimate the spectral density accurately in Wasserstein-1 distance. theoremthmtranscript For any < 1/2, no algorithm that is given access to the transcript of m, length T random walks initiated at m uniformly random nodes in a given graph G can approximate G's spectral density to ϵ accuracy in the Wasserstein-1 distance with probability > 3/4, unless m· T > 1/16· 2^1/4ϵ. While more technical, the proof of thm:transcript_main is based on the same hard instance as thm:mom_lb. The distribution 𝒟 is supported on two graphs that are ϵ far in Wasserstein distance: a collection of cycles of length 1/ϵ added to a lightweight complete graph, and a collection of cycles of length 2/ϵ added to a lightweight complete graph. We establish that, if node labels are assigned at random, the only way to distinguish between these graphs is to complete a walk around one of the cycles. We show that event happens with exponentially small probability for a random walk of any length. §.§ Open Problems and Outlook Our main results open a number of interesting directions for future inquiry. Most directly, the bound from thm:transcript_main is based on an instance involving weighted graphs. It would be great to extend the lower bound to unweighted graphs, which are common in practice. While we believe the same lower bound should hold, such an extension is surprisingly tricky: for example, replacing the lightweight complete graph in our hard instances with, e.g., an unweighted expander graph significantly impacts the spectra of both graphs, making them more challenging to analyze. A bigger open question is to extend our lower bounds to what we call the adaptive random walk model, which means that the algorithm is allowed to start a random walk either at a random node, or at any other node it wishes. Since this model allows for e.g. sampling random neighbors of any node, it is closely related to other access models. For example, up to logarithmic factors, the number of random walk steps required in the adaptive model is equal to the number of memory accesses needed when given access to data structure storing an array of neighbors for each node in the graph <cit.>. Currently, the best lower bound we can prove in the adaptive random walk model is that just Ω(1/ϵ^2) steps are necessary; we show this result in app:adaptive. Proving a lower bound exponential in 1/ϵ or finding a faster algorithm that runs in this model would be a nice contribution. Even a conjectured hard instance would be nice – currently we don't have any. Finally, we note that our graph-based lower bounds show that, with non-adaptive random walks, it is impossible to distinguish if the spectral densities of two graphs are identical or ϵ-far away in Wasserstein-1 distance with 2^o(1/ϵ) steps. Consequently this result constitutes a particular type of hardness for comparing graphs. However, one might consider other notions of graph comparison. 
For example, in app:spectrum-comp, we consider estimating the spectrum of the difference A_1 - A_2 between two normalized adjacency matrices A_1 and A_2 corresponding to graphs G_1 and G_2 with the same node degrees. We show that a 2^O(1/ϵ) upper bound is obtainable. Seeking matching upper and lower bounds for this and related problems is another interesting direction for future work. §.§ Paper Organization In sec:prelims we introduce notation and preliminaries. In sec:mom-lb we prove a lower bound for spectrum estimation based on moments, establishing thm:mom_lb. In sec:lb-weighted we prove a lower bound for spectrum estimation based on random walks, establishing thm:transcript_main. In app:adaptive, we give an Ω(1/ϵ^2) lower bound for approximating graph spectra in the (stronger) adaptive random walk model. In app:Lengendre, we use cycle graphs to construct distributions that are 2/ℓ far in Wasserstein-1 distance and have the same first ℓ-1 moments, slightly strengthening a result from <cit.>. In app:spectrum-comp, we show a new algorithm that uses alternating random walks to estimate the spectrum of the difference of two normalized adjacency matrices. § PRELIMINARIES General notation. We use δ : ℝ→ℝ to denote the indicator function with δ(0) = 1 and δ(x) = 0 for all x ≠ 0. We use 1∈ℝ^n to denote the all ones vector when n is clear from context. We use ℙ[E] to denote the probability of an event E. We let E^c denote the complement of a random event E, so ℙ[E^c] = 1- ℙ[E]. Graphs and graph spectra. We consider undirected graphs G=(V, E) where each edge e∈ E has a non-negative weight w_e ∈ℝ_≥ 0. We call G unweighted when w_e = 1 for all e ∈ E. We use Ã∈ℝ^V× V_≥ 0 to denote the weighted adjacency matrix of G where Ã(v,v') = w_e if e = (v,v')∈ E and Ã(v,v') = 0 otherwise. We use D ∈ℝ^V× V_≥ 0 to denote the diagonal degree matrix of G where D is diagonal with D(v,v) = ∑_e = (v,v') ∈ E w_e for all v ∈ V. We let A(G)∈ℝ^V× V denote the normalized adjacency matrix of G, i.e. A(G) = D^-1/2Ã D^-1/2. We refer to D^-1Ã as the random walk matrix and note that, for degree-regular graphs, A(G) = D^-1Ã. For an n-vertex graph G, we let -1≤λ_1≤λ_2≤⋯≤λ_n≤1 be the eigenvalues of the normalized adjacency matrix A(G), and use λ = λ(G) to denote this sorted (in ascending order) eigenvalue list. We let p(x):[-1,1]→ [0,1] denote the spectral density of G, i.e., p(x) = 1/n∑_i ∈ [n]δ (x - λ_i), which is the density of the distribution on [-1,1] induced by λ_i (for brevity, we do not distinguish between spectral density and the distribution it induces). We use m_j(p) to denote the j^th moment of p, i.e., m_j(p) = 1/n tr(A(G)^j). Wasserstein distance. In this work, we consider the standard Wasserstein-1 distance between distributions, which we may simply refer to as the Wasserstein distance for brevity. The Wasserstein-1 distance W_1(p_1, p_2) between two distributions, p_1 and p_2, supported on the real line is defined as the minimum cost of moving probability mass in p_1 to p_2, where the cost of moving probability mass from value a to b is |a-b|. Concretely, let Ψ be the set of all couplings ψ(x,y) between p_1 and p_2, i.e., Ψ contains all joint distributions ψ(x,y) over x∈ℝ and y∈ℝ with marginals equal to p_1 and p_2. Then: W_1(p_1, p_2) = min_ψ∈Ψ∫_ℝ∫_ℝ |x-y|·ψ(x,y) dx dy A well known fact is that the Wasserstein-1 distance has a dual characterization. Specifically, [Kantorovich-Rubinstein Duality <cit.>] W_1(p_1,p_2) = sup_f: 1-Lipschitz∫_ℝ f(x) · (p_1(x) - p_2(x)) dx .
Above, the supremum is taken over all 1-Lipschitz functions f, i.e., functions that satisfy |f(a) - f(b)| ≤ |a - b| for all a,b∈ℝ. Overloading notation, for graphs G_1 and G_2 with spectral densities p_1 and p_2 respectively, we let W_1(G_1,G_2) = W_1(p_1,p_2) denote the Wasserstein-1 distance between p_1 and p_2. We note that, for any two n-vertex graphs G_1 and G_2, it can be checked (see, e.g. <cit.>) that: W_1(G_1,G_2) = 1/n·‖λ(G_1) - λ(G_2)‖_1 Access models. As discussed in the introduction, we consider several possible data access models for estimating the spectral density of a normalized graph adjacency matrix, A(G) for G = (V,E,w). First, we consider algorithms that, for some integer j ≥ 0 and accuracy parameter δ, have access to δ-accurate approximations, m̃_1, …, m̃_j, to the first j moments of G's spectral density p, m_1(p), …, m_j(p). Specifically, we have that |m̃_j - m_j(p)| ≤δ· m_j(p). A natural generalization of the setting where approximate moments are available is to consider algorithms that access G via random walks, since repeated random walks can be used to approximate moments <cit.>. In this work, we primarily consider a non-adaptive random walk model, where the algorithm can run m random walks each of length T ≥ 1, starting at m vertices v_0^(1), …, v_0^(m) chosen uniformly at random from G. For each walk, the algorithm can observe the entire sequence of vertex labels visited, in order. We call this information the walk “transcript” and denote the set of transcripts by S={S_1,⋯, S_m}. Note that, at vertex v, the probability that the next vertex in the random walk is equal to v' is the (v,v') entry of D^-1Ã. In app:adaptive, we also consider the richer random walk model that we refer to as the adaptive random walk model, where the algorithm can choose the starting nodes v_0^(1), …, v_0^(m). This is in contrast to the non-adaptive random walk model where starting nodes are uniformly random. Cycle spectra. Our lower bound instances in this paper involve collections of cycle graphs. We let R_c denote an undirected cycle graph of length c, and we let R_c^k denote a collection of k such cycles. Recall that we use A(R_c^k) to denote the normalized adjacency matrix and λ(R_c^k) for a sorted list of eigenvalues for the normalized adjacency matrix. We leverage the following basic lemma on the spectrum of cycle graphs. For any odd integer ℓ, the eigenvalues of A(R_ℓ) are cos(2π k/ℓ) with multiplicity 2 for 0 < k < ℓ/2 and 1 with multiplicity 1. The eigenvalues of A(R_2ℓ) are cos(π k/ℓ) with multiplicity 2 for 0 < k < ℓ and ± 1 each with multiplicity 1. Further, we have W_1(R^2_ℓ,R_2ℓ) = 1/ℓ. The eigenvalues of the normalized adjacency matrix of cycle graphs are well known and can be found, e.g., in <cit.>. The Wasserstein distance immediately follows since we have: ‖λ(R_ℓ^2)-λ(R_2ℓ)‖_1 = |1 - cos(π /ℓ)| + |cos(2π /ℓ) - cos(π /ℓ)| + … + |-1 - cos(π (ℓ-1) / ℓ)| = 1-cos(π /ℓ)+cos(π /ℓ)-cos(2π /ℓ)+⋯+ cos(π (ℓ-1) / ℓ)-(-1) = 2 The first j<ℓ moments of the spectral density of R^2_ℓ and R_2ℓ are the same. This is true because the number of ways a walk of length j < ℓ can return to its starting node is the same in both R^2_ℓ and R_2ℓ: 2 · (j choose j/2) for even j and 0 for odd j. § LIMITS ON MOMENT ESTIMATION METHODS In this section, we construct two weighted graphs G_1, G_2 with the same number of vertices, i.e., |V_1| = |V_2|, that we prove are ϵ-far in Wasserstein distance but have exponentially close moments. We detail the construction in the definition below.
G_1 is constructed by starting with a collection of 2nℓ isolated vertices and 2n disjoint cycles, each of size ℓ. G_2 is constructed by starting with a collection of 2nℓ isolated vertices and n disjoint cycles, each of size 2ℓ. In both graphs, the edges in the cycle have weight 1/4 and every vertex in a cycle is then connected to all other cycle vertices with weight 1/(4nℓ) (including a self-loop); the isolated vertices only have a self-loop with weight 1. We choose ℓ to be an odd number and let n = ⌈ 2^ℓ/4⌉. Note that each graph has 4nℓ vertices. See fig:mom_graph for a visual representation of the construction from def:mom and fig:mom_plot for a plot of the spectra of G_1 and G_2. We bound the Wasserstein distance between these spectra below. For weighted graphs G_1, G_2 constructed in def:mom, W_1(G_1,G_2) = 1/(4ℓ). Let 𝐈 denote a 2nℓ× 2nℓ identity matrix. The normalized adjacency matrices of the two graphs are A(G_1) = [ (1/2)· A(R_ℓ^2n) + (1/2)·(1/(2nℓ))·11^⊤, 0; 0, 𝐈 ], and A(G_2) = [ (1/2)· A(R_2ℓ^n) + (1/2)·(1/(2nℓ))·11^⊤, 0; 0, 𝐈 ]. Recall that we use R_ℓ^2n to denote the graph of 2n disjoint cycles of size ℓ, and R_2ℓ^n to denote the graph of n disjoint cycles of size 2ℓ, respectively. Additionally, recall we use λ(G_1) and λ(G_2) to denote the sorted (in ascending order) eigenvalues of A(G_1) and A(G_2), and λ(R_ℓ^2n) and λ(R_2ℓ^n) for the sorted eigenvalue lists of A(R_ℓ^2n) and A(R_2ℓ^n), respectively. Since A(R_ℓ^2n) and A(R_2ℓ^n) are regular graphs and both commute with 1 1^⊤, they both share the same eigenvectors as 1 1^⊤. For simplicity of notation we let ℛ_1 = R_ℓ^2n, ℛ_2 = R_2ℓ^n. For i ∈ [2] we have: λ_j(G_i) = (1/2)·λ_j(ℛ_i) for j ∈{1,2,⋯,2nℓ-1}, and λ_j(G_i) = 1 for j ∈{2nℓ, 2nℓ+1,⋯, 4nℓ}. This implies W_1(G_1,G_2) = 1/(4nℓ)·‖λ(G_1) - λ(G_2)‖_1 = (1/2)·1/(4nℓ)·‖λ(ℛ_1) - λ(ℛ_2)‖_1 by the characterization of Wasserstein distance given in eq:w1-graph. Thus it suffices to calculate ‖λ(ℛ_1) - λ(ℛ_2)‖_1. Since these are disjoint cycles, we only need to focus on the Wasserstein distance between a cycle of size 2ℓ and 2 disjoint cycles of size ℓ. Applying lem:eig_ring, we get ‖λ(ℛ_1) - λ(ℛ_2)‖_1 = n· 2ℓ· W_1(R_ℓ^2,R_2ℓ) = 2n. Plugging this back we get the claimed Wasserstein distance W_1(G_1,G_2) = 1/(4ℓ). Next we show that the moments of the constructed graphs G_1 and G_2 are exponentially close. Let G_1 and G_2 be weighted graphs as constructed in def:mom. Let p_1, p_2 be the spectral density of G_1, G_2 respectively. It holds that m_j(p_i)∈[1/2,1] for all j≥ 0, i=1,2 and also m_j(p_1) - m_j(p_2) = 0 for j < ℓ and m_j(p_1) - m_j(p_2)≤ 2^-ℓ+1 for j ≥ℓ. For the first claim, we note that m_j(p_i) ≥ (2nℓ)/(4nℓ)· 1^j ≥ 1/2. The upper bound of 1 follows trivially given boundedness of all eigenvalues of normalized adjacency matrices. For j≥ℓ, and i ∈ [2], we also have m_j(p_i) ≤ (2nℓ+1)/(4nℓ)· 1^j + (2nℓ-1)/(4nℓ)·(1/2)^j ≤ 1/2 + 1/(4nℓ) + 1/2^(j+1). Thus, we can immediately conclude that m_j(p_i)∈[ 1/2, 1/2 + 1/2^(j+1) + 1/(4nℓ)] and obtain the claimed bounds for j≥ℓ by plugging in the choice of n≥ 2^ℓ/4. For j<ℓ, we use the fact that m_j(p_i) = 1/n tr(A(G_i)^j). Using eq:eig_g1g2 we can calculate: m_j(p_1) - m_j(p_2) = 1/(4nℓ)·[ ∑_i=1^2nℓ-1λ_i^j(R^2n_ℓ)/2^j + (2nℓ+1) - ∑_i=1^2nℓ-1λ_i^j(R^n_2ℓ)/2^j - (2nℓ+1) ] = 1/(4nℓ)·∑_i=1^2nℓ (λ_i^j(R^2n_ℓ) - λ_i^j(R^n_2ℓ))/2^j. Since R^2n_ℓ and R^n_2ℓ are disjoint cycles, the moments of the spectral density of R^2n_ℓ and R^n_2ℓ are the same as the moments of the spectral density of R^2_ℓ and R_2ℓ.
This is true because the eigenvalues of the disjoint copies of A(R^2n_ℓ) and A(R^n_2ℓ) are the same as the eigenvalues of the disjoint copies of A(R^2_ℓ) and A(R_2ℓ) with increased multiplicity, which is scaled by the size of the respective graphs. Since the first j<ℓ moments of R^2_ℓ and R_2ℓ are the same (see rmk:jlmomsame), we get from eq:mjpidiff that m_j(p_1) - m_j(p_2) = 1/(4nℓ)·∑_i=1^2nℓ (λ_i^j(R^2n_ℓ) - λ_i^j(R^n_2ℓ))/2^j = 0. We briefly remark that the proof of lem:exp_mom_graph required picking a value of n that is exponentially large in ℓ to ensure that when a random walk leaves the cycle it started from, it only comes back to the same cycle with a very low probability. Otherwise, we would not have been able to show that the higher moments of G_1 and G_2 (j ≥ℓ) are close. * The proof of the first statement follows by substituting ℓ with the largest odd integer smaller than 1/(4ϵ) in lem:w1g1g2. Next, we know that for all j, m_j(p_1)∈[1/2,1]. So, by lem:exp_mom_graph, |m_j(p_1)-m_j(p_2)| ≤ 2^-ℓ+2 m_j(p_1). The statement holds since we have ℓ≥ 1/(4ϵ)-2. § LIMITS ON RANDOM WALK METHODS In this section, we prove thm:transcript_main, which can be viewed as a strengthening of thm:mom_lb. While thm:mom_lb rules out directly improving SDE algorithms like that of <cit.> based on estimating moments, thm:transcript_main shows that no method that performs fewer than 2^O(1/ϵ) steps of non-adaptive random walks in a graph can reliably estimate the spectral density to error ϵ in Wasserstein distance, whether or not the algorithm is based on moment estimation. To prove thm:transcript_main, we construct a hard pair of graphs that are ϵ-far in Wasserstein distance, but difficult to distinguish based on random walks. This pair is identical to the hard instance constructed in the previous section, although without isolated nodes. These nodes were necessary to show that even accurate relative error moment estimates do not suffice for spectral density estimation. However, they are not needed for the random-walk lower bound, and eliminating them simplifies the analysis. Formally, the construction is as follows: G_1 is constructed by starting with a collection of 2n disjoint cycles, each having size ℓ for odd integer ℓ. The edges in the cycle have weight 1/4. After constructing the cycles, we connect every vertex in G_1 to all other vertices with weight 1/(4nℓ) (including a self-loop). G_2 is constructed by starting with a collection of n disjoint cycles, each having size 2ℓ. The edges in the cycle have weight 1/4 and every vertex is then connected to all other vertices with weight 1/(4nℓ) (including a self-loop). We choose n = 2· 2^2ℓ. For weighted graphs G_1, G_2 constructed in def:weighted_graphs, W_1(G_1,G_2) ≥ 1/(2ℓ). The normalized adjacency matrices of the two graphs are: A(G_1) = (1/2)· A(R_ℓ^2n) + (1/(4nℓ))·11^⊤ and A(G_2) = (1/2)· A(R_2ℓ^n) + (1/(4nℓ))·11^⊤. As before, since A(R_ℓ^2n) and A(R_2ℓ^n) are degree-regular graphs and both commute with 1 1^⊤, we can write the sorted vectors of eigenvalues λ(G_1),λ(G_2) of A(G_1),A(G_2) as λ_j(G_1) = (1/2)·λ_j(R_ℓ^2n) for j∈{1,⋯,2nℓ-1} and λ_2nℓ(G_1) = 1, and λ_j(G_2) = (1/2)·λ_j(R_2ℓ^n) for j∈{1,⋯,2nℓ-1} and λ_2nℓ(G_2) = 1. Since the top eigenvalues of R_2ℓ^n and R_ℓ^2n are the same, we conclude that W_1(G_1,G_2) = 1/(2nℓ)·((1/2)·‖λ(R_ℓ^2n) - λ(R_2ℓ^n)‖_1). Applying lem:eig_ring, we have that ‖λ(R_ℓ^2n) - λ(R_2ℓ^n)‖_1 = n· 2ℓ· W_1(R_ℓ^2,R_2ℓ) = 2n. Plugging in, we conclude that W_1(G_1,G_2) = 1/(2ℓ).
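The construction above is easy to check numerically for small parameters. The following sketch (our own illustration, with hypothetical helper names) builds the two normalized adjacency matrices of def:weighted_graphs and evaluates the Wasserstein-1 distance between their spectra as the mean absolute difference of the sorted eigenvalues, recovering the 1/(2ℓ) value derived above; note that this derivation does not use the specific value of n, while the lower bound proof takes n = 2·2^2ℓ.

```python
import numpy as np

def cycle_adjacency(length):
    """0/1 adjacency matrix of a single cycle with `length` vertices."""
    A = np.zeros((length, length))
    for i in range(length):
        A[i, (i + 1) % length] = A[(i + 1) % length, i] = 1.0
    return A

def hard_instance(ell, n, long_cycles):
    """Normalized adjacency matrix from def:weighted_graphs on N = 2*n*ell vertices.
    long_cycles=False gives G_1 (2n cycles of length ell); True gives G_2 (n cycles of length 2*ell)."""
    num_cycles, length = (n, 2 * ell) if long_cycles else (2 * n, ell)
    A01 = np.kron(np.eye(num_cycles), cycle_adjacency(length))   # disjoint cycles, 0/1 entries
    N = 2 * n * ell
    # Cycle edges get weight 1/4 and a complete graph (with self-loops) of weight 1/(4*n*ell)
    # is added, so every weighted degree equals 1 and the matrix is already normalized.
    return 0.25 * A01 + np.ones((N, N)) / (4 * n * ell)

def spectral_wasserstein(A1, A2):
    """W_1 between the spectral densities of two same-size symmetric matrices."""
    return np.mean(np.abs(np.sort(np.linalg.eigvalsh(A1)) - np.sort(np.linalg.eigvalsh(A2))))

ell, n = 5, 4   # tiny parameters for illustration; the proof takes n = 2 * 2**(2 * ell)
print(spectral_wasserstein(hard_instance(ell, n, False), hard_instance(ell, n, True)), 1 / (2 * ell))
```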
We next show that the transcripts of randomly started, non-adaptive random walks generated on G_1 and G_2 have similar distributions. For m non-adaptive random walks, each with length T, a random walk transcript S is a collection of m individual walks, S = S_1,…,S_m where each S_i consists of a list of T node labels v_i,1, … v_i,T (the nodes visited in the walk). Let 𝒟_G_1 and 𝒟_G_2 denote the probability distributions over random walk transcripts generated when walking in G_1 and G_2, respectively, with nodes labeled using a uniform random permutation of the integers 1, …, 2nℓ. Our main result is as follows: For m non-adaptive walks of length T, the total variation distance between 𝒟_G_1 and 𝒟_G_2 (def:dg1g2) is bounded by d_TV(𝒟_G_1,𝒟_G_2) ≤2m^2T^2/n + mT/2^ℓ. To prove lem:tv_weighted, we define a coupling between 𝒟_G_1 and 𝒟_G_2. We then show that, with high probability, the coupling outputs an identical transcript in both graphs. This establishes closeness in TV distance via the standard coupling lemma, which we state specialized to our setting below: Let 𝒟 be any distribution over pairs of random walk transcripts S^1 and S^2 such that the marginal distribution of S^1 equals 𝒟_G_1 and the marginal distribution of S^2 equals 𝒟_G_2. Then: d_TV(𝒟_G_1,𝒟_G_2) ≤ℙ_𝒟[S^1≠ S^2]. We define a coupling 𝒟 by describing a process that explicitly generates two random walk transcripts S^1 and S^2 which are distributed according to 𝒟_G_1 and 𝒟_G_2. To do so, we use a “lazy labelling” procedure that randomly labels nodes as they are visited in the random walks. To support that labeling, we define two dictionaries, L_1: V_1→ 1, …, 2nℓ and L_2: V_2→ 1, …, 2nℓ that map the vertex sets of G_1 and G_2 (denoted as V_1 and V_2) to labels. Initially, L_i(v) returns NULL for any vertex v∈ V_i. However, if we set L_i(v)← j for a label j, then for all future calls to the dictionary, L_i(v) returns j. Additionally, in our description of the coupling we will refer to the “cycle” that a node v lies in (in G_1 or G_2) and to v's “left neighbor” and “right neighbor”. Referring to def:weighted_graphs, these terms refer to the cycle that v would be in if the lightweight copy of the complete graph had not been added to the graph, and respectively to v's neighbors in that cycle. With this notation in place, we describe the coupling procedure below.
* Choose a random permutation Π of the labels 1, …, 2nℓ. Let Π(j) denote the j^th label in the permutation. Initialize j ← 1.
* For k = 1, …, m:
* Choose independent, uniformly random nodes v_k,1^1 in G_1 and v_k,1^2 in G_2. If L_1(v_k,1^1) = NULL (which means that the node has never been visited before in any of our k-1 previous random walks) set L_1(v_k,1^1) ←Π(j). Likewise, if L_2(v_k,1^2) = NULL, set L_2(v_k,1^2) ←Π(j). Increment j ← j+1.
* For i = 1, …, T:
* With probability 1/4 let v_k,i+1^1 be the right neighbor of v_k,i^1 in G_1 and let v_k,i+1^2 be the right neighbor of v_k,i^2 in G_2. With probability 1/4 let v_k,i+1^1 be the left neighbor of v_k,i^1 in G_1 and let v_k,i+1^2 be the left neighbor of v_k,i^2 in G_2. With probability 1/2, let v_k,i+1^1 and v_k,i+1^2 be uniformly random nodes in G_1 and G_2, respectively. In this last case, which we refer to as the RESET case, v_k,i+1^1 and v_k,i+1^2 can be chosen independently.
* If L_1(v_k,i+1^1) = NULL, set L_1(v_k,i+1^1) ←Π(j). Likewise, if L_2(v_k,i+1^2) = NULL, set L_2(v_k,i+1^2) ←Π(j). Increment j ← j+1.
* Return S^1 = {{L_1(v_1,1^1),…, L_1(v_1,T^1)}, …,{L_1(v_m,1^1),…, L_1(v_m,T^1)}} and S^2 = {{L_2(v_1,1^2),…, L_2(v_1,T^2)}, …,{L_2(v_m,1^2),…, L_2(v_m,T^2)}}.
We first observe that the above process is a coupling, as it returns S^1 sampled from 𝒟_G_1 and S^2 sampled from 𝒟_G_2. So, we are left to argue that, with high probability, S^1 = S^2. To do so, we use the fact that the transcripts are identical if two events hold. To define these events, note that each walk in each transcript begins in a cycle in G_1 or G_2, and then takes a random number of steps left and right in that cycle until “resetting” with probability 1/2 to a uniformly random node in the graph (which could bring the walk to a new cycle, the same cycle it is currently in, or a cycle visited previously). For transcript S^1, let R_1^1, …, R_q^1 denote the list of cycles visited between each RESET step across all m walks in that transcript. Likewise, let R_1^2, …, R_q^2 denote the list of cycles visited in S^2. S^1 and S^2 are always identical if the following events occur: Event 1: For all j ≠ k, R_j^1 ≠ R_k^1 and R_j^2 ≠ R_k^2. Event 2: For all j∈ 1, …, q, we take fewer than ℓ left/right steps in R_j^1 and R_j^2 before a RESET. To see why this is the case, note that, if Event 1 occurs, the only way that S^1 and S^2 would differ is if, while random walking in R_j^1 and R_j^2, we move to nodes v^1 and v^2 where L_1(v^1) is defined but L_2(v^2) is NULL, or vice-versa. However, the only way this can happen is if we complete an entire loop around R_j^1, and thus return to a node that was previously labeled. Since each R_j^1 has ℓ nodes, such a loop cannot be completed if we always take < ℓ steps before resetting to a new cycle. We proceed to show that Event 1 and Event 2 both occur with high probability. First, consider Event 1. We take at most mT RESET steps across all m random walks. At each step, the probability we return to a cycle we had previously visited is at most mT/2n for the walk in G_1 and at most mT/n for the walk in G_2. So, by a union bound, we do not return to any previously visited cycle after a RESET in either walk with probability: ℙ[Event 1] ≥ 1 - mT·(mT/(2n)) - mT·(mT/n) ≥ 1 - 2m^2T^2/n. Next, consider Event 2. Note that, since we take a left/right step with probability 1/2 (and RESET with probability 1/2) the chance that we take ℓ steps or more in a given cycle is equal to (1/2)^ℓ. We visit at most q = mT cycles, so by a union bound, we take less than ℓ left/right steps in each cycle with probability: ℙ[Event 2] ≥ 1 - mT/2^ℓ. Combining (<ref>) and (<ref>) with a union bound, we conclude that both events hold, and thus S^1 = S^2 with probability at least 1 - 2m^2T^2/n - mT/2^ℓ. Combined with fact:coupling_lemma, this proves the lemma. With lem:tv_weighted in place, we can prove our main lower bound result for non-adaptive random walks: * The theorem follows from lem:w1g1g2_rw and lem:tv_weighted. In particular, choose ℓ to equal the largest odd integer smaller than 1/(4ϵ) and choose n = 2· 2^2ℓ. Then consider graphs G_1 and G_2 generated as in def:weighted_graphs with random node labels. By lem:w1g1g2_rw, W_1(G_1,G_2) > 2ϵ. So, there is no distribution p that is ϵ-close in Wasserstein distance to both the spectral density of G_1 and G_2. Accordingly, any algorithm that estimates the SDE of a graph to error ϵ with probability 3/4 can be used to correctly distinguish samples from 𝒟_G_1 and 𝒟_G_2 with probability 3/4. However, with n set as above, we have that d_TV(𝒟_G_1,𝒟_G_2) ≤2m^2T^2/n + mT/2^ℓ = m^2T^2/2^2ℓ + mT/2^ℓ.
And thus we can check that d_TV(𝒟_G_1,𝒟_G_2) ≤ 1/2 whenever mT ≤ (1/16)· 2^(1/(4ϵ)). As is standard, no algorithm can distinguish between samples from two distributions with TV distance δ with probability greater than 1/2 + δ/2, which establishes the result: any method that correctly distinguishes 𝒟_G_1 and 𝒟_G_2 with probability >3/4 must use mT > (1/16)· 2^(1/(4ϵ)) random walk steps. § ACKNOWLEDGEMENTS We would like to thank Aditya Krishnan for early discussions on the questions addressed in this paper. Aaron Sidford was supported by a Microsoft Research Faculty Fellowship, NSF CAREER Award CCF-1844855, NSF Grant CCF-1955039, a PayPal research award, and a Sloan Research Fellowship. This work was also supported by NSF Award CCF-2045590. § LOWER BOUND FOR THE ADAPTIVE RANDOM WALK MODEL In this section, we consider lower bounds against a possibly richer class of spectral density estimation algorithms that can access graphs via adaptive random walks. Specifically, the algorithm is allowed to start random walks (of any length) at any node of its choosing and can store the entire transcript of these walks. In the adaptive model, the algorithm also has the ability to uniformly sample nodes from the graph, as in the non-adaptive random walk model considered for thm:transcript_main. Interestingly, an adaptive algorithm can solve the hard instance from thm:transcript_main using roughly O(log(1/ϵ)/ϵ) random walk steps. Specifically, for any node, the algorithm can identify its adjacent cycle nodes with high probability by taking a logarithmic number of 1-step random walks and identifying the two nodes that are visited most frequently. This allows it to walk one way around the cycle, check its length, and thus distinguish between G_1 and G_2. Proving a lower bound in the adaptive random walk setting appears to be much harder than the non-adaptive setting, and we do not have any proposed constructions that we conjecture could establish that 2^O(1/ϵ) random walk steps are necessary. However, in this section we give a simple argument for a lower bound of Ω(1/ϵ^2) steps. The lower bound is via a reduction to a natural sampling problem, introduced below. For a parameter α∈ (1/2,1) and integer n, suppose we have a jar that contains either α· n red marbles and (1-α)· n blue marbles (Case 1) or contains (1-α)· n red marbles and α· n blue marbles (Case 2). Our goal is to determine if we are in Case 1 or 2 given a sample of s marbles drawn without replacement from the jar. Let ϵ∈ (0,1/2), let α = (1+ϵ)/2, and let n = 2/ϵ^4. There is no algorithm that solves prob:marbles with probability > 3/4 unless s > 1/(4ϵ^2). Suppose we draw s marbles from the jar and encode the result in a length s vector (e.g., with a 0 at position i if the i^th marble drawn is red, and a 1 if it is blue). Let X_1^(s) denote the distribution over vectors observed in Case 1, and let X_2^(s) denote the distribution for Case 2. We will show that d_TV(X_1^(s), X_2^(s)) is small. To do so, we introduce two auxiliary distributions: let X̂_1^(s) denote the distribution over vectors observed if we are in Case 1 and draw marbles randomly with replacement and let X̂_2^(s) denote the distribution if we are in Case 2 and draw marbles with replacement. We first show that d_TV(X_i^(s), X̂_i^(s)) is small for i ∈{1,2} when n is large. To do so, let ℰ be the event that in s independent draws with replacement, we never pick a previously picked marble. Let [X̂_i^(s)]_ℰ denote the distribution X̂_i^(s) conditioned on ℰ, and note that [X̂_i^(s)]_ℰ = X_i^(s).
The probability that happens is equal to 1· (1-1/n)·(1-2/n) · (1-s/n) ≥ 1 - s^2/n. Therefore, we conclude that: d_TV(X̂_i^(m),X_i^(m)) ≤ s^2/n. Next, we show that d_TV(X̂_1^(s), X̂_2^(s)) is small. Doing so is equivalent to bounding the total variation distance between s independent draws from a Bernoulli distribution with mean 1-α and s independent draws from Bernoulli distribution with mean α. Let D_KL(p,q) denote the Kullback–Leibler divergence between distributions p and q. Applying Pinsker's inequality, we have: d_TV(X̂_1^(s),X̂_2^(s) ) ≤√(1/2)√(D_KL(X̂_1^(s),X̂_2^(s)))= √(s/2)√(D_KL((1-α),(α))) = √(s/2)√(αlog(α/(1-α))+(1-α)log((1-α)/α)) ≤√(s/2)·ϵ. The last inequality holds for any α equal to (1+ϵ)/2 whenever ϵ≤ 1/2. Applying triangle inequality to combine (<ref>) and (<ref>), we have that: d_TV(X_1^(s),X_2^(s)) ≤d_TV(X̂_1^(s),X_1^(s)) + d_TV(X̂_2^(s),X_2^(s)) + d_TV(X̂_1^(s),X̂_2^(s)) ≤2s^2/n + √(s/2)·ϵ. For any s ≤ 1/(4ϵ^2) and n = 2/ϵ^4 we conclude that d_TV(X_1^(m),X_2^(m)) < 1/2. Accordingly, no algorithm can distinguish between X_1^(s) and X_2^(s) with probability ≥ 3/4 unless s > 1/(4ϵ^2). With lem:marbles in place, we are now ready to prove our lower bound for spectral density estimation. To do so, we will show that any adaptive random walk algorithm that can estimate the spectral density of a graph to accuracy ϵ using s total random walk steps can solve the prob:marbles using ≤ s samples. This reduction requires introducing a second pair of “hard graphs” that are close in Wasserstein distance. In comparison to the hard instance in thm:transcript_main, these graphs are also based on collection of cycles. The main difference is that we consider two graphs that each contain a mixture of cycles of length 2ℓ and ℓ, but in different proportions. For odd integer ℓ and parameter α∈(0.5,1), let G_1 be a collection of α n disjoint cycles of length 2ℓ and 2(1-α)n cycles of size ℓ. Similarly, let G_2 be a collection of (1-α)n cycles of length 2ℓ and 2α n cycles of size ℓ. Both graphs have 2nℓ vertices in total. We use the following expression for the Wasserstein distance between the spectra of the two graphs. Let G_1 and G_2 be unweighted graphs as in def:adaptive_g1g2. W_1(G_1,G_2) = (2α-1)/ℓ. We can compute the exact eigenvalues of the two graphs by combining lem:eig_ring with the fact that eigenvalues just increase in multiplicity with repeated components. As in that lemma, recall we use R_ℓ to denote a cycle of length ℓ and R_2ℓ to denote a cycle of length 2ℓ. The Wasserstein distance between R_ℓ^2 and R_2ℓ is W_1(R_ℓ^2,R_2ℓ) = 1/ℓ. Note that G_1 and G_2 both have (1-α)n cycles of length 2ℓ and 2(1-α) n cycles of length ℓ, while G_1 has (2α-1)n extra R_2ℓ cycles and G_2 has (2α-1)n extra copies of R_ℓ^2. Let p_1(x) and p_2(x) be the spectral density of G_1 and G_2 respectively, and let p̃_1(x) and p̃_2(x) be the spectral density of R_2ℓ^(2α-1)n and R_ℓ^2(2α-1)n, respectively. We have that 2nℓ· (p_1(x)-p_2(x)) = (2(2α-1)nℓ·(p̃_1(x) - p̃_2(x)) for all x∈[-1,1]. Thus, due to the dual characterization of Wasserstein distance in (<ref>), W_1(G_1,G_2) = (2α -1)· W_1(R_2ℓ^(2α-1)n,R_ℓ^2(2α-1)n) = (2α -1)· W_1(R_2ℓ,R_ℓ^2)= (2α-1)/ℓ. We now have all the ingredients in place to prove the main result of this section: For any ϵ < 1/6, no algorithm that takes s adaptive random walks steps in a given graph G can approximate G's spectral density to ϵ accuracy in the Wasserstein-1 distance with probability > 3/4, unless s ≥ 1/(36ϵ^2). 
Suppose we had such an algorithm (call it 𝒜) that uses s <1/(4ϵ^2) random walk steps to output an ϵ/3-accurate spectral density with probability greater than 3/4. We will show that the algorithm could be used to solve prob:marbles using <1/(4ϵ^2) samples from the jar with probability greater than 3/4, which is impossible by lem:marbles. To prove this reduction we associated an instance of prob:marbles with a hidden graph G that is either isomorphic to G_1 or G_2 as defined in def:adaptive_g1g2. To make the association, every marble will correspond to 2ℓ vertices in the graph with some fixed set of known labels. However, the connections between those nodes is hidden. In particular, if the marble is red, the 2ℓ vertices are arranged in a single cycle of length 2ℓ. Otherwise, they are arranged in two cycles of length ℓ. The ordering of nodes in both cases is known in advance, but we do not know which of the two cases we are in. Also note that there are no other connections between vertices. Observe that if we are in Case 1 for prob:marbles, G is isomorphic to G_1 and if we are in Case 2, G is isomorphic to G_2. So in particular, G's spectral density is either equal to the spectral density of G_1 or G_2. Our main claim is that we can run algorithm 𝒜 on the hidden graph G while only accessing s marbles from the jar. To so do, every time the algorithm requests to visit a specific node, we draw the marble from the jar associated with that node's label. In doing so, we learned all edges in the ring containing that node (as well as other edges), so we can perform any future random walk steps initiated from that node. Since 𝒜 takes s steps, we at most need to draw s marbles over the course of running the algorithm. At the same time, note that when we choose ℓ = 1 (considering self loops) and α = (1+ϵ)/2 as in lem:marbles, lem:dist_mix implies that the Wasserstein distance between G_1 and G_2 is equal to ϵ. So, if 𝒜 returns an ϵ/3-accurate spectral density with probability 3/4, we can determine if we are in Case 1 or Case 2 with probability 3/4, violating lem:marbles. We conclude that no such algorithm can exist. The final statement of the theorem follows by adjusting constants on ϵ. § WASSERSTEIN DISTANCE BOUNDS VIA CHEBYSHEV POLYNOMIALS In this section, we give an alternative proof of a lower-bound by <cit.>, which shows that there exist distributions whose first ℓ-1 moments match exactly, but the Wasserstein distance between the distributions is greater than 1/(2(ℓ+1)). Our analysis tightens their result by a factor of ∼ 4, showing two such distributions with Wasserstein distance 2/ℓ. Moreover, we prove that the Wasserstein distance is Ω(ℓ^-1) for any distributions p, q whose first ℓ-1 moments are the same and whose ℓ-th moments differ by Ω(2^-ℓ). For any odd ℓ, there exists a pair of distributions p, q, each consisting of (ℓ+1)/2 point masses, supported within the unit interval [-1,1] such that p and q have identical first ℓ-1 moments, and the Wasserstein distance W_1(p,q) ≥ 2/ℓ. Recall that we use R_ℓ^2 to denote 2 disjoint cycles of length ℓ, and use R_2ℓ to denote a cycle of length 2ℓ, where ℓ is an odd number. We know the spectrum of R_ℓ^2 and R_2ℓ from lem:eig_ring. Let p' and q' denote the spectral density of A(R_ℓ^2) and A(R_2ℓ). We first note that the first ℓ-1 moments of the spectral density of p' and q' are the same because a random walk of length ℓ-1 cannot distinguish R_ℓ^2 from R_2ℓ (see rmk:jlmomsame). 
Also, recall we use (R_ℓ^2) to denote the sorted eigenvalue list of A(R_ℓ^2) and (R_2ℓ) to denote the sorted eigenvalue list of A(R_2ℓ). We make the following observations about the spectrum of A(R_ℓ^2) and A(R_2ℓ) based on lem:eig_ring. * A(R_ℓ^2) has (ℓ+1)/2 unique eigenvalues, and A(R_2ℓ) has ℓ+1 unique eigenvalues. * All eigenvalues of A(R_ℓ^2) overlap with eigenvalues of A(R_2ℓ). In particular, all the eigenvalues of A(R_ℓ^2) occur two times more in frequency than the corresponding eigenvalues in A(R_2ℓ). Formally, ∀λ∈(R_ℓ^2), j  : λ_j ∈(R_ℓ^2), λ_j = λ, j ∈ [2ℓ] = 2 · j  : λ_j ∈(R_2ℓ), λ_j = λ, j ∈ [2ℓ] * All the eigenvalues of A(R_2ℓ) lies in [-1,1]. Let ^(2) denote the sorted list of eigenvalues where we remove all the eigenvalues from (R_2ℓ) that occurs in (R_ℓ^2). Let ^(1) be the set of removed eigenvalues. The following observations follow from eq:eig_same. The size of ^(2), and ^(1) is ℓ. Moreover ^(1) has the same eigenvalues as (R_ℓ^2) where the frequency of each unique eigenvalue is (R_ℓ^2) is reduced by a factor of 2. Consequently, we define p(x) = 1/ℓ∑_j ∈ [ℓ]δx - _j^(1) = p'(x), and q(x) = 1/ℓ∑_j ∈ [ℓ]δx - _j^(2) = 2q'(x) - p'(x). This ensures that p, q are valid distributions and have a support size of (ℓ+1)/2. Since p', and q' have the same first ℓ-1 moments, we have p and q also have the same first ℓ-1 moments. Moreover, W_1(p,q) = W_1(2q' - p',p') = 2 W_1(q',p') = 2/ℓ, where the penultimate equality follows from the dual characterization of Wasserstein distance in (<ref>) and the last equality follows from lem:eig_ring. We complement prop:kvprop with the following lem:leg_w1, which shows that for two distributions p and q such that all their first ℓ-1 moments are the same and the ℓ-th moment differ only by Ω(2^-ℓ), even then the Wasserstein distance between p, q is large. The proof follows just by using the fact that there are 1-Lipschitz polynomials with large leading coefficient. We note the following standard facts about the Chebyshev polynomials which can, for example, be found in <cit.>. The Chebyshev polynomials of the first kind of degree i, (i∈), denoted by T_i(x), satisfy the following properties: * ∀ i ∈, ∀ x ∈ [-1,1], T_i(x)≤ 1. * The leading coefficient of T_i is 2^i-1. Consider two distributions p and q supported on [-1,1] such that the difference of their first ℓ-1 moments are 0 and the difference of their ℓ-th moment is c · 2^-ℓ. Then, for such a distribution, their Wasserstein distance is bounded by W_1(p,q) ≥c/4 ℓ We use the dual characterization of the Wasserstein distance in def:w1_l1-dual and consequently, it suffices to exhibit a 1-Lipschitz function g which has a high inner-product with p-q. Let T_ℓ-1 be a degree ℓ-1 Chebyshev polynomial. From fact:cheb we know that f_ℓ(x) = ∫ T_ℓ-1(x) x is a degree ℓ, 1-Lipschitz polynomial in [-1,1], with leading coefficient 2^ℓ-2/ℓ. Define g_ℓ(x) as follows: g_ℓ(x) f_ℓ(-1), for x ∈ (-∞, -1) f_ℓ(x), for x ∈ [-1,1] f_ℓ(1), for x ∈ (1, ∞) From properties of f_ℓ(x) and by construction, we know that g_ℓ(x) is a 1-Lipschitz function. Therefore, W_1(p,q) ≥∫_ g_ℓ(x) (p(x) - q(x)) x = ∫_-1^1 f_ℓ(x) (p(x) - q(x)) x = ∫_-1^12^ℓ-2/ℓ x^ℓ (p(x) - q(x)) x = 1/4 ℓ  2^ℓ· c 2^-ℓ = c·1/4 ℓ where the first equality holds because p(x)-q(x) = 0 outside [-1,1] and the second equality follows from the fact that the difference of the first 1,…,ℓ-1 moments are 0. 
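The test function used in this proof is easy to construct explicitly. The sketch below (our own illustration) uses NumPy's Chebyshev class to form f_ℓ as the antiderivative of T_ℓ-1 and evaluates the duality bound for two point-mass distributions on [-1,1]; since the integration constant of f_ℓ cancels between p and q, its choice does not matter. When the first ℓ-1 moments of p and q agree, only the leading coefficient of f_ℓ contributes, which is exactly how the c/(4ℓ) bound in lem:leg_w1 arises.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def chebyshev_dual_bound(ell, support_p, weights_p, support_q, weights_q):
    """Evaluate the 1-Lipschitz test function f_ell (antiderivative of T_{ell-1})
    against two point-mass distributions on [-1, 1]. The absolute value of the
    result is a lower bound on W_1(p, q) by Kantorovich-Rubinstein duality."""
    f = Chebyshev.basis(ell - 1).integ()   # degree-ell polynomial with |f'| <= 1 on [-1, 1]
    return float(np.dot(weights_p, f(np.asarray(support_p)))
                 - np.dot(weights_q, f(np.asarray(support_q))))
```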
§ ANOTHER SPECTRAL METRIC FOR GRAPH COMPARISON
Throughout this section we consider two graphs G_1, G_2 with the same vertex size n and the same vertex labeling V = [n], and their un-normalized adjacency matrices Ã_1 and Ã_2 with a common degree matrix D. Here we consider learning the spectrum of their difference matrix, i.e., A(G_1)-A(G_2) = D^-1/2Ã_1D^-1/2-D^-1/2Ã_2D^-1/2, or equivalently (up to a similarity transformation) D^-1(Ã_1-Ã_2). We provide a simple proof that exp(O(1/ϵ)) samples suffice to estimate this distribution up to Wasserstein distance ϵ, using techniques similar to <cit.>. We first restate the main theorem in <cit.> for completeness.
Consider two distributions with respective density functions p, q supported on [a,b] whose first k moments are α = (α_1,⋯, α_k) and β = (β_1,⋯, β_k), respectively. Then the Wasserstein distance W_1(p,q) between p, q is bounded by W_1(p,q) ≤ C((b-a)/k + 3^k(b-a)‖α-β‖_2) for some absolute constant C.
We define a variant of the non-adaptive random walk access model, represented via an oracle 𝒪(G_1,G_2,j,{x_i}_i∈[j]), specifically for this problem. The oracle outputs the random trajectory of a length-j random walk starting from a uniformly random vertex, where at step i∈[j] the walk follows the probabilistic transition of D^-1Ã_1 if x_i=1 and of D^-1Ã_2 if x_i = 0. We consider alg:spectrum-comp for estimating the spectral density of the matrix D^-1(Ã_1-Ã_2). alg:spectrum-comp computes estimates of the moments of the difference matrix D^-1(Ã_1-Ã_2). Together with the procedure of computing a distribution from its first k moments using linear programming, as stated in <cit.>, we have the following guarantee.
Given any two graphs G_1, G_2 on the same set of vertices with a common degree matrix D, alg:spectrum-comp with k = 4C/ϵ and θ = ϵ/3^(2k+2) outputs a distribution p that is ϵ-close in Wasserstein-1 distance to the spectral density function of A(G_1)-A(G_2) with probability 0.9, using a total of 2^O(1/ϵ) calls to 𝒪(G_1,G_2,j,·), j∈[O(1/ϵ)].
Note that a similarity transformation does not affect eigenvalues, thus it suffices to estimate the spectral density function of the matrix D^-1(Ã_1-Ã_2), whose j^th moment is 1/n tr((D^-1Ã_1-D^-1Ã_2)^j). For any j∈ℤ_+, note that 1/n tr((D^-1Ã_1-D^-1Ã_2)^j) = ∑_x_1,x_2,⋯, x_j∈{0,1} (-1)^(j-∑_i x_i) · 1/n tr(∏_i=1,⋯, j(x_i· D^-1Ã_1+(1-x_i)· D^-1Ã_2)), where the sign factor comes from expanding the difference. Given any x = (x_1,⋯, x_j), we run an alternating random walk, as provided by the oracle 𝒪, to generate unbiased samples of the term 1/n tr(∏_i=1,⋯, j(x_i· D^-1Ã_1+(1-x_i)· D^-1Ã_2)) (as in Line <ref>). By concentration, we can estimate each term 1/n tr(∏_i=1,⋯, j(x_i· D^-1Ã_1+(1-x_i)· D^-1Ã_2)) using p̂_j,x (as in Line <ref>) up to θ/2^j additive accuracy with probability at least 1-δ/(k2^j), using a total of (1/2)θ^-2 j 4^j log(2k/δ) calls to 𝒪(G_1,G_2,j,{x_i}_i∈[j]). Consequently, using a union bound, we have that with probability 1-δ, p̂_j estimates the j^th moment up to θ additive accuracy for all j∈[k], using a total of O(θ^-2 j 2^(3j) log(2k/δ)) calls to 𝒪(G_1,G_2,j,·) for each j. Picking k = 4C/ϵ and θ = ϵ/3^(2k+2), we can apply thm:cite-main to conclude that the constructed distribution p is an ϵ-approximation in Wasserstein distance to the spectral density function of A(G_1)-A(G_2). Also, the algorithm uses a total of ∑_j∈[k] O(θ^-2 j 2^(3j) log(2k/δ)) = 2^O(1/ϵ) calls to 𝒪(G_1,G_2,j,·), j∈[O(1/ϵ)]. In the above equality we also used that δ = 0.1. An interesting open problem is whether similar algorithms exist for comparing two graphs on the same vertex set without a common degree matrix D.
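For concreteness, the moment estimator at the heart of alg:spectrum-comp can be sketched as follows. The code is a dense-matrix illustration in Python/NumPy rather than a sublinear implementation (function and variable names are ours); the alternating random walks play the role of the oracle 𝒪, and the sign factor comes from expanding (D^-1Ã_1 - D^-1Ã_2)^j into its 2^j products:

import numpy as np
from itertools import product

def difference_moments(P1, P2, k, walks_per_term=5000, seed=0):
    # Monte-Carlo estimates of (1/n) * tr((P1 - P2)^j) for j = 1..k, where
    # P1 = D^{-1} A_1 and P2 = D^{-1} A_2 are dense row-stochastic random-walk
    # matrices of two graphs sharing the degree matrix D.
    rng = np.random.default_rng(seed)
    n = P1.shape[0]
    moments = []
    for j in range(1, k + 1):
        est = 0.0
        for choice in product([0, 1], repeat=j):   # pick P1 (1) or P2 (0) at every step
            sign = (-1) ** (j - sum(choice))
            returns = 0
            for _ in range(walks_per_term):
                start = v = rng.integers(n)
                for use_p1 in choice:
                    row = P1[v] if use_p1 else P2[v]
                    v = rng.choice(n, p=row)
                returns += int(v == start)          # unbiased sample of the normalized trace
            est += sign * returns / walks_per_term
        moments.append(est)
    return moments

The estimated moments can then be handed to the moment-to-distribution linear program referenced above to produce the final spectral density estimate.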
http://arxiv.org/abs/2307.02015v1
20230705040623
Joint Recovery of T1, T2* and Proton Density Maps Using a Bayesian Approach with Parameter Estimation and Complementary Undersampling Patterns
[ "Shuai Huang", "James J. Lah", "Jason W. Allen", "Deqiang Qiu" ]
eess.IV
[ "eess.IV" ]
1]Shuai Huang 2]James J. Lah 1,2]Jason W. Allen 1]Deqiang QiuThis work is supported by National Institutes of Health under Grants R21AG064405, R01AG072603, R01AG070937 and P30AG066511. Corresponding author: Deqiang Qiu ([email protected]). [1]Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, 30322, USA [2]Department of Neurology, Emory University, Atlanta, GA, 30322, USA Joint Recovery of T1, T2* and Proton Density Maps Using a Bayesian Approach with Parameter Estimation and Complementary Undersampling Patterns [ ============================================================================================================================================== Word Count for the body of the text: 4413. Purpose: To improve the quality of quantitative MR images recovered from undersampled measurements, we incorporate the signal model of the variable-flip-angle (VFA) multi-echo 3D gradient-echo (GRE) method into the reconstruction of T_1, T_2^* and proton density (PD) maps. Additionally, we investigate the use of complementary undersampling patterns to determine optimal undersampling schemes for quantitative MRI. Theory: We propose a probabilistic Bayesian formulation of the recovery problem. Our proposed approach, approximate message passing with built-in parameter estimation (AMP-PE), enables the joint recovery of distribution parameters, VFA multi-echo images, and T_1, T_2^*, and PD maps without the need for hyperparameter tuning. Methods: We conducted both retrospective and prospective undersampling to obtain Fourier measurements using variable-density and Poisson-disk patterns. We investigated a variety of undersampling schemes, adopting complementary patterns across different flip angles and/or echo times. Results: AMP-PE adopts a joint recovery strategy, it outperforms the state-of-the-art l1-norm minimization approach that follows a decoupled recovery strategy. For T_1 mapping, employing fixed sampling patterns across different echo times produced the best performance. Whereas for T_2^* and proton density mappings, using complementary sampling patterns across different flip angles yielded the best performance. Conclusion: AMP-PE achieves better performance by combining information from both the MR signal model and the sparse prior on VFA multi-echo images. It is equipped with automatic and adaptive parameter estimation, and works naturally with the clinical prospective undersampling scheme. Keywords: Approximate message passing, Compressive sensing, Complementary undersampling pattern, Quantitative MRI, Parameter estimation § INTRODUCTION Quantitative MRI (qMRI) techniques are used to measure important tissue parameters, including the T_1, T_2, and T_2^* relaxation times and proton density. These quantitative maps have proven valuable in detecting subtle changes in tissue properties, and have gained significant traction as biomarkers for investigating age-related neurodegenerative diseases <cit.>. Nevertheless, acquiring a fully-sampled dataset in the k-space for high-resolution 3D volumetric scans can be time-consuming, leading to patient discomfort and the potential introduction of motion artifacts in reconstructed images. To mitigate this, undersampling techniques are commonly employed to reduce scan time, albeit at the cost of decreased image quality. Consequently, it becomes crucial to incorporate additional prior information into the image reconstruction process to enhance the overall image quality. 
Natural images are widely acknowledged to have sparse representations. Specifically, most of the image wavelet coefficients are close to zero, with the energy concentrated in only a few significant entries. Compressive sensing (CS) leverages this sparsity property and encourages sparse solutions in a suitable basis, such as the wavelet basis <cit.>. To enforce the sparse prior on image wavelet coefficients, various methods, including regularization and Bayesian approaches, have been proposed <cit.>. Variable-flip-angle (VFA) 3D gradient-echo (GRE) has emerged as a popular technique for quantitative MR imaging <cit.>. In conventional qMRI methods, undersampled k-space data is first used to recover images acquired at multiple flip angles and echo times through the application of CS. Subsequently, the recovered VFA multi-echo images are fitted to the MR signal model to obtain tissue parameters. However, this decoupled process treats the recoveries of VFA multi-echo images and quantitative maps as independent tasks, which may limit the overall image quality. To address this limitation and further enhance the image quality, we propose a probabilistic Bayesian formulation that incorporates both the MR signal model and the sparse prior on VFA multi-echo images. Our approach enables joint recovery of T_1, T_2^*, and proton density maps by leveraging the proposed approximate message passing (AMP) framework for qMRI. AMP has gained wide recognition and utilization in sparse signal recovery due to its efficiency and state-of-the-art performance <cit.>. However, it was originally developed for linear systems <cit.>. In this study, we propose an extension to the AMP framework specifically tailored for the nonlinear recovery of tissue parameters. In contrast to regularization approaches that necessitate parameter tuning, AMP enables the joint recovery of the signal and parameters in an automatic and adaptive manner. This characteristic renders it an ideal choice for clinical settings involving different scanners and acquisition protocols. We have previously applied this framework to the recovery of T_2^* and phase images <cit.>. In this paper, we further extend our approach to jointly recover T_1, T_2^*, and proton density maps while additionally comparing various undersampling strategies. Popular options for undersampling patterns include the variable-density pattern <cit.>, Poisson-disk pattern <cit.>, among others. To achieve maximum coverage of the k-space, complementary patterns can be adopted at various flip angles and echo times <cit.>. While previous studies have commonly employed complementary patterns at all flip angles and echo times to enhance multi-contrast MRI, our findings demonstrate that this approach does not hold for quantitative MRI. Specifically, for T_1 mapping, it is advantageous to use fixed sampling patterns across different echo times. Conversely, for T_2^* and proton density mappings, employing complementary sampling patterns across different flip angles yields better results. § THEORY Using the gradient echo (GRE) sequence, we acquire undersampled Fourier measurements _ij∈ℂ^M of VFA multi-echo images _ij∈ℂ^N at the i-th flip angle θ_i and j-th echo time t_j: _ij=_ij_ij + _ij, where i∈{1,⋯,I}, j∈{1,⋯,J}, _ij is the measurement matrix, and _ij is the measurement noise. The collection of measurements across all flip angles and echo times is denoted by ∈ℂ^MIJ. The wavelet coefficients _ij of the image _ij are mostly close to zero, i.e. 
approximately sparse, in the wavelet domain: _ij = _ij , where is the invertible wavelet transform matrix. The sparse prior on _ij is widely used to enhance the quality of reconstructed images <cit.>. Assuming a logitudinal steady-state can be reached with perfect spoiling, the magnitude of the complex MR signal z_ij from a spoiled GRE sequence can be expressed as <cit.> |z_ij| = f_ij(z_0,t_1,t_2^*) = z_0·sinθ_i·1-exp(-TR/t_1)/1-cosθ_i·exp(-TR/t_1)·exp(-t_j/t_2^*) , where z_0 is the proton density, t_1 is the T_1 relaxation time, t_2^* is the T_2^* relaxation time, TR is the repetition time. Utilizing the sparse prior on image wavelet coefficients _ij, we first calculate the posterior distribution p_s(_ij|) of the VFA multi-echo image _ij from a probabilistic perspective. Subsequently, we consider this distribution, p_s(_ij|), as the prior for the VFA multi-echo image _ij and integrate it with the signal model prior for a joint reconstruction of the T_1, T_2^*, and proton density maps. §.§ VFA Multi-echo Image Prior We first model the distribution of wavelet coefficients _ij using the Laplace distribution. These wavelet coefficients are assumed to be independent and identically distributed (i.i.d.): p(v | λ) = 1/2λ·exp(-λ|v|) , where λ>0 is the unknown distribution parameter. We then model the noise distribution using the complex additive white-Gaussian distribution: p(w | τ_w) = 1/πτ_wexp(-|w|^2/τ_w) , where τ_w is the noise variance. Within the AMP framework, the distribution parameters λ_ij,τ_w are treated as unknown variables <cit.>. The factor graph for the forward model (<ref>) is illustrated in Fig. <ref>, where variable nodes are denoted by “◯” and contain random variables. The factor nodes, represented by “▪”, encode the probability distributions of these random variables. In particular, the factor nodes Ω_ijn and Φ_ijm correspond to the signal and noise priors, respectively. Ω_ijn(v_ijn,λ_ij) = p(v_ijn | λ_ij) Φ_ijm(y_ijm,_ij,τ_w) = p(y_ijm-_ijm_ij | τ_w) , where _ijm is the m-th row of the measurement matrix _ij=_ij^-1. Messages about the variable distributions are passed and discussed among the factor nodes until a consensus is reached. As an example, we use the following notations to denote the messages passed between the n-th variable node v_ijn and the m-th factor node Φ_ijm (at the i-th flip angle and j-th echo): * Δ_v_ijn→Φ_ijm denotes the message from v_ijn to Φ_ijm, * Δ_Φ_ijm→ v_ijm denotes the message from Φ_ijm to v_ijm, where n∈{1,⋯,N} and m∈{1,⋯,M}. Both Δ_v_ijn→Φ_ijm and Δ_Φ_ijm→ v_ijn are functions of the variable v_ijn, and they are expressed in the “log” domain in this paper. The derivation of the AMP algorithm falls beyond the scope of this paper, and algorithmic details can be found in <cit.>. For readers' convenience, a detailed introduction to AMP is provided in Section S-I of the Supporting Information. Drawing upon the graphical model theory <cit.>, the posterior distribution of a variable is proportional to the exponential function of the sum of messages passed to that variable: p(λ_ij|) ∝exp(∑_nΔ_Ψ_ijn→λ_ij) p(τ_w|) ∝exp(∑_ijmΔ_Φ_ijm→τ_w) p(v_ijn|) ∝exp(Δ_Ψ_ijn→ v_ijn+∑_mΔ_Φ_ijm→ v_ijn) . We can then estimate the distribution parameters {λ_ij,τ_w} using their maximum-a-posteriori (MAP) estimations λ_ij = max_λ_ij p(λ_ij|) τ_w = max_τ_w p(τ_w|) . To achieve accurate parameter estimation, it is essential to compute the distributions p(λ_ij|) and p(τ_w|) exactly. 
However, in the AMP framework, the distribution p(v_ijn|) can be “approximated” by a Gaussian distribution to simplify the calculations without sacrificing accuracy <cit.>: p(v_ijn|) ≈𝒞𝒩(v_ijn | μ_(v_ijn),κ_(v_ijn).) , where 𝒞𝒩(·) is the complex Gaussian density function, μ_(v_ijn) and κ_(v_ijn) are the corresponding mean and variance of the wavelet coefficient v_ijn. Since the wavelet transform = is invertible, we can compute the posterior distribution of the VFA multi-echo image _ij from that of the wavelet coefficients _ij in (<ref>) straightforwardly: p_s(z_ijn|)≈𝒞𝒩(z_ijn | μ_s(z_ijn),κ_s(z_ijn).) , where μ_s(z_ijn) and κ_s(z_ijn) are the corresponding mean and variance of the n-th image voxel z_ijn. The distribution p_s(z_ijn|) serves as the VFA multi-echo image prior and is combined with the signal model prior in our proposed nonlinear AMP framework. §.§ Proposed Nonlinear AMP framework The factor graph of the proposed nonlinear AMP framework for reconstructing the tissue parameters is shown in Fig. <ref>. We shall introduce a new variable x_ijn to represent the MR signal magnitude. It is connected to the complex MR signal z_ijn through the factor node Γ_ijn: Γ_ijn(x_ijn,z_ijn) = δ(x_ijn-|z_ijn|) , where δ(·) is the Dirac impulse. The VFA multi-echo image prior and the signal model prior are encoded in the factor nodes Ξ_ijn and Ψ_ijn respectively: Ξ_ijn(z_ijn) = p_s(z_ijn|) Ψ_ijn(x_ijn,z_0(n),t_1(n),t_2^*(n)) = δ(x_ijn-f_ij(z_0(n),t_1(n),t_2^*(n))) , where f_ij(·) is the signal model in (<ref>). To simplify the discussion, we narrow our focus to the message passing steps between {z_ijn} and the tissue parameters 𝒯_n={z_0(n),t_1(n),t_2^*(n)} on the factor graph depicted in Fig. <ref>. In this context, we integrate the VFA multi-echo image prior with the signal model prior. The message passing proceeds sequentially through the variable and factor nodes connecting {z_ijn} and 𝒯_n. We then have: * Message passing from {z_ijn} to 𝒯_n. Since the distributions of {_ij, _ij} are approximated by Gaussian distributions in AMP <cit.>, the message Δ_z_ijn→Γ_ijn has the following expression Δ_z_ijn→Γ_ijn = ∑_mΔ_Φ_ijm→ z_ijn+logΞ_ijn(z_ijn) = -1/πτ_1(z_ijn)|z_ijn-μ_1(z_ijn)|^2 + C , where μ_1(z_ijn), τ_1(z_ijn) are the corresponding mean and variance, C is a normalizing constant. The message from Γ_ijn to x_ijn is Δ_Γ_ijn→ x_ijn = log∫Γ_ijn(x_ijn,z_ijn)·exp(Δ_z_ijn→Γ_ijn) dz_ijn ≈ -1/2πτ_1(z_ijn)(x_ijn-|μ_1(z_ijn)|)^2 + C . The message from x_ijn to Ψ_ijn is Δ_x_ijn→Ψ_ijn = Δ_Γ_ijn→ x_ijn . The message from Ψ_ijn to the tissue parameters 𝒯_n={z_0(n),t_1(n),t_2^*(n)} is Δ_Ψ_ijn→𝒯_n =log∫Ψ_ijn(x_ijn,z_0(n),t_1(n),t_2^*(n))·exp(Δ_x_ijn→Ψ_ijn) dz_ijn =-1/2πτ_1(z_ijn)(f_ij(z_0(n),t_1(n),t_2^*(n))-|μ_1(z_ijn)|)^2 . Combining the messages from all the factor nodes {Ψ_ijn} connected to 𝒯_n, we can calculate the posterior distribution of the tissue parameters 𝒯_n as follows: p(𝒯_n|) ∝exp(∑_ijΔ_Ψ_ijn→𝒯_n) To enhance stability, AMP typically enforces the variances {τ_1(z_ijn)} to be the same across all the entries in _ij. The MAP estimations of the tissue parameters are then 𝒯_n = max_𝒯_n p(𝒯_n|) =min_𝒯_n ∑_ij(f_ij(z_0(n),t_1(n),t_2^*(n))-|μ_1(z_ijn)|)^2 . The above (<ref>) is a nonlinear least-squares fitting problem, it can be decomposed into three one-dimensional problems with respect to z_0(n), t_1(n), t_2^*(n). The optimization with respect to z_0(n) is convex, and can be solved easily. 
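To make the voxel-wise fit concrete, the following sketch (Python/NumPy; the function names, grids, and units are our own illustrative choices, not part of any released implementation) evaluates the spoiled-GRE signal model and carries out the decomposed fit: z_0 is obtained in closed form since the model is linear in z_0, while t_1 and t_2^* are searched over a discrete grid.

import numpy as np

def vfa_gre_signal(z0, t1, t2s, flip_angles_deg, echo_times, tr):
    # Spoiled-GRE magnitude signal model: one value per (flip angle, echo time).
    theta = np.deg2rad(np.asarray(flip_angles_deg))[:, None]
    te = np.asarray(echo_times)[None, :]
    e1 = np.exp(-tr / t1)
    return z0 * np.sin(theta) * (1 - e1) / (1 - np.cos(theta) * e1) * np.exp(-te / t2s)

def fit_voxel(mag, flip_angles_deg, echo_times, tr, t1_grid, t2s_grid):
    # Nonlinear least-squares fit of one voxel's magnitudes `mag` (flip angles x echoes)
    # by exhaustive search over a (t1, t2*) dictionary; for fixed (t1, t2*) the
    # optimal z0 is a closed-form linear least-squares scaling.
    best_err, best_params = np.inf, None
    for t1 in t1_grid:
        for t2s in t2s_grid:
            g = vfa_gre_signal(1.0, t1, t2s, flip_angles_deg, echo_times, tr)
            z0 = np.sum(g * mag) / np.sum(g * g)
            err = np.sum((z0 * g - mag) ** 2)
            if err < best_err:
                best_err, best_params = err, (z0, t1, t2s)
    return best_params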
While the optimizations involving t_1(n) and t_2^*(n) are nonconvex, they can still be efficiently solved using a dictionary-based exhaustive search approach, once the search intervals are properly defined. * Message passing from 𝒯_n to {z_ijn}. We can perform the message passing from 𝒯_n to {z_ijn} in a similar fashion. Specifically, the message from Ψ_ijn to x_ijn is Δ_Ψ_ijn→ x_ijn = logΨ(x_ijn,z_0(n),t_1(n),t_2^*(n)) =logδ(x_ijn-f_ij(z_0(n),t_1(n),t_2^*(n))) . The message from x_ijn to Γ_ijn is Δ_x_ijn→Γ_ijn = Δ_Ψ_ijn→ x_ijn . The message from Γ_ijn to z_ijn is Δ_Γ_ijn→ z_ijn = log∫Γ_ijn(x_ijn,z_ijn)·exp(Δ_x_ijn→Γ_ijn) dx_ijn = logδ(|z_ijn|-f_ij(z_0(n),t_1(n),t_2^*(n))) . Combining the messages from Γ_ijn, {Φ_ijm} and the VFA multi-echo image prior Ξ_ijn, we can finally calculate the posterior distribution of z_ijn as follows: p(z_ijn|) ∝exp(Δ_Γ_ijn→ z_ijn+∑_mΔ_Φ_ijm→ z_ijn+logΞ_ijn(z_ijn)) ∝exp(Δ_Γ_ijn→ z_ijn)·exp(-1/πτ_1(z_ijn)|z_ijn-μ_1(z_ijn)|^2) ≈exp(-1/πτ_(z_ijn)|z_ijn-μ_(z_ijn)|^2) , where μ_(z_ijn), τ_(z_ijn) are the corresponding mean and variance: μ_(z_ijn) = f_ij(z_0(n),t_1(n),t_2^*(n))·μ_1(z_ijn)/|μ_1(z_ijn)| τ_(z_ijn) = τ_1(z_ijn) . The rest message passing steps between z_ijn and τ_w are the same as the conventional linear AMP discussed in <cit.> (see Section S-I of the Supporting Information). As mentioned earlier, the message passing process will be performed iteratively until the convergence is reached. The recovered tissue parameters 𝒯_n are given by their MAP estimations in (<ref>). § METHODS We collected in vivo 3D brain data using a 3T MRI scanner (Prisma model, Siemens Healthcare, Erlangen, Germany), after obtaining written consent from the subjects and receiving approval from the Institutional Review Board of Emory University. The data were acquired using a 32-channel head coil and the GRE sequence. Our objective was to reduce the scan time to approximately 10 minutes, which led us to explore the low-sampling-rate regime, where the undersampling rates varied among 10%, 15%, 20%. Both retrospective and prospective undersampling schemes were implemented in our experiments. In the retrospective scheme, a fully-sampled dataset was acquired during the scan and then retrospectively undersampled. The reconstructions from the fully-sampled data were used as ground-truth images for comparing different approaches. On the other hand, the prospective scheme involved real-time acquisition of the undersampled dataset. Since it lacked ground-truth references, its purpose was to validate the feasibility of performing undersampling in a clinical setting. Retrospective Undersampling: The k-space was fully sampled during the scan within an elliptical region of the y-z plane, as illustrated in Fig. <ref>. Subsequently, retrospective undersampling was performed in the y-z plane using the undersampling patterns shown in Fig. <ref>, while the readout x-direction was always fully sampled. For the estimation of sensitivity maps via ESPIRiT <cit.>, the central 24× 24 k-space was fully sampled. We employed variable-density and Poisson-disk undersampling patterns and compared their performances. Six subjects, denoted as “R0–R5”, were recruited for the study. Among them, one subject “R0” was randomly chosen as the training dataset (for approaches that required parameter-tuning), while the remaining subjects "R1-R5" served as the test dataset. 
The acquisition parameters were as follows * We included three flip angles = 5°, 10°, 20°; the number of echoes = 4, the first echo time = 7 ms, echo spacing = 8 ms; TR = 36 ms; the number of slices = 96, slice thickness = 1.5 mm; FOV = 256 mm × 232 mm, in-plane resolution = 1 mm × 1 mm, bandwidth per pixel = 260 Hz. The acquisition time was 32.83 minutes. Prospective Undersampling: The prospective protocols were implemented via pulse sequence programming using the “IDEA” platform from Siemens. The undersampling took place in the y-z plane in real time, and the readout x-direction was always fully sampled. Five subjects, denoted as “P1–P5”, were recruited for this study. The acquisition parameters were as follows * We included three flip angles = 5°, 10°, 20°; the number of echoes = 4, the first echo time = 7 ms, echo spacing = 8 ms; TR = 36 ms; the number of slices = 96, slice thickness = 1.5 mm; FOV = 256 mm × 232 mm, in-plane resolution = 1 mm × 1 mm, bandwidth per pixel = 260 Hz. When the undersampling rates vary in {10%, 15%, 20%}, the acquisition times were 5.43, 7.43 and 9.43 minutes respectively. The double-flip angle methods were employed to measure B1+ field using a echo planar imaging sequence <cit.>. The obtained B1+ field was then combined with Bloch-simulation of the slice-profile of the slab-selective radio-frequency pulse in 3D GRE sequence to calculate a spatially resolved flip-angle map. With retrospective undersampling, we investigated the use of complementary undersampling patterns shown in Fig. <ref> for data acquisition through the following undersampling schemes: * U1: The sampling patterns are complementary across different flip angles and echo times. * U2: The sampling patterns are complementary across different flip angles, but the same across different echo times. * U3: The sampling patterns are the same across different flip angles, but complementary across different echo times. * U4: The sampling patterns are the same across different flip angles and echo times. After the best undersampling schemes for reconstruction were determined, we applied them in prospective undersampling. The Daubechies wavelet family was chosen to obtain the sparse representation of an image <cit.>. The orthogonal “db1-db10” wavelet bases are commonly used, with the complexity of the basis increasing with its order. For the reconstructions of R_2^* map and QSM <cit.>, it was observed that employing a higher-order wavelet basis generally resulted in improved image quality. In our experiments, we utilized the db6 basis with 4 levels to strike a balance between wavelet complexity and image quality. §.§ Reconstruction Approaches We conducted a comparison between the proposed “AMP with built-in parameter estimation” (AMP-PE) approach and the baseline least squares (LSQ) approach, as well as the state-of-the-art l_1-norm regularization (L1) approach <cit.>. * The least squares approach: min__ij _ij-_ij_ij_2^2 min_z_0,t_1,t_2^* ∑_ij(f_ij(z_0,t_1,t_2^*)-|z_ij|)^2 . The least squares approach does not require parameter tuning, and the solutions can be obtained using gradient descent. Specifically, the recovery of z_0, t_1, and t_2^* is performed sequentially until convergence. As mentioned earlier, the recovery of z_0 is a convex problem and can be easily solved. On the other hand, the recovery of t_1 and t_2^* is a nonconvex problem, but it can still be efficiently solved through exhaustive search within the predefined intervals. 
* The l_1-norm regularization approach: min__ij _ij-_ij_ij_2^2+κ·_ij_1 min_z_0,t_1,t_2^* ∑_ij(f_ij(z_0,t_1,t_2^*)-|z_ij|)^2 , where _ij = _ij^-1, κ is the regularization parameter. The l_1-norm of the wavelet coefficients was employed as the regularizer to encourage sparse solutions. The parameter κ=5e-2 was tuned on the training dataset R0 to achieve optimal performance. We utilized the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) to solve (<ref>) <cit.>. The recovery of z_0, t_1, and t_2^* was also carried out sequentially until convergence. * In the proposed AMP-PE approach, when the undersampling rate is low, the damping operation is necessary to stabilize the AMP update of the wavelet coefficients <cit.>. Let μ_d^(t)(v) denote the damped solution in the previous t-th iteration, and μ^(t+1)(v) denote the undamped solution in the (t+1)-th iteration. The damping operation simply proceeds as follows: μ_(z_ijn)^(t+1)(d) = μ_(z_ijn)^(t)(d)+α·(μ_(z_ijn)^(t+1)-μ_(z_ijn)^(t)(d)) , where α∈(0,1] is the damping rate, μ_(z_ijn)^(t+1)(d) is the damped solution in the (t+1)-th iteration. The damping rate α can be regarded as the step size of this iterative update. For a sampling rate of 10%, we select α=0.5 to slow down the iterative update. However, for relatively higher sampling rates (≥ 15%), we can omit the damping step and choose α=1. The L1 approach requires parameter tuning on a training dataset acquired under the same setting as the test data. The LSQ approach, on the other hand, does not require parameter tuning. The AMP-PE approach automatically and adaptively computes the MAP estimations of distribution parameters θ,τ_w. This characteristic makes it a convenient choice for clinical settings across various acquisition protocols and scanners. Unlike the LSQ and L1 approaches, which separate the recovery of VFA multi-echo images from the recovery of tissue parameters, the AMP-PE approach jointly recovers them by combining the VFA multi-echo image prior with the signal prior. § RESULTS We first compare different approaches and undersampling schemes using the variable-density pattern. We subsequently highlight the performance differences between the variable-density and Poisson-disk patterns. §.§ Retrospective Undersampling with the Variable-density Pattern Using a brain mask, we computed the normalized root mean square error (NRMSE) within the brain region. Specifically, the reciprocal of the T_2^* map, referred to as the R_2^* map, is frequently employed in brain studies <cit.>. Therefore, we computed the NRMSE with respect to the R_2^* map in this paper. Table <ref> presents the NRMSEs of the recovered T_1, R_2^*, and proton density maps for subject R1. Due to space constraints, the results for the remaining subjects, R2 to R5, are provided in Tables S1 to S4 in the Supporting Information. Across various sampling rates, both the L1 and AMP-PE approaches generally outperformed the LSQ approach. When the sampling rates were relatively lower at 10% and 15%, AMP-PE exhibited superior performance over L1, owing to its joint reconstruction of the tissue parameters. At a relatively higher sampling rate of 20%, both AMP-PE and L1 yielded comparable results. Different sampling schemes had varying effects on the recovered tissue parameters. Particularly, at low sampling rates of 10% and 15%, the performance differences among the schemes became more evident. Regarding T_1 mapping, schemes U3 and U4 exhibited comparable performance, surpassing the performance of schemes U1 and U2. 
Concerning R_2^* mapping, schemes U1 and U2 showed similar performance, significantly outperforming schemes U3 and U4. In the case of proton density mapping, schemes U1 and U2 performed similarly well, outshining schemes U3 and U4. As an example, we present the recovered images and the corresponding absolute errors for one slice of the 3D brain image from subject R1 at a sampling rate of 10%. They are shown in Fig. <ref>–Fig. <ref>. Due to space constraints, the recovered images at sampling rates of 15% and 20% are provided in Fig. S2–S7 in the Supporting Information. Visual inspection of the images aligns with the quantitative NRMSE findings. The reconstruction experiments were conducted on the MATLAB platform using a machine (Intel Xeon Gold 5218 Processor, 2.30GHz) with 200 Gb RAM. Using the 15% case as an example, we compared the runtime of each method. The LSQ, L1 and AMP-PE approaches took 1.09, 4.23 and 5.67 hours respectively to complete the reconstruction process. §.§ Prospective Undersampling with the Variable-density Pattern The results obtained from retrospective undersampling revealed that schemes U1 and U2 exhibited similar performance in terms of R_2^* and proton density mapping, while schemes U3 and U4 demonstrated similar performance in T_1 mapping. Additionally, U2 and U4 proved to be more straightforward to implement within the acquisition protocol as they did not require different sampling patterns across echo times. Hence, for prospective undersampling in the clinical setting, we opted for U2 and U4. It is important to note that in this scenario, we lacked access to ground-truth reconstructions necessary for calculating NRMSE values. Taking one slice from the 3D brain image from the subjection P1 for example, we showcase the recovered images using the L1 and AMP-PE approaches when the sampling rate was 10% in Fig. <ref>. The recovered images for sampling rates of 15% and 20% are provided in Fig. S8–S9 in the Supporting Information. A visual inspection indicates that the prospective undersampling scheme yields comparable and consistent results to the retrospective case. §.§ Comparison of Variable-density and Poisson-disk Sampling Patterns By utilizing the AMP-PE reconstruction approach with U2 and U4 as the undersampling schemes, we can emphasize the performance distinctions between variable-density (VD) and Poisson-disk (PD) patterns, as showcased in Table <ref>. Comprehensive results pertaining to PD, including other approaches and sampling schemes, can be found in Tables S5-S10 of the Supporting Information. Notably, when the sampling rate was 10%, VD exhibited significantly superior performance compared to PD. Conversely, at sampling rates of 15% and 20%, PD outperformed VD. For instance, considering a selected slice from the 3D brain image of subject P1, the recovered T_1 maps are shown in Fig. <ref>, while the recovered R_2^* and proton density maps are shown in Fig. S10-S11 of the Supporting Information. Visual inspection confirms that VD indeed outperforms PD in the case of 10% sampling rate. However, for the 15% and 20% cases, the superiority of one pattern over the other may vary depending on different brain regions, with PD achieving an overall lower NRMSE. § DISCUSSION The proposed nonlinear AMP-PE framework integrates the VFA multi-echo image prior and the signal model prior to jointly recover tissue parameters. Notably, we observe that the benefits of joint reconstruction are more pronounced at lower sampling rates, specifically 10% and 15%. 
At the 20% sampling rate, where more data are available, the VFA multi-echo image prior assumes a dominant role, surpassing the significance of the signal model prior. The signal model prior is enforced on the signal magnitudes of each voxel across different flip angles and echo times, representing a voxel-wise local prior. Conversely, the VFA multi-echo image prior originates from the sparse prior on image wavelet coefficients. As the wavelet transform is applied to the entire image, the sparse prior, and therefore the VFA multi-echo image prior, can be considered as global priors in this context. The convergence of AMP has only been established for linear random Gaussian measurement systems <cit.>. Establishing convergence guarantees for general measurement systems remains an open question. When the sampling rate was as low as 10%, we employed the damping operation on the wavelet coefficients in equation (<ref>) to ensure the stability of the nonlinear AMP-PE convergence. Additionally, by leveraging a dictionary-based exhaustive search, the nonlinear AMP-PE solves a nonconvex problem to reconstruct tissue parameters in an alternating fashion. It is worth noting that the initialization step plays a crucial role in achieving convergence and avoiding getting stuck in unfavorable local optima. In our study, we discovered that the least-squares solution served as a suitable initialization for AMP-PE. AMP-PE treats the distribution parameters λ, τ_w as variables and is capable of computing their posterior distributions p(λ|) and p(τ_w|) through message passing. However, unlike the posterior distributions of the actual “image” variables x, z, v, the distributions p(λ|) and p(τ_w|) are not approximated as Gaussians in AMP-PE. Therefore, accurately computing the MAP estimations of these parameters becomes challenging since closed-form solutions are typically unavailable. To address this, we relied on a second-order method to compute the MAP estimations of these parameters. To ensure a favorable starting point and avoid undesirable local optima, we initialized the distribution parameters using maximum likelihood estimations based on the least-squares solutions. Additionally, the damping operation can also be applied to the estimated parameters if necessary[In this study, we did not apply damping to the distribution parameters as the damping applied to the wavelet coefficients had already stabilized AMP-PE for the 10% sampling rate case.]. Experiments have revealed that both the sampling scheme and sampling pattern in a 3D GRE sequence have an impact on the reconstructed tissue parameters. There is no one-size-fits-all sampling scheme that is suitable for all types of reconstructions. Schemes U1 and U2 are better suited for T_2^* and proton density mapping, while U3 and U4 are more appropriate for T_1 mapping. In practical applications, we recommend adopting schemes U2 and U4 for easier implementation in prospective undersampling. As depicted in Fig. <ref>, the variable-density (VD) pattern acquires a greater number of low-frequency samples, whereas the Poisson-disk (PD) pattern acquires more high-frequency samples due to its uniform sampling in k-space. When the sampling rate is relatively low at 10%, the results in Table <ref> demonstrate that having more low-frequency measurements leads to better performance. 
However, as the sampling rates increase to 15% and 20%, high-frequency measurements become more influential, contributing significantly to image quality by capturing more detailed structural information. In prospective undersampling, we do not have access to ground-truth reference images. Fig. <ref> illustrates a comparison of the central k-space data acquired at different sampling rates. The absolute differences in k-space data between the undersampled and fully-sampled cases depend on both the signal magnitude and the sampling rate. Larger signal magnitudes within each undersampled case correspond to larger absolute differences. As the undersampling rate increases, the absolute difference decreases. Consequently, reconstructions based on fully-sampled data cannot serve as reliable ground-truth references in prospective undersampling scenarios, and quantitative evaluations can only be conducted in retrospective undersampling cases. § CONCLUSION We proposed a Bayesian formulation that combines the signal model and sparse prior on VFA multi-echo images to achieve a joint recovery of T_1, T_2^*, and proton density maps. This joint approach outperformed the decoupled methods in in vivo experiments. We employed AMP-PE for probabilistic inferences and the reconstruction of quantitative maps. AMP-PE offers automatic and adaptive parameter estimation capabilities, making it a convenient choice for clinical settings with varying acquisition protocols and scanners. Additionally, we explored the use of complementary undersampling patterns in quantitative MRI to further enhance image quality. Our experiments revealed that fixed sampling patterns across different echo times are suitable for T_1 mapping, while complementary patterns across different flip angles are beneficial for T_2^* and proton density mappings. Supporting Information Additional Supporting Information may be found online in the Supporting Information section. Supporting Tables S1–S4 Retrospective undersampling with the variable-density pattern: normalized root mean square errors of recovered T_1, R_2^* and proton density Z_0 maps from the subjects at different sampling rates (10%, 15%, 20%). Three reconstruction approaches (LSQ, L1, AMP-PE) with four undersampling schemes (U1–U4) are compared in this table. Supporting Tables S5–S9 Retrospective undersampling with the Poisson-disk pattern: normalized root mean square errors of recovered T_1, R_2^* and proton density Z_0 maps from the subjects at different sampling rates (10%, 15%, 20%). Three reconstruction approaches (LSQ, L1, AMP-PE) with four undersampling schemes (U1–U4) are compared in this table. Supporting Figure S1 The factor graph of the sparse signal recovery task under the AMP framework: “◯” represents the variable node, and “▪” represents the factor node. Supporting Figures S2–S7 Retrospective undersampling with the variable-density pattern at the 15% and 20% undersampling rates: recovered T_1, R_2^*, proton density maps and absolute error maps from the subject R1 using three reconstruction approaches (LSQ, L1, AMP-PE) and four undersampling schemes (U1–U4). Supporting Figures S8, S9 Prospective undersampling with the variable-density pattern at the 15% and 20% undersampling rates: recovered tissue parameters and absolute error maps from the subject P1 using two reconstruction approaches (L1, AMP-PE) and two undersampling schemes (U2, U4). 
Supporting Figures S10, S11 Retrospective undersampling: recovered R_2^*, proton density maps and absolute error maps from the subject R1 using the variable-density (VD) and Poisson-disk (PD) patterns at different sampling rates (10%, 15%, 20%). AMP-PE and U2 were chosen as the reconstruction approach and undersampling scheme respectively. When the sampling rate is 10%, VD achieved a lower NRMSE; when the sampling rates are 15% and 20%, PD achieved lower NRMSEs.
http://arxiv.org/abs/2307.00761v1
20230703053828
Learning Noise-Resistant Image Representation by Aligning Clean and Noisy Domains
[ "Yanhui Guo", "Xiaolin Wu", "Fangzhou Luo" ]
cs.CV
[ "cs.CV" ]
Recent supervised and unsupervised image representation learning algorithms have achieved quantum leaps. However, these techniques do not account for representation resilience against noise in their design paradigms. Consequently, these otherwise effective methods fail when confronted with noise outside the training distribution, such as complicated real-world noise that is usually opaque to model training. To address this issue, dual domains are optimized to separately model a canonical space for noisy representations, namely the Noise-Robust (NR) domain, and a twinned canonical clean space, namely the Noise-Free (NF) domain, by maximizing the interaction information between the representations. Given the dual canonical domains, we design a target-guided implicit neural mapping function to accurately translate the NR representations to the NF domain, yielding noise-robust representations by eliminating noise redundancies. The proposed method is a scalable module that can be readily integrated into existing learning systems to improve their robustness against noise. Comprehensive experiments on various tasks using both synthetic and real-world noisy data demonstrate that the proposed Target-Guided Dual-Domain Translation (TDDT) method achieves remarkable performance and robustness in the face of complex noisy images.
Learning Noise-Resistant Image Representation by Aligning Clean and Noisy Domains Fangzhou Luo August 1, 2023 =================================================================================
§ INTRODUCTION In recent years, both supervised and unsupervised techniques for representation learning have been intensively researched. However, the popular techniques overlook the anti-noise property of learnt representations, especially their immunity to out-of-distribution noise. This problem increases the risk of performance degradation in deployed applications and remains largely unresolved. There are two common techniques to address the above issue. A typical type of approach is to use data augmentation during model training. Another option is to incorporate a restoration network as a preprocessing module into the inference pipeline. Due to the distributional disparity between the training and inference data, both of these techniques suffer from a well-known yet challenging generalization issue, which results in poor performance. Previous studies <cit.> have shown that a tiny but blind noisy disturbance can even throw off a well-trained model.
Solving this challenge entails collecting a large number of noisy and clean image pairs from the inference scenarios for model training, real-world noisy-clean image pairs. This intuitive technique, however, is impracticable owing not only to the time-consuming data collection process but also to the complex noise models in different inference situations. The fact that there are an infinite number of noise model possibilities, but only a subset of them can be covered by simulation or data gathering, is the trickiest aspect of the aforementioned challenge. To overcome the above difficulty, it is therefore essential to find a domain mapping network that can project the homologous noisy images into an implicitly shared noise-free representation (or named intrinsic representation). Here, we define homologous noisy images as a set of noisy images that share the same image content (rotation-independent and translation-independent) but suffer from diverse noises. The intrinsic representation translated from the homologous noisy images should perform robustly for both seen and, more importantly, unseen noise disturbance. Prior arts such as <cit.> solve this problem by straightforwardly training a mapping network using paired data, but it often fails due to the learned mapping from noisy images to clean images heavily relies on the assumed training distribution of the paired data. Toward a more robust mapping, we propose a target-guided dual-domain translation technology. We first train a canonical space (the noise-robust (NR) domain) from the homologous noisy images by utilizing their self-consistent property that these noisy images share the same image content. The canonical space training only requires the NR domain to be optimized by the “noise-to-noise" paradigm rather than the traditional “noise-to-clean" paradigm, as opposed to depending on the training assumptions of paired noisy and clean images. Meanwhile, we separately train the corresponding noise-free (NF) domain by only using clean images. After that, we optimize a target-guided domain translation network to translate the NR representations of the canonical space into the NF representations of the clean space. It is self-evident that this mapping from a canonical space to a clean space performs more robustly than the conventional design of mapping from a diverse raw noisy space to a clean space, since the canonical space eliminates the noise diversity, yielding noise-robust representations. To achieve the aforementioned objectives, we devise an effective and readily achievable method for endowing image representations with noise resistance by interaction information maximization. First, an encoder E_ N and an encoder E_ C are trained to identify an NR domain and an NF domain, respectively. For the sake of simplicity, let us consider two homologous noisy images denoted by 𝐧^1_d and 𝐧^2_d. In the NR domain, these two homologous noisy images should share an intrinsic representation 𝐳_d. To this end, it requires a way to eliminate the noise redundancy between the two mutually redundant observations, 𝐧^1_d and 𝐧^2_d, while retaining the intrinsic information shared by them. It amounts to maximizing the interaction information I(𝐳_d;𝐧^1_d;𝐧^2_d), according to the information bottleneck principle <cit.>. In the NF domain, it is required to find noise-free representations 𝐳_c matched after translating the NR representations 𝐳_d. 
The challenge at hand is essentially identical to finding a surjective mapping function that can precisely translate the 𝐳_d to 𝐳_c (filtering out the redundant noise in the canonical space). However, the accurate mapping function is not easy to learn due to the high complexity of the NR and NF representations. To solve this problem, we propose a target-guided representation mapping network (RMN), denoted by 𝒯 that takes the NR representations 𝐳_d produced by the NR encoder E_ N and their pseudo NF representations 𝐳̃_c produced by the NF encoder E_ C as the input. The RMN model translates 𝐳_d with the guidance of 𝐳̃_c to the NF domain of E_ C. The RMN model can be trained using either synthetic or real image pairs. Fig <ref> illustrates the overall framework of our method. As introduced above, the proposed technology named Target-Guided Dual-Domain Translation (TDDT), follows a two-stage training methodology. First, we independently train an NR domain and an NF domain by self-supervised interaction information maximization. In the subsequent translation stage, the target-guided RMN model is trained to translate the representations from the NR domain to the NF domain using the pseudo-clean representations as guidance. Unlike adversarial learning methods <cit.>, which defend the attacks from specially optimized noise models, our method is a plug-in-play image representation boosting approach, improving the robustness of learned representations against more general noise in practice. We demonstrate the success of our technique in a variety of tests, including 1) self-supervised classification on noisy images, 2) zero-shot cross-domain retrieval on low-quality images, and 3) image segmentation and blind image denoising on synthetic and real-world noisy images. § RELATED WORK Information Bottleneck Principle Representation learning <cit.> aims to extract features from given data that are useful for prospective tasks. The information bottleneck (IB) principle <cit.>, an information-theoretic representation regularization method, is introduced in order to encode the minimal sufficient information of the given observations, which preserves as much intrinsic information as possible. Deep VIB approaches <cit.> are developed based on the IB to make the IB objective trainable by optimizing a variational lower bound of the objective. Some techniques <cit.> extend the VAEs <cit.> to invariant representation learning by disregarding domain-specific variations. However, these methods are primarily concerned with preserving the maximum amount of information from the observations, rather than eliminating the redundant noise. It makes the learned representations less robust, as this redundant noise is highly intertwined with the anticipated semantic information. Improving Image Representation Robustness Training the representation model with the data augmentation methods <cit.> is a straightforward and efficient method for enhancing the robustness of the image representation. However, this method may fail in the face of complicated degradation. As an alternative, image restoration (IR) models <cit.> are utilized as a preprocessing module to filter out the noise of the input images. However, due to insufficient real-world paired data for training, IR models that rely on synthetic data frequently suffer from the domain shift problem and perform poorly in real-world applications. In addition, the bulk of the IR models concentrates on increasing the recovered visual quality, such as GAN-based approaches <cit.>. 
These visual quality oriented methods could generate fake details such as artifacts that exacerbate the errors of subsequent tasks. Our method does not rely on a large amount of realistic training data but is robust to unobserved noise in practice. § APPROACH The proposed two-stage TDDT technology requires two self-supervised trained domain encoders: an NR domain encoder E_ N and an NF domain encoder E_ C. To train them, in the first training stage, we generate homologous noisy images 𝐧^1_d and 𝐧^2_d by degrading the noise-free images 𝐱_c. As shown in Fig <ref>, the homologous noisy images are used to train the NR domain encoder E_ N by maximizing the interaction information I(𝐳_d;𝐧^1_d;𝐧^2_d). The corresponding noise-free images 𝐱_c are used to train the NF domain encoder E_ C by maximizing the mutual information I(𝐳_c;𝐱_c). In the second training phase, both the domain encoders E_ N and the E_ C are frozen. We then separately train the guided domain translation network RMN 𝒯 to map the NR representations of the input noisy images into the target NF domain. §.§ Preliminary Mutual Information Maximization To make self-supervised domain training available, we should pave the way to computing the derivable mutual information. For the mutual information I(𝐱;𝐳) estimation, it is equivalent to computing the Kullback-Leibler (KL) divergence between the joint p(𝐱,𝐳)=p(𝐳|𝐱)p(𝐱) and the marginal p(𝐳)p(𝐱), where p(𝐱) denotes the data distribution. Thus, we have I(𝐱;𝐳) = 𝒟_KL(p(𝐳|𝐱)p(𝐱)||p(𝐳)p(𝐱)) where p(𝐳) is often assumed the Gaussian distribution. However, the above Eq. (<ref>) does not have a theoretical upper bound. Maximizing it cannot guarantee the convergence of the optimization process. Following previous works <cit.>, we exploit a non-KL divergence named Jensen-Shannon divergence (JSD) to compute the mutual information. With JSD, the Eq. (<ref>) satisfies I(𝐱;𝐳) = 𝒟_JSD(p(𝐱);p(𝐳)) ≥𝔼_x∼ p(𝐱)[-sp(-ℱ_ω(x))] - 𝔼_y∼ p(𝐳)[sp(ℱ_ω(y))] where sp(x)=log(1+e^x) . The discriminator function ℱ_ω <cit.> is modeled by a neural network with parameters ω. A more detailed derivation can be found in Sec.A of the supplementary material. NR Representation As we have mentioned earlier, the NR domain spanned by the encoder E_ N provides a canonical representation space for noisy images. In this domain, homologous noisy images can share a similar, or theoretically the same, representation, which gives rise to highly robust and accurate domain translation for the mapping network. Building the NR domain requires eliminating the redundant noise by maximizing the interaction information, I(𝐳_d;𝐧^1_d;𝐧^2_d) defined as follows I(𝐳_d;𝐧^1_d;𝐧^2_d) = I(𝐳_d;𝐧^1_d) - I(𝐳_d;𝐧^1_d|𝐧^2_d)_Redundancy = I(𝐳_d;𝐧^2_d) - I(𝐳_d;𝐧^2_d|𝐧^1_d) where Eq. (<ref>) holds due to symmetry. The conditional mutual information I(𝐳_d;𝐧^1_d|𝐧^2_d) represents the incremental mutual information between 𝐳_d and 𝐧^1_d when given 𝐧^2_d, the redundancy between 𝐧^1_d and 𝐧^2_d. Maximizing I(𝐳_d;𝐧^1_d;𝐧^2_d) encourages 𝐳_d to encode the noise-invariant information shared by the given homologous noisy images. It amounts to retaining representative information by maximizing I(𝐳_d;𝐧^1_d) while reducing the redundancy by minimizing I(𝐳_d;𝐧^1_d|𝐧^2_d). §.§ Learning Dual Canonical Domains NR Domain E_ N In our self-supervised setting, given a noise-free observation 𝐱_c, we generate two homologous noisy images 𝐧^1_d and 𝐧^2_d by randomly choosing two different noise models. 
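A minimal sketch of this view construction is given below (Python/NumPy; Gaussian noise of two independently drawn strengths stands in for the two randomly chosen noise models, and the concrete mixed degradation pipeline used in our experiments is specified in the experimental section):

import numpy as np

def two_homologous_views(clean, rng):
    # Two views of the same clean image under independently drawn degradations.
    def degrade(x):
        sigma = rng.uniform(5, 40) / 255.0   # illustrative noise-level range
        return np.clip(x + rng.normal(0.0, sigma, x.shape), 0.0, 1.0)
    return degrade(clean), degrade(clean)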
These noisy views satisfy a mutual redundancy condition where they share the same image content but suffer from different superfluous noise. To find an NR representation 𝐳^1_d of 𝐧^1_d, according to Eq. (<ref>), the objective function is formulated as ℒ_n1 = I_ϕ(𝐳^1_d;𝐧^1_d|𝐧^2_d) - I_ϕ(𝐳^1_d;𝐧^1_d) where ϕ denotes the trainable parameters of the encoder E_ N. Symmetrically, from Eq. (<ref>), we can derive a loss for the second noisy view 𝐧^2_d: ℒ_n2 = I_ϕ(𝐳^2_d;𝐧^2_d|𝐧^1_d) - I_ϕ(𝐳^2_d;𝐧^2_d) The objective of NR domain training is the average of Eq. (<ref>) and Eq. (<ref>). To make the optimization trainable, we minimize the upper bound of the interaction information. The derived objective function is defined as follows ℒ_N = - I_ϕ(𝐳^1_d;𝐧^1_d) + I_ϕ(𝐳^2_d;𝐧^2_d)/2 + λ𝒟_SKL[p_ϕ(𝐳^1_d|𝐧^1_d)||p_ϕ(𝐳^2_d|𝐧^2_d)] where the coefficient λ regulates the trade-off between the information remaining and the redundancy disregarding. 𝒟_SKL denotes the symmetric KL divergence defined by the average of 𝒟_KL[p_ϕ(𝐳^1_d|𝐧^1_d)||p_ϕ(𝐳^2_d|𝐧^2_d)] and 𝒟_KL[p_ϕ(𝐳^2_d|𝐧^2_d)||p_ϕ(𝐳^1_d|𝐧^1_d)]. The full derivation of ℒ_ N can be found in the Sec.B of the supplementary material. The overall training process is shown in Fig <ref>. NF Domain E_ C For training the NF domain encoder E_ C, we maximize the mutual information I_ξ(𝐳_c;𝐱_c) between the clean representation z_c∼ p_ξ(𝐳_c|𝐱_c) and the noise-free image 𝐱_c. The objective function of E_ C is ℒ_C = - I_ξ(𝐳_c;𝐱_c) where ξ represents the parameters of the encoder E_ C. Both the NR domain representations and the NF domain representations are assumed to follow Gaussian distributions. Following <cit.>, all our mutual information estimators are based on the global mutual information between the representation and the input image as well as the local mutual information between the representation and the encoded shallow feature maps. §.§ Target-Guided Domain Translation Network For precise domain translation, we develop the target-guided domain translation module RMN, denoted by 𝒯, that maps the canonical NR representations to the NF domain, thereby further removing the noise redundancy in the NR representations. In RMN, we devise a novel pseudo-representation guided translation mechanism. We formulate the mapping process of the RMN network as follows: 𝒯(𝐳_d,𝐳̃_c): x ∼ p_ϕ(𝐳_d|𝐧_d) y ∼ p_τ(𝐳_τ|𝐳_d,𝐳̃_c) where p_ϕ(·) and p_ξ(·) represent the output distributions of the encoders E_ N and E_ C, respectively. The pseudo-clean representation, denoted by 𝐳̃_c, is obtained from E_ C using the same noisy input as the NR domain encoder E_ N. A subnetwork 𝒯_g takes the pseudo clean representation 𝐳̃_c as input and produces the guided information to yield the output distribution p_τ(·) of the RMN model. The training objective of RMN is to minimize the distribution difference between p_τ(𝐳_τ|𝐳_d,𝐳̃_c) and the ground truth distribution p_ξ(𝐳_c|𝐱_c). In the RMN model, the target-guided subnetwork 𝒯_g can consist of several novel guided processing layers which can be of either guided convolution (G-Conv) layers or guided fully connection layers (G-FC), depending on the feature types. Fig <ref> shows the convolution-based RMN network structure used in our experiments in Sec.<ref>. It consists of an RMN-Head (RMN-H) module, an RMN-Guidance (RMN-G) module (the 𝒯_g in Eq.(<ref>)), and an RMN-Tail (RMN-T) module. In the proposed 𝒯_g subnetwork, G-Conv layers are used to modulate the input features under the guidance of the pseudo-clean representations. 
In the G-Conv layer, the guided representation 𝐳̃_c is fed to convolution layers followed by the average pooling layer to obtain a modulation feature tensor. Then we reshape the feature tensor and pass it through an FC layer to predict the adaptive convolution kernels which are conditioned on the pseudo representation 𝐳̃_c. The input features are convoluted with the predicted convolution kernels to obtain the guided features ℱ_k. In addition, we devise a spatial attention branch. The guided feature ℱ_k and spatially attention-weighted features ℱ_a are fused to produce the mapped representations 𝐳_τ. The details of G-FC based RMN networks for our experiments can be found in our supplementary material. The rationale behind the guidance design is shown in Fig <ref>. In this figure, we visualize the correlation of the pseudo-clean representations and the ground truth representation by t-SNE <cit.>. As seen, the pseudo-clean representations 𝐳̃_c (green points) are clustered near the ground truth representation 𝐳_c (red star). Given that the objective of RMN is to map the representations from the NR domain of E_ N to the NF domain of E_ C, these pseudo-clean representations can thus serve as anchors to guide the domain mapping. For training the RMN model, noisy and clean image pairs (𝐧_d,𝐱_c) are generated to train the RMN network, while both the NR encoder E_ N and the NF encoder E_ C are fixed. Fig <ref> visualizes the training process. Making the learned representation adaptive for different downstream tasks also requires training an adaptor network jointly or separately. Thus, the objective function of the translation stage is defined as follows ℒ_𝒯 = γ_1 𝔼[𝐳_τ-𝐳_c_1] + γ_2 ℒ_ada(𝒜(𝐳_τ),𝐲) where ℒ_ada(·) denotes the task-related adaptor loss. 𝐳_τ = 𝒯(𝐳_d,𝐳̃_c). 𝒜(·) and 𝐲 represent the adaptor network and the task-related ground truth, respectively. For instance, in an image restoration task, the adaptor network is a reconstruction network that reconstructs the noise-free image from the translated representation 𝐳_τ; and 𝐲 is the ground truth noise-free image. In this case, the adaptor loss is ℓ_1-norm. For a classification task, the adaptor network is a classification network; 𝐲 is the ground truth class label, and the adaptor loss is the cross-entropy loss for classification. In our implementation, we empirically find that separate training is better than joint training. We thus first set γ_1=1,γ_2=0 to train the RMN network 𝒯, and we then fix 𝒯 and set γ_1=0,γ_2=1 to train the adaptor network for the downstream task. § EXPERIMENTS Without loss of generality, three different tasks are considered for experiments that include: 1) self-supervised classification of noisy images, which are used to evaluate the intrinsic information extraction of E_ N and the guided domain translation quality; 2) zero-shot cross-domain image retrieval experiments on low-quality images that are used to evaluate the semantic information preservation ability of TDDT method on cross-domain data; and 3) image segmentation, and image denoising experiments on real-world noisy datasets that are used to evaluate the image reconstruction quality of the TDDT representations. In our experiments, we follow the same setting for the NR domain training, if without additional notes. Two homologous noisy images 𝐧^1_d and 𝐧^2_d are generated by degrading a clean observation using a mixed degradation model. 
§.§ Noise-Robust Self-Supervised Classification In this subsection, we evaluate our method on the self-supervised classification task where the test images are noisy. We compare our model to several representative IB-based approaches that employ the mutual information maximization strategy by measuring their robustness under varying levels of noise. Dataset The homologous noisy training datasets are generated from the training sets of Fashion-MNIST <cit.> and CIFAR-10 <cit.>. The noisy test datasets are generated by adding random additive white Gaussian noise with standard deviation σ ranging from 0 to 40. Implementation Details For a fair comparison, all the compared models are trained on the same datasets using the same encoder architecture as ours. In these experiments, both E_ N and E_ C consist of shallow FC layers with ReLU activations, generating 64-dimensional representations. As the representations are 1-D tensors, the convolution layers of RMN are replaced with FC layers named G-FC. The G-FC layer takes the feature concatenated with the guided pseudo representation 𝐳̃_c as input to realize the guidance mechanism. Details of the network structure can be found in Sec.C of the supplementary material. All the models are trained for 500 epochs with a batch size of 64 on the training set without labels. For classification, the adaptor network is a logistic regression classifier trained on top of the fixed representation encoder. Following the evaluation standard of unsupervised classification <cit.>, we report the accuracy of the models on the degraded test set. Results Table <ref> summarizes the results. As one can observe, our NF representations perform robustly at various noise levels, even on data with strong noise. In comparison, the adopted competitive methods, including β-VAE <cit.>, InfoMax <cit.>, DIM <cit.> and MISELBO <cit.>, suffer a significant performance penalty. Moreover, as a plug-and-play module, our method can also improve the robustness of contrastive learning methods such as SimCLR <cit.>. In Table <ref>, we observe that the model “Ours (E_ N)”, which provides a canonical representation space, is more robust to the added noise than the competitors. This verifies our interaction information maximization strategy. However, there is still noise redundancy in the NR space. We thus adopt the target-guided RMN model to improve the representation quality by translating the NR representation into the NF space. As demonstrated by the results of the model “Ours (TDDT)”, this further enhances the noise resistance of the representations. §.§ Robust Zero-Shot Cross-Domain Retrieval For the experiments in this subsection, we adopt the zero-shot cross-domain image retrieval (ZS-CDIR) task, an extension of sketch-based image retrieval <cit.>. None of the retrieval classes in the test set are accessible during training. During the test (retrieval) phase, the ZS-CDIR task ranks natural images according to an unseen test (query) sketch. In these experiments, the challenge for TDDT is extended to learning intrinsic representations from cross-domain homologous noisy images.
Accurate retrieval therefore entails noise-robust representation extraction that can overcome the cross-domain gap while resisting noise. Datasets The Sketchy dataset <cit.> contains 12,500 images and 75,471 freehand sketches distributed over 125 classes. As in <cit.>, we adopt the Sketchy-extended dataset, which contains 73,002 unaligned images and 75,479 sketches distributed over 125 classes. To train the model without ground-truth paired samples, we randomly select one photo and one sketch per category to constitute a training pair, which ensures that both views contain equivalent semantic information. In each pair, only the class of the objects is shared across the two domains. Implementation Details Following <cit.>, the 100 seen classes of sketches and images are used for training, and the remaining 25 unseen classes are used for testing. We use image features extracted from a VGG16 model fine-tuned on the training set of the Sketchy-extended dataset. The resulting flattened 512-dimensional vectors are the inputs of our noise-robust encoder E_ N. For sketch retrieval, we only train an NR encoder, consisting of two hidden FC layers of 512 units and an FC layer of 64 units, to produce 64-dimensional sketch representations. For the RGB image representations, we train a TDDT network to improve the robustness of the representation. After the noise-robust training, we follow <cit.> and fine-tune the representation networks for zero-shot retrieval by multi-view learning. More details about the procedure and the network architecture are given in Sec.C of the supplementary material. Results We conduct the retrieval by computing the cosine similarities between the representations extracted by the trained models. For comparison, we adopt CAAE <cit.>, FRWGAN <cit.>, SAE <cit.>, JLSE <cit.>, ZSIH <cit.>, SEM-PCYC <cit.>, LCALE <cit.>, IIAE <cit.>, SBTKNet <cit.> and TVT <cit.> as competitors, which are elaborately designed for ZS-CDIR or general zero-shot learning. We use the mean average precision (mAP@all) and the precision over the top 100 retrievals (Precision@100) as evaluation metrics. Table <ref> reports the quantitative results. Our model outperforms all the competitive state-of-the-art methods, even though some of them utilize external information to increase their performance. The results verify that our method can effectively disentangle cross-domain semantic information from noise redundancy and generalize well to unseen categories and unknown noise. Fig <ref> presents the qualitative results of ZS-CDIR. Additional visualized ZS-CDIR results are reported in Sec.E of the supplementary material.
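For concreteness, the retrieval and evaluation protocol used above (cosine-similarity ranking, with Precision@100 and per-query average precision for mAP) can be sketched as follows; the function is a simplified stand-in for the actual evaluation code, and all names are illustrative.

```python
import numpy as np

def retrieve_and_score(query_z, gallery_z, query_labels, gallery_labels, top_k=100):
    """Rank gallery images for each sketch query by cosine similarity and report
    Precision@top_k and mean average precision (a sketch of the protocol)."""
    q = query_z / np.linalg.norm(query_z, axis=1, keepdims=True)
    g = gallery_z / np.linalg.norm(gallery_z, axis=1, keepdims=True)
    sims = q @ g.T                                    # (num_queries, num_gallery)
    order = np.argsort(-sims, axis=1)                 # descending similarity
    hits = gallery_labels[order] == query_labels[:, None]
    prec_at_k = hits[:, :top_k].mean(axis=1).mean()
    # Average precision per query, averaged to obtain mAP@all.
    ranks = np.arange(1, hits.shape[1] + 1)
    cum_hits = np.cumsum(hits, axis=1)
    ap = (cum_hits / ranks * hits).sum(axis=1) / np.maximum(hits.sum(axis=1), 1)
    return float(prec_at_k), float(ap.mean())
```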
§.§ Reconstruction on Real-World Noisy Images In this subsection, we assess our technique using blind synthetic and real-world noisy images. The experiments include image segmentation and real-world image denoising. We evaluate the images reconstructed from TDDT representations in terms of segmentation precision and denoising visual quality. Intuitive alternative solutions include cascading an image restoration model to enhance image quality and data augmentation during training; we thus compare our TDDT method with these two types of methods. Datasets We use the DIV2K dataset <cit.> to produce homologous noisy images for training the NR encoder and the RMN translation model. For the image segmentation evaluation, we use a real camera noise model <cit.> to generate noisy sets by adding blind noise to the ADE20k <cit.> test set. For the evaluation on real-world noisy images, we adopt four real-world noisy datasets, namely PolyU-Real <cit.>, NC12 <cit.>, Nam <cit.> and DND <cit.>, in which the noisy images are captured in the real world and the noise characteristics are unknown. Implementation Details Our NR encoder adopts the same structure as that of Pix2Pix <cit.>. In our RMN network, we set the layer number parameters m_1=m_2=4 and m_3=2. After training the NR encoder and the RMN network, we fix them and train a decoder, symmetric to the NR encoder, to reconstruct denoised images for image segmentation as well as for image denoising. For the image segmentation experiments, we add simulated noise to the original validation set of ADE20k to produce the noisy test sets. The parameters of the simulated noise are set the same as <cit.>. The test noise follows a different distribution from that used for training, resulting in a blind evaluation. In the experiments, we utilize pre-trained segmentation models for assessment and fine-tune them using training data augmented with random noise, where the noise models used for data augmentation are the same as those used for training our domain translation model. The tested image segmentation methods include NonLocalNet <cit.>, DANet <cit.>, SETR <cit.> and SegFormer <cit.>. In addition, we compare our TDDT technique to the solution of cascading an image restoration model; the compared image restoration models include Pix2Pix <cit.> and Uformer <cit.>. For the denoising experiments, we compare our TDDT model against BM3D <cit.>, CBDNet <cit.>, Pix2Pix <cit.>, LIR <cit.>, MPRNet <cit.> and Uformer <cit.>. We measure their performance using no-reference image quality metrics, including NIQE <cit.>, NRQM <cit.>, and BIQA <cit.>, because reliable real-world ground truth images are unavailable. Results The quantitative segmentation results are shown in Table <ref> and the qualitative results in Fig <ref>. As shown, the models fine-tuned with data augmentation still suffer significant performance drops when tested on noisy data (the column “Noisy Images”). In contrast, our method improves the noise robustness of all the evaluated models. The denoising results are presented in Table <ref> and Fig <ref>. We observe that recent image restoration models tend to perform well on simulated data yet fail when applied to real-world images, whereas our model achieves superior performance on real-world data despite being trained only on synthetic data. This domain gap is bridged by training the canonical domain of E_ N, and the high-quality image reconstruction benefits from the accurate guided translation of the proposed RMN model. More results are presented in Sec.G of the supplementary material. Ablation Study We ablate the proposed G-Conv module and the proposed two-stage training strategy on the image restoration task. Table <ref> reports the quantitative results. The configuration “Auto-Encoder” denotes an auto-encoder model with the same structure, parameters, and training data as our model. All the models are trained on the DIV2K dataset and tested on noisy images synthetically generated from Set14 <cit.> and BSD100 <cit.>. As expected, the G-Conv and RMN modules significantly increase the domain mapping precision, especially benefiting the perceptual quality.
§ CONCLUSION Although existing deep networks can handle various difficult tasks after training with enough data, they are susceptible to catastrophic performance declines, or even failure, when exposed to unintended noise perturbations that lie outside the training distortion distribution. In this research, to improve the anti-noise capability of deep representations, we present a novel target-guided dual-domain translation method. It involves training a canonical NR domain, an associated NF domain, and a target-guided translation network to yield noise-robust representations for complex, noisy images. Extensive empirical results on both synthetic and real-world noisy datasets collectively demonstrate the scalability and superior anti-noise performance of the proposed TDDT technique.
http://arxiv.org/abs/2307.07050v2
20230706175408
Wasserstein Quantum Monte Carlo: A Novel Approach for Solving the Quantum Many-Body Schrödinger Equation
[ "Kirill Neklyudov", "Jannes Nys", "Luca Thiede", "Juan Carrasquilla", "Qiang Liu", "Max Welling", "Alireza Makhzani" ]
physics.comp-ph
[ "physics.comp-ph", "cs.LG", "physics.chem-ph" ]
Data processing of Visible Emission Line Coronagraph Onboard ADITYA–L1 C. Kathiravan, R. Ramesh August 1, 2023 ====================================================================== Solving the quantum many-body Schrödinger equation is a fundamental and challenging problem in the fields of quantum physics, quantum chemistry, and material sciences. One of the common computational approaches to this problem is Quantum Variational Monte Carlo (QVMC), in which ground-state solutions are obtained by minimizing the energy of the system within a restricted family of parameterized wave functions. Deep learning methods partially address the limitations of traditional QVMC by representing a rich family of wave functions in terms of neural networks. However, the optimization objective in QVMC remains notoriously hard to minimize and requires second-order optimization methods such as natural gradient. In this paper, we first reformulate energy functional minimization in the space of Born distributions corresponding to particle-permutation (anti-)symmetric wave functions, rather than the space of wave functions. We then interpret QVMC as the Fisher–Rao gradient flow in this distributional space, followed by a projection step onto the variational manifold. This perspective provides us with a principled framework to derive new QMC algorithms, by endowing the distributional space with better metrics, and following the projected gradient flow induced by those metrics. More specifically, we propose “Wasserstein Quantum Monte Carlo” (WQMC), which uses the gradient flow induced by the Wasserstein metric, rather than Fisher–Rao metric, and corresponds to transporting the probability mass, rather than teleporting it. We demonstrate empirically that the dynamics of WQMC results in faster convergence to the ground state of molecular systems. § INTRODUCTION Access to the wave function of a quantum many-body system allows us to study strongly correlated quantum matter, starting from the fundamental building blocks. For example, the solution of the time-independent electronic Schrödinger equation provides all the chemical properties of a given atomic state, which have numerous applications in chemistry and materials design. However, obtaining the exact wave function is fundamentally challenging, with a complexity scaling exponentially with the number of degrees of freedom. Various computational techniques have been developed in the past, including compression techniques based on Tensor Networks <cit.>, and stochastic approaches such as Quantum Monte Carlo (QMC) <cit.>. Quantum Variational Monte Carlo (QVMC) <cit.> is a well-known subclass of the latter that can, in principle, be used to estimate the lowest-energy state (i.e. ground state) of a quantum many-body system. The method operates by parameterizing the trial wave function and minimizing the energy of the many-body system w.r.t. the model parameters. The choice of parametric family of the trial wave function is a crucial component of the QVMC framework. Naturally, deep neural networks, being a family of universal approximators, have demonstrated promising results for quantum systems with discrete <cit.>, as well as continuous degrees of freedom <cit.>. However, the optimization process is challenging, especially for rich parametric families of the trial wave functions. This requires the use of advanced optimization techniques that take into account the geometry of the parametric manifold. 
The most common technique used in QVMC is referred to as `Stochastic Reconfiguration' (SR)  <cit.>, and can be seen as the quantum version of Natural Gradient Descent <cit.>. While for large neural networks with up to millions of parameters, efficient and scalable implementations of SR are available <cit.>, it is also possible to use approximate methods such as K-FAC <cit.>. Higher order optimization techniques are considered to be essential to obtain the necessary optimization performance to accurately estimate ground states of quantum many-body Hamiltonians (see e.g.  <cit.>). Therefore, studies of the optimization procedure are an important direction for further development of the QVMC approach. In this paper, we consider the energy minimization dynamics as a gradient flow on the non-parametric manifold of distributions. First, as an example of the proposed methodology, we demonstrate that the imaginary-time Schrödinger equation can be described as the gradient flow under the Fisher–Rao metric on the non-parametric manifold. Then, the QVMC algorithm can be seen as a projection of this gradient flow onto a parametric manifold (see <ref> for details). Second, the gradient flow perspective gives us an additional degree of freedom in the algorithm. Namely, we can choose the metric under which we define the gradient flow. Thus, we propose and study a different energy-minimizing objective function, which we derive as a gradient flow under the Wasserstein metric (or Wasserstein Fisher–Rao metric) <cit.>. In practice, we demonstrate that incorporating the Wasserstein metric into the optimization procedure allows for faster convergence to the ground state. Namely, we demonstrate up to 10 times faster convergence of the variance of the local energy for chemical systems. Intuitively, incorporating the Wasserstein metric regularizes the density evolution by forbidding or regularizing non-local probability mass “teleportation” (as done by Fisher–Rao metric). This might facilitate faster mixing of the MCMC running along with the density updates. § BACKGROUND §.§ Quantum variational Monte Carlo Consider a quantum many-body system subject to the Hamiltonian operator, which we will assume to be of the following form, H = -1/2∇_x^2 + V . where x a given many-body configuration and V is the potential operator. The time-dependent Schrödinger equation determines the wave function ψ(x, t) of the quantum system i / tψ(x,t) = H ψ(x,t) As is often the case, we will target the stationary solutions, for which we focus on solving the time-independent Schrödinger equation Hψ(x) = Eψ(x) where E is the energy of the state ψ. The ground state of a quantum system is obtained by solving the time-independent Schrödinger equation, by targeting the eigenstate ψ of the above Hamiltonian with the lowest eigenvalue (energy) E. Hereby, we must restrict the Hilbert space to wave functions that are antisymmetric under particle permutations in the case of fermionic particles, and symmetric for bosons. The latter takes into account the indistinguishability of the particles. Given the Born density q(x) = |ψ(x)|^2, the energy of a given quantum state can be rewritten in a functional form, E[ψ] = _q(x)[E_loc(x)], E_loc(x) [H ψ](x)/ψ(x) We will focus on the case where the Hamiltonian operator is Hermitian and time-reversal symmetric. 
In this case, its eigenfunctions and eigenvalues are real, and the energy can be recast into a functional of the Born probability density (see also  <cit.>, where the expressions are given in terms of logψ) E[q] = _q(x)[E_loc(x)], E_loc(x) = V(x) -1/4∇_x^2 log q(x) - 1/8∇_x log q(x)^2, under the strong condition that q(x) is the Born probability density derived from an (anti-)symmetric wave function: q(x) = ψ^2(x). The latter will always be tacitly assumed from hereon. The Rayleigh–Ritz principle guarantees that the E[q] is lower-bounded by the true ground-state energy of the system, i.e. E[q] ≥ E_0, if the corresponding wave function ψ is a valid state of the corresponding Hilbert space. Quantum Variational Monte Carlo (QVMC) targets ground states by parametrizing the wavefunction ψ(x,θ) and by minimizing E[q(θ)]. The solution to the minimization problem θ_0 = θ E[q(θ)] is obtained by gradient-based methods using the following expression for the gradient w.r.t. parameters θ ∇_θ E[q(θ)] = _q(x,θ)[ (E_loc(x, θ) - _q(x,θ)[E_loc(x, θ)]) ∇_θlog q(x,θ) ]. In sum, the above leads to an iterative procedure in which Monte Carlo sampling is used to generate configurations from the current trial state q(x, θ) = ψ^2(x, θ), which allows computing the corresponding energy and its parameter gradients, and to update the model accordingly. In practice, the parametric model specifies the density q(x,θ) only up to a normalization constant, i.e., it outputs q̃(x,θ) ∝ q(x,θ). However, the gradient w.r.t. θ does not depend on the normalization constant; hence, throughout the paper, we refer to the model as the normalized density q(x,θ). §.§ Gradient flows under the Wasserstein Fisher–Rao metric In the previous section, we introduced QVMC in terms of Born probability functions and formulated the problem as the minimization of a functional of probability functions constrained to a variational/parametric manifold. The latter is a more common problem often tackled in machine learning, and by forging connections between both fields, we will be able to derive an alternative to QVMC. Gradient Flows In Euclidean space, we can minimize a function f:ℝ^d →ℝ by following the ODE / tx_t=-∇_x f(x_t), which can be viewed as the continuous version of standard gradient descent. Similarly, we can minimize a functional in the space of probability distributions (or in general any Riemannian manifold), by following an ODE on this manifold. However the notion of a gradient on a manifold is more complicated, and relies on the Riemannian metric that the manifold is endowed with. Different Riemannian metrics result in different gradient flows, and consequently different optimization dynamics. For a thorough analysis of gradient flows, we refer the reader to <cit.>. Wasserstein Fisher–Rao gradient flows Consider the space of distributions 𝒫_2 with finite second moment. This space can be endowed with a Wasserstein Fisher–Rao metric with the corresponding distance. In particular, the Wasserstein Fisher–Rao (WFR) distance <cit.> is defined by extending the <cit.> dynamical optimal transport formulation by a term involving the norm of the growth rate g_t, and by accounting for the growth term in the modified continuity equation. Namely, the distance between probability densities p_0 and p_1 is defined as p_0p_1^2 inf_v_t,g_t,q_t∫_0^1 𝔼_q_t(x)[ v_t(x)^2 + λ g_t(x)^2 ] , subj. 
to q_t(x)t = - q_t(x) v_t(x) + g_t(x)q_t(x) , and q_0(x) = p_0(x), q_1(x) = p_1(x) , where v_t(x) is the vector field defining the probability flow, g_t(x) is the growth term controlling the creation and annihilation of the probability mass, and λ is the coefficient balancing the transportation and teleportation costs. Note that by setting one of the terms to zero we get 2-Wasserstein distance (g_t(x) ≡ 0) and Fisher–Rao distance (v_t(x) ≡ 0). In <ref>, we also consider the general case of c-Wasserstein distance, where c is a convex cost function on the tangent space. Given a functional on this manifold, F[q]: 𝒫_2 →ℝ, we can define the gradient flow of the function F under any Riemannian metric including the Wasserstein metric, the Fisher–Rao metric, or the Wasserstein Fisher–Rao metric. For example, the gradient flow that minimizes the functional F[q] under the Wasserstein Fisher–Rao metric is given by the following PDE (which is shown with detailed derivations in <ref>) q_tt(x) = -q_t(x)(-∇_x δ F[q_t]/δ q_t(x))_the continuity equation - 1/λ(δ F[q_t]/δ q_t(x) - _q_t(y)[δ F[q_t]/δ q_t(y)])_growth term q_t(x), where δ F[q]/ δ q is the first-variation of of F with respect to the L_2 metric. The physical explanation of the terms in Eq. (<ref>) is as follows. The continuity equation defines the change of the density when the samples x ∼ q_t(x) follow the vector field v_t(x) = -∇_x δ F[q_t]/δ q_t. The second term of the PDE defines the creation and annihilation of probability mass, and is proportional to the growth field g_t(x) = δ F[q_t]/δ q_t(x) - _q_t(y)[δ F[q_t]/δ q_t(y)]. Note that _q_t[g_t]=0, and so while mass can be “teleported”, the total mass (or probability) will remain constant. The two mechanisms can be considered independently by defining the evolution terms under the 2-Wasserstein and Fisher–Rao metrics respectively, i.e. q_tt(x) = -q_t(x)(-∇_x δ F[q_t]/δ q_t(x)), 2-Wasserstein Gradient Flow, q_tt(x) = - (δ F[q_t]/δ q_t(x) - _q_t(y)[δ F[q_t]/δ q_t(y)]) q_t(x), Fisher–Rao Gradient Flow. It now becomes evident that the stationary condition for all the considered PDEs is ∇_x δ F[q_t]/δ q_t(x) = 0 δ F[q_t]/δ q_t(x) ≡constant . In <ref>, we provide derivations illustrating that <ref> correspond to the gradient flow under the Wasserstein Fisher–Rao, Wasserstein, and Fisher–Rao metrics, respectively, and hence they all minimize F[q]. For detailed analysis, we refer the reader to <cit.>. § METHODOLOGY In <ref>, we first demonstrate that the imaginary-time evolution of the Schrödinger equation can be viewed as a gradient flow under the Fisher–Rao metric. Afterwards, in <ref>, we discuss how a density evolution can be projected to the parametric variational family and show that doing so for the Fisher–Rao gradient flow yields the QVMC algorithm. Taking this perspective, we propose the Wasserstein Quantum Monte Carlo by considering Wasserstein (and Wasserstein Fisher–Rao) gradient flows, followed by the projection onto the parametric manifold (see <ref>). §.§ Imaginary-Time evolution as the gradient flow under the Fisher–Rao metric The ground state of a quantum system can in theory be obtained by imaginary-time evolving any valid quantum state ψ (with a non-vanishing overlap with the true ground state) to infinite times. The state is evolved according to the imaginary-time Schrödinger equation, which defines the energy-minimizing time evolution of the wavefunction ψ_t, and is expressed as the following PDE (which is the Wick-rotated version of Eq. (<ref>), see e.g. 
<cit.>), ψ_t(x)t =   -( H - E[ψ_t] )ψ_t(x), where again q_t(x) = ψ_t^2(x). The last term proportional to the energy E[ψ_t] comes from enforcing normalization (contrary to real-time evolution, imaginary time evolution is non-unitary). <ref> defines the gradient flow of the energy functional 𝔼[q] under the Fisher–Rao metric. The energy functional E[q] has the following derivative δ E[q]/δ q(x) = V(x) -1/4∇_x^2 log q(x) - 1/8∇_x log q(x)^2 = E_loc(x). Thus, the gradient flow under the Fisher–Rao metric is (see Eq. (<ref>)) q_t(x)t = -(E_loc(x) - _q_t(x)[E_loc(x)])q_t(x), which is equivalent (up to a multiplicative constant) to the imaginary-time Schrödinger Equation in Eq. (<ref>) as shown in the complete proof in <ref>. We believe that this result can be derived following the derivations from <cit.>, but not introducing the manifold of parametric distributions. However, considering the evolution of the density on the non-parametric manifold first helps us to derive our method and relating it to QVMC. In the following subsection, we discuss how to project this non-parametric evolution to a parametric manifold. §.§ Following the gradient flow by a parametric model By choosing a metric in the distributional space and following the energy-minimizing gradient flows, we can design various algorithms for estimating the ground state wave function. Indeed, in principle, by propagating the samples or the density according to any gradient flow (e.g., <ref>), we can eventually reach the ground state. However, these dynamics are defined on the non-parametric and infinite-dimensional manifold of distributions, which do not allow tractable computation of log densities, and thus tractable evolution. Therefore, we project these dynamics onto the parametric manifold of our variational family, and follow the projected gradient flows instead, which is tractable. Suppose the current density on the parametric manifold is q_t(x) = q(x,θ) (see <ref>). We first evolve this density using a (non-parametric) gradient flow method (e.g., <ref>) for time Δ t, which will take q_t(x) off the parametric manifold to q_t+Δ t(x). We then have to update current trial model q(x,θ) to match q_t+Δ t(x) enabling us to propagate the density further. In order to do so, we define the optimal update of parameters Δθ^* as the minimizer of the Kullback-Leibler divergence between q_t+Δ t(x) and the distributions on the parametric manifold, i.e. Δθ^* = Δþ Δθ_2=1q_t+Δ t(x)q(x,θ + Δθ) . In practice, we evaluate the parameters update using the expansion of the KL-divergence from the following proposition. For q_t(x) = q(x,θ), the KL-divergence can be expanded as q_t+Δ t(x)q(x,þ+Δþ)=  - 12Δ tΔþ∫ q_t(x)_θlog q(x,þ)   + o(√(Δ t^2 + Δθ^2)) , where ·· denotes the inner product, and should not be confused with the bra-ket notation. See <ref> for the proof. Using this approximation, the optimal update from <ref> of parameters becomes Δθ^* = Δþ Δθ_2=1Δþ-∫ q_t(x)_θlog q(x,þ)∝∫ q_t(x)_θlog q(x,þ) . The following Corollary states that QVMC can be viewed as the projected gradient flow of the energy functional with respect to the Fisher–Rao metric. Consider the Fisher–Rao gradient flow (or imaginary time evolution, which is equivalent, as shown in <ref>). Then, the parameters update (<ref>) matches the gradient of the conventional QVMC loss, i.e. Δθ^* ∝ -_q_t(x)[(E_loc(x, θ) - _q_t(x)[E_loc(x, θ)]) ∇_θlog q(x,θ)]. 
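To make the correspondence concrete, the following sketch evaluates the local energy E_loc defined above by automatic differentiation of log q and forms the Monte Carlo estimator of the QVMC gradient in the corollary. It is a minimal illustration rather than the implementation used in our experiments: the Laplacian is taken as the trace of a full Hessian, which is adequate only for small systems, and `log_q_param`, `potential` and the sample batch `xs` are placeholders to be supplied by the user.

```python
import jax
import jax.numpy as jnp

def local_energy(log_q, x, potential):
    """E_loc(x) = V(x) - 1/4 lap log q(x) - 1/8 ||grad log q(x)||^2."""
    grad_logq = jax.grad(log_q)(x)
    lap_logq = jnp.trace(jax.hessian(log_q)(x))
    return potential(x) - 0.25 * lap_logq - 0.125 * jnp.sum(grad_logq ** 2)

def qvmc_grad(params, log_q_param, xs, potential):
    """Monte Carlo estimator of grad_theta E[q(theta)] over samples xs ~ q(x, theta)."""
    e_loc = jax.vmap(
        lambda x: local_energy(lambda y: log_q_param(params, y), x, potential))(xs)
    centred = e_loc - jnp.mean(e_loc)
    # Score functions grad_theta log q(x, theta), one pytree leaf per parameter group.
    score = jax.vmap(lambda x: jax.grad(log_q_param)(params, x))(xs)
    return jax.tree_util.tree_map(
        lambda s: jnp.mean(centred.reshape((-1,) + (1,) * (s.ndim - 1)) * s, axis=0),
        score)

# Example placeholder potential (an isotropic harmonic trap):
harmonic = lambda x: 0.5 * jnp.sum(x ** 2)
```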
This perspective lays the foundation for deriving our WQMC method in <ref>, by following the Wasserstein or WFR gradient flows rather than Fisher–Rao gradient flows. Natural Gradient Preconditioning In order to update the parametric model q(x,θ), instead of following the update in <ref>, we can exploit the information geometry of the statistical manifold of q(x,θ), and define the update using the Fisher information matrix ℱ_θ Δθ^* =   Δþ Δθ_=1Δþ-∫ q_t(x)_θlog q(x,þ)∝_þ^-1∫ q_t(x)_θlog q(x,þ) , ℱ_θ=  𝔼_q(x,þ)[∂/∂θlog q(x,þ)∂/∂θlog q(x,þ)^⊤]. This update is analogous to the natural gradient update <cit.>. Note that the choice of Fisher information as the metric on the statistical manifold for preconditioning the gradient and updating þ is independent of the choice of metric on the non-parametric Wasserstein manifold (e.g., Wasserstein or Fisher–Rao) for evolving q_t(x). In practice, we use Kronecker-factored approximation of the natural gradient (K-FAC) <cit.>. §.§ Wasserstein quantum Monte Carlo In the previous sections, we formulated imaginary-time evolution governed by the Schrödinger equation as the energy-minimizing gradient flow under the Fisher–Rao metric. Furthermore, we demonstrated that projecting the evolved density to the parametric manifold at every iteration corresponds to the QVMC algorithm. Naturally, we can consider another metric on the (non-parametric) space of distributions, which results in a different gradient flow and corresponds to a different algorithm. Namely, we propose to consider the gradient descent in 2-Wasserstein space as the energy-minimizing density evolution, as introduced in <ref>. The energy-minimizing 2-Wasserstein gradient flow is defined by the continuity equation q_t(x)t = -q_t(x) (-∇_x E_loc(x)) In <ref>, we show that δ E[q]/δ q = E_loc(x). Plugging this into the 2-Wasserstein gradient flow defined in <ref>, yields the result in <ref>. c-Wasserstein Metric This result can be further generalized to the c-Wasserstein metric with any convex cost function c: ℝ^d →ℝ on the tangent space. The c-Wasserstein distance between p_0 and p_1 is defined as follows p_0p_1 inf_v_t, q_t∫_0^1 𝔼_q_t(x)[ c(v_t(x)) ] , subj. to q_t(x)t = -q_t(x) v_t(x) , and q_0 = p_0, q_1 = p_1 . The energy-minimizing c-Wasserstein gradient flow is defined by the following equation q_t(x)t = -q_t(x) ∇ c^*(-2∇_x E_loc(x)) , where c^*(·) is the convex conjugate function of c(·), and ∇ c^*(y) is its gradient at y. See <ref>. <ref> can be viewed as a special case of <ref> where c(·)=·^2. Introducing a different c than L^2 norm translates to a non-linear transformation of the gradient -∇_x E_loc(x). In <ref>, we demonstrate how to choose c such that it corresponds to the coordinate-wise application of tanh to the gradient, which we use in practice. Finally, using <ref> in <ref>, we get the expression for the parameter update, i.e. Δθ^* ∝∫ q_t(x) ∇_θ⟨∇ c^*(-2 ∇_x E_loc(x)),∇_xlog q(x,θ)⟩. Similar to the discussion of the previous section for QVMC, we can precondition the gradient with the Fisher Information Matrix, exploiting the geometry of the parametric manifold. In <ref>, we provide a pseudocode for the proposed algorithms. The procedure follows closely QVMC but introduces a different objective. When using gradients both from <ref>, we follow the gradient flow under the Wasserstein Fisher-Rao metric with the coefficient λ. 
For λ→∞, the cost of mass teleportation becomes infinite and we use only the gradient from <ref>, which corresponds to the gradient flow under the c-Wasserstein metric (we refer to this algorithm as WQMC). For λ→ 0, the cost of mass teleportation becomes negligible compared to the transportation cost and the resulting algorithm becomes QVMC, which uses the gradient from <ref>. In practice, we consider the extreme cases (λ→ 0,∞) and the mixed case λ = 1. § EXPERIMENTS [CODE REPRODUCING EXPERIMENTS IS AVAILABLE AT HTTPS://GITHUB.COM/NECLUDOV/WQMCGITHUB.COM/NECLUDOV/WQMC] For the empirical study of the proposed method, we consider Born–Oppenheimer approximation of chemical systems. Within this approximation, the wave function of the electrons in a molecule can be studied separately from the wave function of the atomic nuclei. Namely, we consider the following Hamiltonian H = -1/2∇_x^2 + ∑_i < j1/x_i - x_j - ∑_i,IZ_I/x_i - X_I + ∑_I < JZ_I Z_J/X_I - X_J , where x_i are the coordinates of electrons, X_I, Z_I are the coordinates and charges of nuclei. The first kinetic term contains derivatives with respect to the electron positions x. Indeed, the positions of the nuclei are given and fixed, and we target the ground state of the electronic wave function ψ(x), which is an explicit function of the electron positions only. Solving the electronic Schrödinger equation is a notoriously difficult task, and is a topic of intense research in quantum chemistry and material sciences. Since electrons are indistinguishable fermions, we restrict the Hilbert space to states ψ that are antisymmetric under electron permutations (see <ref>). This can be achieved by incorporating Slater determinants into the deep neural network, which parametrizes the wave function ψ(x,θ), as proposed in various recent works <cit.>. The density is then given by the Born rule q(x,θ) = ψ(x,θ)^2. For all our experiments, we follow <cit.> and use the “psiformer” architecture together with preconditioning the gradients via K-FAC <cit.>. In our method, we apply several tricks which stabilize the optimization and improve convergence speed. Firstly, we have observed that applying a tanh non-linearity coordinate-wise to the gradient ∇_x E_loc(x) significantly improves convergence speed. This corresponds to a different cost function in the Wasserstein metric, as we discuss in <ref> and <ref>. Also, we remove samples from the batch whose norm ∇_x log q(x,θ) significantly exceeds the median value. Namely, we estimate the deviation from the norm as _q(x,θ)[∇_x log q(x,θ) - median(∇_x log q(x,θ))] and remove samples whose norm exceeds five deviations from the median. When including the gradient from <ref>, we clip the local energy values as proposed in <cit.>, i.e. by estimating the median value and clipping to five deviations from the median, where the deviation is estimated in the same way as for the norm of the gradient. We consider different chemical systems and compare against QVMC as a baseline. We run our novel method with the same architecture and hyperparameters as the baseline QVMC-based approach in <cit.>. For the chemical systems, we consider Be, and B atoms, the Li_2 molecule and the hydrogen chain H_10 from <cit.>. The exact values of energies for Be, B, Li_2 are taken from <cit.>, the exact value of the energy for H_10 is from <cit.>. All the hyperparameters and architectural details are provided in the supplementary material. 
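For illustration, the practical WQMC update described above can be sketched as the gradient of a simple surrogate: the tanh-transformed local-energy gradient field is evaluated at the current, detached density and contracted with the spatial score ∇_x log q(x,θ), and samples with outlying score norms are discarded by the median rule. Constant factors, signs and the filtering threshold below are indicative only, the K-FAC preconditioning is omitted, and none of the names correspond to our released code.

```python
import jax
import jax.numpy as jnp

def filter_by_median(xs, grad_norms, n_dev=5.0):
    """Drop samples whose ||grad_x log q|| deviates from the median by more than
    n_dev mean absolute deviations (the outlier rule described above).
    Boolean indexing gives a dynamic shape, so this runs outside jit."""
    med = jnp.median(grad_norms)
    dev = jnp.mean(jnp.abs(grad_norms - med))
    return xs[grad_norms <= med + n_dev * dev]

def wqmc_surrogate_loss(params, log_q_param, xs, potential):
    """Surrogate whose theta-gradient follows the WQMC update (up to sign/constants):
    E_x[ < tanh(-2 grad_x E_loc(x)), grad_x log q(x, theta) > ],
    with the local-energy gradient treated as a constant (detached) field."""
    def e_loc(x):
        g = jax.grad(lambda y: log_q_param(params, y))(x)
        lap = jnp.trace(jax.hessian(lambda y: log_q_param(params, y))(x))
        return potential(x) - 0.25 * lap - 0.125 * jnp.sum(g ** 2)
    v = jax.vmap(jax.grad(e_loc))(xs)                  # grad_x E_loc at the samples
    v = jax.lax.stop_gradient(jnp.tanh(-2.0 * v))      # tanh-transformed, detached field
    score_x = jax.vmap(lambda x: jax.grad(log_q_param, argnums=1)(params, x))(xs)
    return -jnp.mean(jnp.sum(v * score_x, axis=-1))    # minimize the negative inner product
```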
In <ref>, we demonstrate the convergence plots for the baseline (QVMC) and the proposed methods (WQMC and W(FR)QM, see <ref>). For all the considered systems, both WQMC and W(FR)QMC yield more precise estimations of the ground state energy (the first row of <ref>). To assess convergence, we also monitor the variance of the local energy and the gradient norm of the local energy. As we discuss in <ref>, both metrics must vanish at the ground state. More fundamentally, the variance of the local energy can be shown to vanish for eigenstates of the Hamiltonian <cit.>, referred to as the zero-variance property. First, we point out that obtaining the ground states of the considered molecules with QVMC is challenging, and even with powerful deep-learning architecture, discrepancies remain with the ground state. Since we use existing state-of-the-art architectures as a backbone, our results are also limited by the limitations of the latter <cit.>. Developing novel architectures is out of the scope of this work. However, in <ref>, we clearly observe that both WQMC and W(FR)QMC yield significantly faster convergence of the aforementioned metrics compared to QVMC. In particular, for the larger molecules Li_2 and H_10, we observe that we consistently obtain lower energies within 10k steps (20k for H_10) with a more stable convergence. For the smaller molecules we observe that QVMC obtains lower energies in the first few iterations, but its convergence slows down significantly, after which our approach steadily yields improved energies below QVMC. Overall, our experiments demonstrates that taking into account the Wasserstein metric allows for faster convergence to accurate approximations of the ground state. See the final metrics in <ref>. § DISCUSSION AND CONCLUSION Conclusion In the current paper, we propose a novel approach to solving the quantum many-body Schrödinger equation, by incorporating the Wasserstein metric on the space of Born distributions. Compared to the Fisher–Rao metric, which allows for probability mass “teleportation”, the Wasserstein metric constrains the evolution of the density to local changes under the locally-optimal transportation plan, i.e., following fluid dynamics. This property is favorable when the evolution of the parametric model is accompanied by the evolution of samples (performed by an MCMC algorithm). Indeed, by forbidding or regularizing non-local mass “teleportation” in the density change, one prevents the appearance of distant modes in the density model, which would require longer MCMC runs for proper mixing of the samples. In practice, we demonstrate that following the gradient flow under the Wasserstein (or Wasserstein Fisher–Rao) metric results in better convergence to the ground state wave function. This is expected to be due to our proposed loss, which takes into account the gradient of the local energy and achieves its minimum when the norm of the gradient vanishes, therefore explicitly minimizing the norm of the local energy gradient. We believe that our new theoretical framework for solving the time-independent Schrödinger equation for time-reversal symmetric Hamiltonians based on optimal transport will open new avenues to develop improved numerical methods for quantum chemistry and physics. 
Connection to Energy-Based and Score-Based Generative Models The developed ideas of this paper, i.e., projecting gradient flows under different metrics onto a parametric family, can be extended to generative modeling by swapping the energy functional with the KL-divergence. More precisely, as we show in <ref>, using the KL-divergence as our objective functional, the Fisher–Rao gradient flow yields energy-based training scheme, while the 2-Wasserstein gradient flow corresponds to the score-matching, which is used for training diffusion generative models. § ACKNOWLEDGEMENT The authors thank Rob Brekelmans for helpful discussions. J.N. was supported by Microsoft Research. J.C. acknowledges support from the Natural Sciences and Engineering Research Council (NSERC), the Shared Hierarchical Academic Research Computing Network (SHARCNET), Compute Canada, and the Canadian Institute for Advanced Research (CIFAR) AI Chairs program. A.M. acknowledges support from the Canada CIFAR AI Chairs program. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute <www.vectorinstitute.ai/#partners>. icml2023 § GRADIENT FLOWS UNDER WASSERSTEIN FISHER–RAO METRIC We, first, remind the concept of the functional derivative. The change of the functional F[q]: 𝒫_2 →ℝ along the direction h can be expressed as F[q + h] = F[q] + F[h] + o(h), F[h]=∫ h(x) δ F[q]/δ q(x)_derivative. Consider a change of the density in time, the change of the functional can be defined through the differential as F[q_t + Δ t q_tt] = F[q_t] + Δ t· F[q_tt] + o(h), F[q_tt]=∫ q_t(x)tδ F[q_t]/δ q_t(x). In particular, we have / t F[q_t] = F[q_tt]=∫ q_t(x)tδ F[q_t]/δ q_t(x). §.§ Minimizing movement scheme for 2-Wasserstein distance and Kullback-Leibler divergence Gradient flow under W_2 Consider the following minimizing movement scheme (MMS) inf_q' F[q'] - F[q] + 1/2Δ tqq', where the change of the density is restricted to the continuity equation, i.e., q_tt = -q_t(x) v_t(x), and q'(x) = q(x) -Δ tq(x) v(x) + o(Δ t). Using the static formulation of W_2 distance, we have qq'^2 = ∫ q(x)x - T^*(x)^2 = Δ t^2∫ q(x)v^*(x)^2, where T^*(x) is the optimal transportation plan, and v^*(x) is the corresponding optimal gradient field. Thus, we can rewrite the MMS problem as inf_v F[q] - Δ t ∫ q(x) v(x)δ F[q_t]/δ q_t(x) - F[q] + Δ t/2∫ q(x)v(x)^2, inf_v∫ q(x)v(x)∇_x δ F[q_t]/δ q_t(x) + 1/2∫ q(x)v(x)^2, inf_v∫ q(x)v(x) + ∇_x δ F[q_t]/δ q_t(x)^2. From the last optimization problem, we have v(x) = - ∇_x δ F[q_t]/δ q_t(x). Gradient flow under KL Consider the following minimizing movement scheme (MMS) inf_q' F[q'] - F[q] + 1/2Δ tq'q, where the change of the density is restricted to the following weighting scheme q_tt =  -g_t(x)q_t(x), hence , q'(x) =  q_t(x) - Δ t q_t(x)g_t(x) + o(Δ t), and log q'(x) =  log q_t(x) - Δ t g_t(x) + Δ t^2/2^2log q_t(x)t^2 + o(Δ t^2). The KL-divergence is then q'q =  ∫ q'(x)( - Δ t g_t(x) + Δ t^2/2^2log q_t(x)t^2) + o(Δ t^2) =  - Δ t∫ q_t(x)g_t(x) + Δ t^2/2∫ q_t(x)^2log q_t(x)t^2   + Δ t^2 ∫ dx q_t(x)g_t(x) + o(Δ t^2) =  Δ t^2/2∫ q_t(x)g_t^2(x) + o(Δ t^2). In the last equation we are using the normalization condition, i.e., ∫ q_t(x)t =∫ g_t(x)q_t(x) = 0. 
Thus, we can rewrite the MMS problem as inf_g F[q] + Δ t ∫ dx g(x)q(x) δ F[q_t]/δ q_t(x) - F[q] + Δ t/4∫ dx q(x)g(x)^2, inf_g∫ dx q(x)g(x) (δ F[q_t]/δ q_t(x) - _q(y)[δ F[q_t]/δ q_t(y)]) + 1/4∫ dx q(x)g(x)^2, inf_g∫ dx q(x)[g(x) + 2(δ F[q_t]/δ q_t(x) - _q(y)[δ F[q_t]/δ q_t(y)])]^2. From the last optimization problem, we have g(x) = - 2(δ F[q_t]/δ q_t(x) - _q(y)[δ F[q_t]/δ q_t(y)]). Note, however, that ·· is not the same as the Fisher–Rao metric. The derivations here demonstrate that the Fisher–Rao gradient flow can be derived as the MMS scheme with the KL-divergence. §.§ Minimizing movement scheme for the Wasserstein Fisher–Rao metric Consider the Wasserstein Fisher–Rao distance p_0p_1^2 inf_v_t,g_t,q_t∫_0^1 𝔼_q_t(x)[ v_t(x)^2 + λ g_t(x)^2 ] , subj. to q_t(x)t = - q_t(x) v_t(x) + g_t(x)q_t(x) , q_0(x) = p_0(x), q_1(x) = p_1(x) . The minimizing movement scheme (MMS) for this distance is inf_q' F[q'] - F[q] + 1/2Δ tqq'^2, where the change of the density is given by the continuity equation with the growth term q_tt = -q_t(x) v_t(x) + g_t(x)q_t(x). For close enough q and q', qq'^2 can be estimated via the metric derivative μ_t'^2 that is defined as μ_t'^2 = (lim_Δ t→ 0q_tq_t+Δ t/Δ t)^2, hence, qq'^2 = Δ t^2μ_t'^2 = Δ t^2 ∫ dx q(x)[v^*(x)^2 + λ g^*(x)^2]. Thus, the MMS problem can be written as inf_g,v  F[q] - Δ t ∫ dx q(x) v(x)δ F[q_t]/δ q_t(x) + ∫ dx q(x)g(x) δ F[q_t]/δ q_t(x) - F[q] + Δ t/2∫ dx q(x)[v(x)^2 + λ g(x)^2], inf_g,v  ∫ dx q(x)v(x)∇_x δ F[q_t]/δ q_t(x) + ∫ dx q(x)g(x) (δ F[q_t]/δ q_t(x) - _q(y)[δ F[q_t]/δ q_t(y)]) + 1/2∫ dx q(x)[v(x)^2 + λ g(x)^2], inf_g,v  ∫ dx q(x)v(x) + ∇_x δ F[q_t]/δ q_t(x)^2 + λ∫ dx q(x)(g(x) + 1/λ(δ F[q_t]/δ q_t(x) - _q(y)[δ F[q_t]/δ q_t(y)]) )^2. From the last optimization problem, we have v(x) = - ∇_x δ F[q_t]/δ q_t(x), g(x) = - 1/λ(δ F[q_t]/δ q_t(x) - _q(y)[δ F[q_t]/δ q_t(y)]). Note that different values of λ result in different gradient flows. For instance, considering the limit λ→∞ we have g(x) → 0 and <ref> just becomes W_2 gradient flow, which is natural since we have an infinite penalty for the mass teleportation in our metric. Setting λ→ 0 requires some additional consideration, since then the growth term explodes, and all the mass will be teleported without any cost. Indeed, for λ→ 0, our metric does not penalize for the mass teleportation at all, but our change of density (<ref>) is still able to teleport mass, hence, it will be doing so “for free”. §.§ PDEs demonstrating the convergence Consider the change of the density q_t under the continuity equation with the vector field v_t(x) and the growth term g_t(x) q_tt(x) = -q_t(x)v_t(x) + g_t(x)q_t(x). Thus, the change of the functional F[q] is / t F[q_t] =   -∫ q_t(x)v_t(x)δ F[q_t]/δ q_t(x) + ∫ q_t(x)g_t(x) δ F[q_t]/δ q_t(x) =   ∫ q_t(x)v_t(x)∇_x δ F[q_t]/δ q_t(x) + ∫ q_t(x)g_t(x) δ F[q_t]/δ q_t(x). From this equation, we can clearly see that v_t(x) and g_t(x) derived in the previous section minimize F[q]. Indeed, taking v_t(x) = -∇_x δ F[q_t]/δ q_t(x), g_t(x) = -1/λ(δ F[q_t]/δ q_t(x) - _q_t(y)[δ F[q_t]/δ q_t(y)]), we get / t F[q_t] =   - ∫ q_t(x)∇_x δ F[q_t]/δ q_t(x)^2 - 1/λ∫ q_t(x)(δ F[q_t]/δ q_t(x) - _q_t(y)δ F[q_t]/δ q_t(y))^2   - 1/λ∫ q_t(x)(δ F[q_t]/δ q_t(x) - _q_t(y)[δ F[q_t]/δ q_t(y)])_=0_q_t(z)[δ F[q_t]/δ q_t(z)] ≤ 0. Note that the considered growth term preserves the normalization of the density, i.e., ∫ q_tt(x) = ∫ g_t(x)q_t(x) = -∫ q_t(x) (δ F[q_t]/δ q_t(x) - _q_t(y)[δ F[q_t]/δ q_t(y)]) = 0. 
Thus, our functional F[q] decreases when the density q_t evolves according to the PDE q_tt(x) = -q_t(x)(-∇_x δ F[q_t]/δ q_t(x))_the continuity equation - 1/λ(δ F[q_t]/δ q_t(x) - _q_t(y)[δ F[q_t]/δ q_t(y)])_growth term q_t(x), and reaches its stationary point when ∇_x δ F[q_t]/δ q_t(x)^2 = 0, i.e., δ F[q_t]/δ q_t(x)≡constant. Note, that in the same way we can consider the continuity equation and the growth term separately, which defines the gradient flows under 2-Wasserstein and Fisher–Rao metrics respectively. The corresponding PDEs are q_tt(x) =  -q_t(x)(-∇_x δ F[q_t]/δ q_t(x)), q_tt(x) =  - (δ F[q_t]/δ q_t(x) - _q_t(y)[ δ F[q_t]/δ q_t(y)]) q_t(x). § IMAGINARY-TIME SCHRÖDINGER EQUATION AS THE GRADIENT FLOW UNDER FISHER–RAO METRIC <ref> defines the gradient flow of the energy functional 𝔼[q] under the Fisher–Rao metric. First, we derive the functional derivative of the energy functional E[q]. We denote the differential of the functional F(q) along the direction h as F(q)[h] = ∫ h·δ F[q]/δ q, where δ F[q]/δ q is the functional derivative. Consider the energy functional E[q] = ∫ q [V -1/4∇_x^2 log q - 1/8∇_x log q^2]. The functional derivative of this functional is as follows E(q)[h] =  E(q+· h)|_ = 0 =  ∫ h [V -1/4∇_x^2 log q - 1/8∇_x log q^2] - ∫ q[1/4∇_x^2 h/q + 1/4⟨∇_x log q, ∇_x h/q⟩]. For the last term, we do integration by parts and get ∫ q[1/4∇_x^2 h/q + 1/4⟨∇_x log q, ∇_x h/q⟩] = -1/4∫ ∇_x q∇_x h/q + 1/4∫ ∇_x q∇_x h/q = 0. Thus, we have E(q)[h] = ∫ h [V - 1/4∇_x^2 log q - 1/8∇_x log q^2]_δ E[q]/δ q, and we see that the derivative coincides with the local energy, i.e., δ E[q]/δ q(x) = E_loc(x) = V(x) - 1/4∇_x^2 log q(x) - 1/8∇_x log q(x)^2. Using the results from <ref>, the energy-minimizing gradient flow under Fisher–Rao metric is q_t(x)t =   -[E_loc(x) - E[q_t]]q_t(x). Second, we derive the PDE for the time-evolution of the density q_t under the imaginary-time Schrödinger equation. ψ_tt =   1/2∇_x^2 ψ_t - (V-E[q_t])ψ_t 2ψ_tψ_tt =   ψ_t∇_x^2 ψ_t - 2(V-E[q_t])ψ_t^2 q_tt =   ψ_t∇_x^2 ψ_t - 2(V-E[q_t])q_t Using the identity ψ∇_x^2 ψ =   ψψ∇_xlog|ψ| = ψ∇_xψ∇_xlog|ψ| + ψ^2∇_x^2log|ψ| =   1/4∇_x q∇_xlog q + 1/2q∇_x^2log q = 1/4q∇_x log q^2 + 1/2q∇_x^2log q, we have q_tt =   -2[V - 1/4∇_x^2log q - 1/8∇_x log q_t^2 - E[q_t]]q_t q_t(x)t =   -2[E_loc(x) - E[q_t]]q_t(x), which is equivalent to <ref>. § FOLLOWING THE GRADIENT FLOW BY A PARAMETRIC MODEL For q_t(x) = q(x,θ), the KL-divergence can be expanded as q_t+Δ t(x)q(x,þ+Δþ)=  - 12Δ tΔþ∫ q_t(x)_θlog q(x,þ)   + o(√(Δ t^2 + Δθ^2)) We have q_t(x)q(x,þ) =  ∫ q_t(x)logq_t(x)q(x,þ) þq_t(x)q(x,þ) =   -∫ q_t(x)þlog q(x,þ) =  0 q_t(x)=q(x,þ) -∫ q_t(x)þlog q(x,þ) q_t(x) q(x,þ) q_t(x)q(x,þ) =  ∫ q_t(x)log q_t(x)-∫ q_t(x)log q(x,þ) =  ∫ q_t(x)log q_t(x)+∫ q_t(x)log q_t(x)   -∫ q_t(x)log q(x,þ) =  ∫ q_t(x)log q_t(x)+∫ q_t(x)   -∫ q_t(x)log q(x,þ) =  ∫ q_t(x)[1+logq_t(x)q(x,þ)] =  0 q_t(x)=q(x,þ) ∫ q_t(x)[1+logq_t(x)q(x,þ)] q_t(x) q(x,þ) þtq_t(x)q(x,þ) =  -∫ q_t(x)log q(x,þ) Thus at q_t(x)=q(x,þ) we have q_t+Δ t(x)q(x,þ + Δþ) ≈- 12Δ tΔþ∫ q_t(x)_θlog q(x,þ) . § ANALOGIES TO GENERATIVE MODELING LITERATURE To draw a connection to generative models literature, let us consider the KL-divergence as an objective to minimize (instead of the energy), i.e. F[q] = p(x)q(x), δ F[q]/δ q = -p(x)/q(x), where p(x) is the data distribution given empirically. Thus, we have two PDEs that define gradient flows of this functional. 
Using equations <ref>, we have q_tt(x) = (p(x)/q_t(x) - _q_t(y)[p(y)/q_t(y)]) q_t(x) = p(x) - q_t(x), Fisher–Rao Gradient Flow, q_tt(x) = -q_t(x)∇_x p(x)/q_t(x), 2-Wasserstein Gradient Flow. Having the functional minimizing PDEs, we can define the corresponding loss functions using <ref> and <ref>. For the Fisher–Rao gradient flow, we have Δθ^*_FR = -_p(x)∇_θlog q(x,θ) + _q_t(x)∇_θlog q(x,θ). Remember that in <ref> we use q_t(x) = q(x,θ) as the density equal to the model density but detached from the parameters θ. Hence, we have Δθ^*_FR = -∇_θ_p(x)log q(x,θ), which corresponds to the conventional energy-based models training <cit.>. For the 2-Wasserstein gradient flow, denoting the detached density as q_t(x) = q(x,θ), we have Δθ^*_W_2 =  -∇_θ∫ dx q_t(x)∇_x p(x)/q_t(x)∇_xlog q(x,θ) =  -∇_θ∫ ∇_x p(x)∇_xlog q(x,θ) + ∇_θ∫ p(x) ∇_xlog q_t(x)∇_xlog q(x,θ) =  ∇_θ1/2_p(x)∇_x log p(x) - ∇_x log q(x,θ)^2, which corresponds to the score-matching objective <cit.>. This objective is actively used in the diffusion-based generative models <cit.>. § C-WASSERSTEIN GRADIENT FLOW c-Wasserstein distance with the convex cost function c:ℝ^d →ℝ is defined p_0p_1 = inf_π∈Γ(p_0,p_1)∫π(x,y) c(x-y) , where Γ(p_0,p_1) is the set of all possible couplings of p_0 and p_1. The dynamic formulation of this distance is the following p_0p_1 inf_v_t, q_t∫_0^1 𝔼_q_t(x)[ c(v_t(x)) ] , subj. to q_t(x)t = - q_t(x) v_t(x) , and q_0 = p_0, q_1 = p_1 . The energy-minimizing c-Wasserstein gradient flow is defined by the following PDE q_t(x)t = -q_t(x) ∇ c^*(-2∇_x E_loc(x)) , where c^*(·) is the convex conjugate function of c(·). The movement minimizing scheme for ·· is the following optimization problem inf_q' F[q'] - F[q] + 1/2Δ tqq'. Assuming that the density changes according to the continuity equation q' = q - Δ t∇_xq(x) v(x), and Δ t is small enough so that v(x) defines the optimal transportation plan, we have inf_v   F[q] - Δ t∫q(x) v(x)δ F[q]/δ q(x) - F[q] + 1/2Δ tΔ t^2_q(x)c(v(x)) =  Δ tinf_v∫ q(x) v(x)∇_xδ F[q]/δ q(x) + 1/2_q(x)c(v(x)) =  1/2Δ tinf_v∫ q(x) [c( v(x)) - v(x)- 2∇_xδ F[q]/δ q(x)] =   -1/2Δ t∫ q(x) c^*(-2∇_xδ F[q]/δ q(x)) and the infimum is achieved at v(x) = ∇ c^*(-2∇_xδ F[q]/δ q(x)), which gives the formula for the vector field. Using the energy gradient from <ref> δ E[q]/δ q(x) = E_loc, we get the result. Coordinate-wise application of tanh to the vector field, i.e. q_t(x)t = -q_t(x) tanh(-∇_x E_loc(x)) , corresponds to gradient descent with c-Wasserstein distance, where c:ℝ^d →ℝ is the following cost function c(x) = ∑_i^d 1/2((x_i+1) log (x_i+1) + (1-x_i) log (1-x_i)) - dlog 2. Consider c^*(x) = ∑_i log(exp(x_i) + exp(-x_i)). It corresponds to applying hyperbolic tangent non-linearity coordinate-wise to the vector field field, i.e., ∂_i c^*(x) = exp(x_i) - exp(-x_i)/exp(x_i) + exp(-x_i) = tanh(x_i). The corresponding cost function c(x) is the following c(x) =  sup_y xy - c^*(y) =  sup_y ∑_i^d (x_i y_i - log(exp(y_i) + exp(-y_i))) =  sup_y ∑_i^d ((x_i+1) y_i - log(exp(2y_i) + 1)) //y_i = 1/2log1+x/1-x =  ∑_i^d ((x_i+1) 1/2log(1+x_i/1-x_i) - log(1+x_i/1-x_i + 1)) =  ∑_i^d 1/2((x_i+1) log (x_i+1) + (1-x_i) log (1-x_i)) - dlog 2.
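As a quick numerical sanity check of the Legendre pair derived above, the following snippet verifies that the supremum defining the conjugate is attained at x = tanh(y) and that the numerical gradient of c^*(y) matches tanh(y) coordinate-wise; it is purely illustrative.

```python
import numpy as np

# Conjugate pair from Appendix C:
# c(x)  = sum_i 0.5*((1+x_i)log(1+x_i) + (1-x_i)log(1-x_i)) - d*log(2),  x in (-1, 1)^d
# c*(y) = sum_i log(exp(y_i) + exp(-y_i)),  with grad c*(y) = tanh(y).
def c(x):
    return np.sum(0.5 * ((1 + x) * np.log1p(x) + (1 - x) * np.log1p(-x))) - x.size * np.log(2)

def c_star(y):
    return np.sum(np.logaddexp(y, -y))

rng = np.random.default_rng(0)
y = rng.normal(size=4)
x_star = np.tanh(y)                                             # maximiser of <x, y> - c(x)
print(np.allclose(np.dot(x_star, y) - c(x_star), c_star(y)))    # True
eps = 1e-6
num_grad = np.array([(c_star(y + eps * e) - c_star(y - eps * e)) / (2 * eps)
                     for e in np.eye(y.size)])
print(np.allclose(num_grad, np.tanh(y), atol=1e-5))             # True
```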
http://arxiv.org/abs/2307.00568v1
20230702133028
Quantum interference between quasi-2D Fermi surface sheets in UTe2
[ "T. I. Weinberger", "Z. Wu", "D. E. Graf", "Y. Skourski", "A. Cabala", "J. Pospisil", "J. Prokleska", "T. Haidamak", "G. Bastien", "V. Sechovsky", "G. G. Lonzarich", "M. Valiska", "F. M. Grosche", "A. G. Eaton" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.str-el" ]
These authors contributed equally to this work. These authors contributed equally to this work. Cavendish Laboratory, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, United Kingdom National High Magnetic Field Laboratory, Tallahassee, Florida, 32310, USA Hochfeld-Magnetlabor Dresden (HLD-EMFL), Helmholtz-Zentrum Dresden-Rossendorf, Dresden, 01328, Germany Charles University, Faculty of Mathematics and Physics, Department of Condensed Matter Physics, Ke Karlovu 5, Prague 2, 121 16, Czech Republic Cavendish Laboratory, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, United Kingdom Charles University, Faculty of Mathematics and Physics, Department of Condensed Matter Physics, Ke Karlovu 5, Prague 2, 121 16, Czech Republic [email protected] Cavendish Laboratory, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, United Kingdom UTe_2 is a promising spin-triplet superconductor candidate for which high quality samples with long mean free paths have recently become available, thereby enabling quantum oscillation measurements to probe the Fermi surface of this material. The dimensionality of Fermi surface sections of a triplet superconductor can have important implications regarding the topological properties of the superconductivity. For example, UTe_2 has been proposed to possess a chiral superconducting order parameter, which could result in the formation of topologically protected Majorana surface states. Here, we report the observation of oscillatory components in the magnetoconductance of UTe_2 at high magnetic fields. We find that these oscillations are very well described by quantum interference between quasiparticles traversing semiclassical trajectories spanning magnetic breakdown networks. Our observations are fully consistent with a quasi-2D model of this material's Fermi surface based on prior dHvA-effect measurements. Our results indicate that UTe_2 – which exhibits incredibly complex physical properties – possesses a remarkably simple Fermi surface consisting exclusively of two quasi-2D cylindrical sections. Quantum interference between quasi-2D Fermi surface sheets in UTe_2 A. G. Eaton August 1, 2023 =================================================================== Young's double slit experiment represents a powerful example of the wave-particle duality of photons <cit.>. A century later Davisson and Germer observed a similar phenomenon involving the quantum mechanical interference of a beam of electrons incident on a crystalline target <cit.>. In the solid state, superconducting quantum interference devices provide exceptionally accurate measurements of magnetic flux via diffraction-modulated interferometry <cit.>. For the case of normal metals, the manifestation of quantum interference (QI) effects in the magnetoconductance was first predicted by Shiba and Fukuyama <cit.>, and soon thereafter experimentally realized by Stark and Friedberg in their measurements of the magnetoresistance of magnesium <cit.>. The concept of the Stark interferometer is premised on interference between semiclassical quasiparticle trajectories across magnetic breakdown (MB) networks connecting separate Fermi surface (FS) sections, yielding oscillations in the conductivity that are periodic in inverse magnetic field strength <cit.>. 
Since the seminal experiments by Stark and coworkers, quantum interference oscillations (QIOs) have been observed in a variety of materials <cit.> including, in particular, a number of organic metals with quasi-2D (Q2D) FSs consisting primarily of cylindrical pockets <cit.>. Unlike quantum oscillations (QOs) from the dHvA- or SdH-effects, in which phase coherence and Landau quantization of quasiparticles traversing orbits corresponding to closed FS sections thereby provide a direct measurement of the FS <cit.>, QIOs only yield an indirect probe of the FS, as their frequencies correspond to k-space orbits spanning separate FS sections. Therefore, QIOs are only observed in materials in which the k-space separation of FS sections is sufficiently small for quasiparticles to tunnel between FS sheets in accessible magnetic field strengths <cit.>. It is important to note that QI is exclusively a kinetic effect and is thus observable in the electrical transport – unlike the dHvA-effect, QIOs do not correspond to an oscillatory component of the free energy, therefore QI effects cannot be observed in bulk thermodynamic properties such as the magnetization <cit.>. Here, we report the observation of QIOs at high magnetic fields in contactless resistivity measurements of the heavy fermion actinide metal UTe_2. This material has recently showed promising signs of being a spin-triplet superconductor <cit.> – despite its magnetic groundstate being paramagnetic rather than ferromagnetic, as is the case for the analogous compounds UGe_2, URhGe and UCoGe <cit.>. Evidence indicating triplet pairing in UTe_2 comes from a number of sources including a negligible change in the NMR Knight shift on cooling through T_c <cit.> along with anisotropic upper critical fields that far exceed the Pauli limit for singlet pairing in all directions <cit.>. Recent advancements in the growth procedure of single crystal UTe_2 specimens has led to a marked enhancement in crystalline quality, enabling the observation of QOs from the dHvA-effect <cit.>. The angular profile of the dHvA data is indicative of a relatively simple Q2D FS, consisting of one electron-type and one hole-type cylinder, each hosting quasiparticles of heavy effective masses ∼ 40 m_e <cit.>. UTe_2 single crystals were grown by a molten salt flux technique <cit.> using the methodology detailed in ref. <cit.>. This technique has been shown to yield high quality specimens of T_c≈ 2.1 K with long mean free paths of the order of 100 nm <cit.>. Contactless resistivity measurements were performed in static fields to 41.5 T at the National High Magnetic Field Lab, Tallahassee, Florida, using the tunnel diode oscillator (TDO) technique <cit.>; similar measurements were obtained in pulsed fields to 70 T at the Hochfeld-Magnetlabor, HZDR, Dresden, using the proximity detector oscillator (PDO) technique <cit.>. Figure <ref> shows the background-subtracted TDO signal (Δ f_TDO) for magnetic field oriented 8 away from the crystalline c-axis towards the a-axis (θ_c = 8). The FFT of the Δ f_TDO data reveals four clear frequency branches, which we label as -. Notably, the FFT spectra at θ_c = 8 of the TDO signal is very different to the spectra we observed in our prior dHvA study at the same angle (ref. <cit.>), implying that these are not QOs stemming from the SdH-effect. Furthermore, the amplitude of dHvA QOs diminished by almost an order of magnitude between 19 mK and 200 mK – whereas here the signal is large and very well resolved at 400 mK. 
These observations indicate that the oscillations in f_TDO are likely QIOs not QOs, as QIOs generally correspond to reciprocal space areas constructed from sums and differences between FS sections, and often exhibit effective masses much lower than those of dHvA and SdH QOs <cit.>. Using our FS model from ref. <cit.>, we illustrate in Figs. <ref> & <ref> how the frequencies of the - FFT peaks correspond to k-space areas between the cylindrical Fermi sheets, which are centred at the centre and corners of the first Brillouin zone (BZ). Each of these frequency components can thus be well understood as coming from QI between two quasiparticles – one making two orbits around a FS cylinder, and the other traversing a MB network between two cylinders of the same carrier type. To show this, we consider the generalized theory of MB orbits given by Kaganov and Slutskin <cit.>. In a magnetic field B the oscillatory component of a kinetic coefficient, such as the electrical conductivity, is composed of combinatory harmonics of the form ∑_λ,λ^'exp[i ( ϕ_λ - ϕ_λ^')] = ∑_λ,λ^'exp( ic ħ/eB𝒜_λ,λ^') where λ and λ^' are the two semiclassical quasiparticle paths that share a common start and end point, enclosing between them an area in reciprocal space of 𝒜_λ,λ^' [Note that the calculation of 𝒜_λ,λ^' is dependent on both the number and the direction of the trajectories included in λ and λ^']. The change in phase of the semiclassical quasiparticle wave packet around path λ is simply ϕ_λ = ∮_λ k_y dk_x <cit.>. Take for example the area 𝒜_ shaded in Fig. <ref>, which sits at the corner of the first BZ. Writing the area of the hole-type FS cylinder as 𝒜_h^+, we can see that the area 𝒜_ is equal to the difference of the areas enclosed by the paths λ = ACDEA and λ^' = ABABA as 𝒜_𝒜𝒞𝒟ℰ𝒜 - 𝒜_ABABA = (2𝒜_h^+ + 𝒜_) - 2𝒜_h^+ = 𝒜_. Similarly, areas corresponding to the , and frequency components are formed by QI between the quasiparticle trajectories traced in Fig. <ref> [Each of 𝒜_λ = {𝒜_, 𝒜_, 𝒜_, 𝒜_} have two distinct QI paths corresponding to them, each requiring only 4 instances of MB, which we label as λ_1,2]. The probability of a quasiparticle traversing a path depends on the number of MB tunnelling events (each of probability p) and Bragg reflections (each of probability q) that are contained within the path. Expressing p =√(P) and q = i√(( 1-P )), where P = exp(-B_0/B) and B_0 denotes the breakdown field <cit.>, the probabilities for quasiparticles to traverse the paths λ = ACDEA and λ^' = ABABA, corresponding to the frequency in Fig. <ref>, are therefore q^4 p^4exp(i ϕ_λ) and q^8exp(i ϕ_λ^'), respectively. Due to this exponentially suppressed tunnelling probability – which necessitates the application of high magnetic fields – we limit our discussion just to the lowest order relevant networks as depicted in Fig. <ref>, each of which requires only 4 instances of MB. By Eqn. <ref>, we can see that the probability of quasiparticles traversing the paths in Fig. <ref> will involve oscillating terms including some proportional to cos[ϕ_λ-ϕ_λ^'] = cos[2π (2f_h^+ + f_ - 2f_h^+)/B] = cos[2π f_/B], which will contribute to the (real part of the) conductivity. Furthermore, in the low temperature limit (with phonons frozen out), the temperature dependence of QIOs simply follows the Lifshitz-Kosevich theory <cit.>. 
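Two damping factors control the visibility of such an interference term: the magnetic-breakdown weighting of the semiclassical paths, built from the tunnelling and Bragg-reflection amplitudes p = √(P) and q = i√(1-P) introduced above, and the Lifshitz–Kosevich thermal factor R_T = X/sinh(X) with X = 2π²k_B T m*/(ħ e B), which governs the temperature decay of the oscillation amplitude. The sketch below evaluates both for illustrative parameter values; the breakdown field B_0 = 30 T and the masses of 5 m_e and 35 m_e are placeholders rather than quantities fitted to the present data, and the way the two factors are combined into a measured amplitude depends on the specific interference network.

import numpy as np

# Illustrative parameters (placeholders, not values fitted to the UTe2 data).
B0 = 30.0           # breakdown field (tesla)
m_star = 5.0        # effective (difference) mass in units of the electron mass
LK_CONST = 14.69    # 2*pi^2*k_B*m_e/(e*hbar) in tesla/kelvin

def path_weight(B, n_tunnel=4, n_bragg=4):
    """Magnitude of the amplitude of a semiclassical path containing n_tunnel
    magnetic-breakdown events (each contributing sqrt(P)) and n_bragg Bragg
    reflections (each contributing sqrt(1-P)), with P = exp(-B0/B)."""
    P = np.exp(-B0 / B)
    return P ** (n_tunnel / 2) * (1.0 - P) ** (n_bragg / 2)

def lk_thermal_factor(T, B, m=m_star):
    """Standard Lifshitz-Kosevich thermal damping R_T = X/sinh(X),
    with X = LK_CONST * m * T / B."""
    X = LK_CONST * m * T / B
    return X / np.sinh(X)

B = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
print("B (T):                  ", B)
print("weight of a 4-MB path:  ", path_weight(B).round(3))

T = np.array([0.4, 1.0, 2.0, 4.0])     # kelvin
print("R_T at 40 T for m* = 5: ", lk_thermal_factor(T, 40.0).round(3))
print("R_T at 40 T for m* = 35:", lk_thermal_factor(T, 40.0, m=35.0).round(3))

The printout makes the two qualitative points used in the discussion below: the breakdown weight collapses exponentially at low field, and a light difference mass survives to far higher temperature than a heavy one.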
The effective quasiparticle mass is proportional to the dependence of the phase on the electron energy, E_k: ∂(ϕ_λ-ϕ_λ^')/∂ E_k = 2π c/e ħ B (m_λ^* - m_λ^'^*) where m_i^* denotes the effective mass of path i <cit.>. Note that it is the difference in the effective masses of the two interfering paths that determines the effective mass of QIOs – thus enabling QIOs to be observed to much higher temperatures than QOs from the dHvA- and SdH-effects <cit.>. Figure <ref> shows that the and frequencies in the QIO spectra of UTe_2 possess effective masses (≈ 5 m_e) almost an order of magnitude lower than those reported for dHvA QOs (∼ 40 m_e) <cit.>, showing that the subtraction of masses between the two trajectories in Eqn. <ref> has almost cancelled out. By contrast, the and frequencies are much heavier with masses in the region of 20–35 m_e (Figures <ref> and S3). This implies that these MB networks span FS sections with a highly anisotropic distribution of the Fermi velocity, v_F. This is consistent with several experimental <cit.> and theoretical <cit.> studies that indicate the hybridization between U f-electrons with the U d-bands and Te p-bands, which provides the dominant contribution to the Q2D FS, can result in significant variations in the effective quasiparticle masses at points around the cylindrical sheets. We note that our uncertainty in m^*_ and m^*_ is considerably larger than for m^*_ and m^*_ due to these frequencies only being observable right at the base temperature of the ^3He cryostat used for this measurement, with the uncertainty in temperature dominating the uncertainty in m^*_,. Further measurements in the experimentally challenging temperature–field regime of ≤ 200 mK and ≥ 40 T are required to carefully probe the anisotropy of v_F around the FS of UTe_2, and thus to better understand the hybridization of the f, d and p bands. In principle there is an infinite number of MB networks that could give rise to QIOs. Thus, it is expected that orbits of the type 𝒜_𝒜𝒞𝒟ℰ𝒜 - 𝒜_ABA = (2𝒜_h^+ + 𝒜_) - 𝒜_h^+ = 𝒜_ + 𝒜_h^+ should occur. However, the effective mass associated with these orbits would be greater than the masses of the hole and electron orbits from which they arise. If in the most simple case we assume that the breakdown orbits of type 𝒜_𝒜𝒞𝒟ℰ𝒜 have masses of 2m^*_h^+/e^- + ϵ_m, where ϵ_m is a small difference to account for the fact that quasiparticles are in fact not traversing full FS sheets, then by Eqn. <ref> these breakdown orbits need to interfere with two full FS sheet orbits to produce oscillations of m^*=ϵ_m. By comparison, orbits of the type 𝒜_𝒜𝒞𝒟ℰ𝒜 - 𝒜_ABA would instead have masses of m^*= m^*_h^+/e^- + ϵ_m and as such would be too heavy to observe at ^3He temperatures. Figure <ref> shows the evolution of QIO frequency with magnetic field tilt angle, and compares with the prediction from our Q2D FS model (in panel c). Although this is only a crude approximation of the expected QIO frequency profile, we find remarkably good agreement between our FS model adapted from ref. <cit.> and the QIOs we observe in TDO measurements. This result gives strong confidence that the FS of UTe_2 is very well described by our Q2D model. Our discussion so far has focussed on field aligned coaxially to the FS cylinders (along c), and at inclination angles close to c. Figure <ref> shows that for field oriented along the a-axis, two additional frequencies f_ = 220 T and f_ = 4.5 kT are observed. 
Again, the enclosed areas of these MB networks correspond very well to our Q2D FS model (Fig. <ref>f). The low frequency oscillations for field along a are of considerable amplitude, and are clearly observable in the raw TDO signal without background subtraction (Fig. <ref>a). Along the a-axis again corresponds to a QIO, whereas is consistent with a conventional MB orbit, which may explain its small amplitude as well its observation only directly at the a-axis. We note that a similar study of oscillations in the TDO signal of UTe_2 at high fields was recently reported <cit.>. For H ∥ a ref. <cit.> reports an oscillatory frequency of 223 T, in very good agreement with the 200 T  orbit we observe at this field orientation (Fig. <ref>). However, rather than being of a QI origin, the authors of ref. <cit.> interpreted the observed oscillatory waveform to comprise QOs from the SdH-effect caused by the presence of light 3D FS pocket(s). The distinction between Q2D and 3D FS dimensionality in the case of UTe_2 is important, as any 3D pockets could have significant implications regarding the topological properties of the putatively spin-triplet superconductivity <cit.>. However, in our measurements we do not observe any indication of the presence of a 3D FS pocket. Fig. <ref>b shows the evolution of Δ f_TDO as the field is tilted away from a towards c. For magnetic field oriented along the a-axis we observe low frequency large amplitude oscillations, in good agreement with the raw data presented in ref. <cit.>. A large oscillatory component is still visible 9 away from a; however, after a rotation of 20 (to θ_c = 70) no oscillations are observed within the resolution of the measurement. This is inconsistent with this frequency branch coming from SdH-effect QOs due to a 3D pocket; however, this behavior is consistent with a QI interpretation of the oscillatory origin, as the trajectory is only possible close to a. Furthermore, no slow oscillations at these tilt angles have been reported in prior dHvA measurements by the field modulation <cit.> or torque magnetometry <cit.> techniques – they appear only to be observed in the electrical conductivity, again consistent with a QI origin. The stark difference in the effective masses of the , and , components implies a strong anisotropy of v_F(k). In our recent study of dHvA QOs in UTe_2 we observed two-fold effective mass variations along the measured frequency branches under rotation away from the c-axis <cit.>. In order to attain such a variation, this implies a significant anisotropy of v_F(k_z), which in turn could account for the large difference in effective masses of the QIOs. Such a variation in effective mass likely stems from substantial hybridization between U d-bands and Te p-bands, which are the main contributors to the Q2D FS sheets <cit.>, and a spectral f-electron band sitting just above the Fermi level. This band has been detected in ARPES measurements, in which a significant spectral weight was observed at the Z-point of the BZ <cit.>. Models of UTe_2 that include the presence of such a band <cit.> show that the effect of the U f-electrons hybridizing with U d-bands is to compress them in energy, effectively increasing their band mass. A similar effect, albeit less pronounced, would also be relevant for the Te p-band. 
It is therefore likely that v_F is lowest (and thus m^* is highest) at the regions of the FS cylinders that are closest to the Z point, as here the spectral contribution of the f-electrons is largest and thus the hybridization with them will be the greatest. In summary, we measured the contactless resistivity of UTe_2 to high applied magnetic field strengths. We observed oscillatory components that are well explained by quantum interference between semiclassical quasiparticle trajectories spanning magnetic breakdown networks. We find that the quantum interference frequencies correspond very well to a quasi-2D model of the UTe_2 Fermi surface. Our observations give no indication of the presence of any 3D Fermi surface pockets. We are grateful to N.R. Cooper, D.V. Chichinadze, D. Shaffer, A.J. Hickey, H. Liu, P. Coleman, J. Chen, C.K. de Podesta, O.P. Squire, T. Helm, and especially A.F. Bangura for stimulating discussions. We thank T.J. Brumm and S.T. Hannahs for technical advice and assistance. This project was supported by the EPSRC of the UK (grant no. EP/X011992/1). A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative Agreement No. DMR-1644779* and the State of Florida. We acknowledge support of the HLD at HZDR, a member of the European Magnetic Field Laboratory (EMFL). The EMFL also supported dual-access to facilities at MGML, Charles University, Prague, under the European Union's Horizon 2020 research and innovation programme through the ISABEL project (No. 871106). Crystal growth and characterization were performed in MGML (mgml.eu), which is supported within the program of Czech Research Infrastructures (project no. LM2023065). We acknowledge financial support by the Czech Science Foundation (GACR), project No. 22-22322S. T.I.W. acknowledges support from EPSRC studentship EP/R513180/1. Z.W. acknowledges studentship support from the Cambridge Trust (www.cambridgetrust.org) and the Chinese Scholarship Council (www.chinesescholarshipcouncil.com). T.I.W. and A.G.E. acknowledge support from QuantEmX grants from ICAM and the Gordon and Betty Moore Foundation through Grants GBMF5305 & GBMF9616. A.G.E. acknowledges support from the Henry Royce Institute for Advanced Materials through the Equipment Access Scheme enabling access to the Advanced Materials Characterisation Suite at Cambridge, grant numbers EP/P024947/1, EP/M000524/1 & EP/R00661X/1; and from Sidney Sussex College (University of Cambridge).
http://arxiv.org/abs/2307.02993v2
20230706135447
Biorthogonal dynamical quantum phase transitions in non-Hermitian systems
[ "Yecheng Jing", "Jian-Jun Dong", "Yu-Yu Zhang", "Zi-Xiang Hu" ]
quant-ph
[ "quant-ph", "cond-mat.stat-mech" ]
Department of Physics and Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 401331, China [][email protected] Department of Physics and Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 401331, China [][email protected] Department of Physics and Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 401331, China [][email protected] Department of Physics and Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 401331, China By using biorthogonal bases, we construct a complete framework for biorthogonal dynamical quantum phase transitions in non-Hermitian systems. With the help of associated state which is overlooked previously, we define the automatically normalized biorthogonal Loschmidt echo. This approach is capable of handling arbitrary non-Hermitian systems with complex eigenvalues, which naturally eliminates the negative value of Loschmidt rate obtained without the biorthogonal bases. Taking the non-Hermitian Su-Schrieffer-Heeger model as a concrete example, a peculiar 1/2 change in biorthogonal dynamical topological order parameter, which is beyond the traditional dynamical quantum phase transitions is observed. We also find the periodicity of biorthogonal dynamical quantum phase transitions depend on whether the two-level subsystem at the critical momentum oscillates or reaches a steady state. Biorthogonal dynamical quantum phase transitions in non-Hermitian systems Zi-Xiang Hu August 1, 2023 ========================================================================= The past decades have witnessed the flourishing of non-Hermitian physics in non-conservative systems as found in a variety of physical realms including open quantum systems <cit.>, electronic systems with interactions <cit.>, and classical systems with gain or loss <cit.>. In these systems <cit.>, many novel physics and unprecedented phenomena have been explored recently, such as the exceptional points <cit.>, the non-Hermitian skin effects <cit.>, the bulk Fermi arcs <cit.>, and so on. In contrast to the Hermitian systems with real eigenvalues and orthogonal eigenstates, the eigenvalues and eigenstates in a general non-Hermitian Hamiltonian are not necessarily real and orthogonal <cit.>. To be more precise, the orthogonality of eigenstates is replaced by the notion of biorthogonality that defines the relation between the Hilbert space of states and its dual space, leading to the so-called “biorthogonal quantum mechanics" <cit.>. A direct consequence of the biorthogonality is that the transition probability between one state |ϕ⟩ to its time-evolved state |ϕ(t ) ⟩ should be carefully defined. The traditional viewpoint p=|⟨ϕ(t )|ϕ⟩| ^2 used in Hermitian systems cannot be applied to a general non-Hermitian Hamiltonian. A schematic framework to deal with this becomes an urgent topic due to the rapid development of nonequilibrium studys in non-Hermitian systems <cit.>. Dynamical quantum phase transition (DQPT) is arguably one of the most important nonequilibrium phenomena in modern many-body physics and has also been extensively studied in past decades <cit.>. It was first introduced in the Hermitian transverse field Ising model <cit.> and was generalized to mixed state <cit.>, finite temperature <cit.>, Floquet systems <cit.>, and slow quench process <cit.>. 
It was also observed experimentally with trapped ions <cit.>, Rydberg atoms <cit.>, ultracold atoms <cit.>, superconducting qubits <cit.>, nanomechanical and photonic systems <cit.>. The key quantity characterizing DQPT is the Loschmidt echo or dynamical fidelity, ℒ( t) ≡|⟨Ψ( 0) |Ψ( t) ⟩| ^2, quantifying the time-dependent deviation from an arbitrary initial state |Ψ( 0) ⟩. In Hermitian systems, DQPTs occur whenever the time-evolved state |Ψ( t) ⟩ becomes orthogonal to the initial state |Ψ( 0) ⟩ and the critical time t_c is defined as ℒ( t_c ) =0. Efforts have been made to generalize this concept to non-Hermitian systems, leading to many interesting predictions such as the half-integer jumps in dynamical topological order parameter (DTOP) <cit.>, but the special biorthogonality has been ignored and an enforced normalized factor has been used <cit.>. Very recently, it has been shown that the non-Hermitian systems should be described by the biorthogonal fidelity and the biorthogonal Loschmidt echo instead of the conventional counterparts in Hermitian systems, but it is limited to the parity-time symmetry cases <cit.>. A natural treatment for a general non-Hermitian Hamiltonian is still lacking, which severely limits our exploration of the richness of DQPTs in these systems. In this work, we address this issue by proposing a new theoretical framework to deal with these non-equilibrium phenomena in general non-Hermitian systems. Based on biorthogonal quantum mechanics, we reformulate the transition probability between |Ψ( 0) ⟩ and |Ψ( t) ⟩ with the biorthogonal bases and the associated states. We propose the biorthogonal dynamical quantum phase transitions and compare it with the self-normal counterpart using the non-Hermitian Su-Schrieffer-Heeger model as a concrete example. We systematically study the biorthogonal Loschmidt rate, biorthogonal DTOP, Fisher zeros and the transition probability in momentum space when the system undergoes a sudden quench. Our calculations show that there is a peculiar half-jump in biorthogonal DTOP and the periodicity of biorthogonal DQPTs depend on whether the two-level subsystem at the critical momentum oscillates or reaches a steady state. Our theory provides a general scheme to study the non-equilibrium DQPTs in non-Hermitian systems. We first review some basic properties of biorthogonal quantum mechanics in non-Hermitian systems and reformulate the probability assignment rules between two states with biorthogonal bases <cit.>. For a general non-Hermitian Hamiltonian H≠ H^†, the eigenvalue equations of H and H^† are given by H|u_n⟩ =ϵ_n|u_n⟩, ⟨ u_n|H^†=ϵ_n^∗⟨ u_n|, H^†|u_n⟩ =ϵ_n^∗|u_n⟩ , ⟨u_n| H=ϵ_n⟨u_n|, where ϵ_n is the nth eigenvalue, |u_n⟩ and |u_n⟩ are the right and left eigenstates that satisfy the completeness relation ∑_n |u_n⟩⟨ u_n|=1 and the biorthonormal relation ⟨u_m|u_n⟩=δ_m,n. Note that under this condition ⟨u_n|u_n⟩=1, eigenstates are no longer be normalized. In particular, we have ⟨ u_n|u_n⟩≥1 and ⟨ u_n|u_m⟩≠0 if n≠ m. It challenges the traditional probabilistic interpretation used in Hermitian quantum mechanics. For instance, there cannot be a `transition' from one eigenstate |u_m⟩ to another eigenstate |u_n⟩ due to the orthonormal relation ⟨ u_n|u_m⟩=0 if n≠ m in Hermitian systems. To reconcile these apparent contradictions we need the introduction of the so-called associated state and the redefinition of the inner product. 
For an arbitrary state |ψ⟩, its associated state |ψ⟩ is defined according to the following relation <cit.>: |ψ⟩=∑_nc_n|u_n⟩⟷|ψ⟩=∑_nc_n|u_n⟩, while the dual state ⟨ψ| =∑_nc_n^∗⟨u_n| is given by the Hermitian conjugate of |ψ⟩. The inner product between |ψ⟩ and another state |ϕ⟩=∑_md_m|u_m⟩ is thus defined as ⟨ϕ,ψ⟩≡⟨ϕ|ψ⟩=∑_m,n⟨u_m|d_m^∗c_n|u_n⟩=∑_nd_n^∗c_n. It is easy to show that the norm of a state |ψ⟩ is √(⟨ψ|ψ⟩). With these new definitions, the transition probability between |ψ⟩ and |ϕ⟩ for a biorthogonal system is given by p=⟨ψ|ϕ⟩⟨ϕ|ψ⟩/⟨ψ|ψ⟩⟨ϕ|ϕ⟩, where the denominator acts as a natural normalizing factor. In this case, p is a real number ranging from 0 to 1, which meets the requirements of the probability interpretation satisfactorily. If the Hamiltonian is Hermitian, H=H^†, we have |u_n⟩=|u_n⟩ and |ψ⟩=|ψ⟩. Then Eq. (<ref>) reduces to the conventional definition of the transition probability in Hermitian quantum mechanics, p=⟨ψ|ϕ⟩⟨ϕ|ψ⟩/( ⟨ψ|ψ⟩⟨ϕ|ϕ⟩). Another important consequence of Eq. (<ref>) is that the projection from |ψ⟩=∑_nc_n|u_n⟩ to |u_n⟩ becomes p_n=⟨ψ|u_n⟩⟨u_n|ψ⟩/⟨ψ|ψ⟩⟨u_n|u_n⟩=c_n^∗c_n/∑_mc_m^∗c_m, satisfying the normalization condition ∑_np_n=1. So far we have considered the static aspects of the eigenvalues and eigenstates of a complex Hamiltonian. The generalization to the dynamical situation is straightforward. The time evolution of an initial state |Ψ(0)⟩ generated by an arbitrary non-Hermitian Hamiltonian H is |Ψ(t)⟩=e^-iHt|Ψ(0)⟩, where we have assumed a time-independent Hamiltonian for simplicity. It is worth noting that the associated state |Ψ(t)⟩ should be given by the definition Eq. (<ref>) <cit.> rather than by |Ψ(t)⟩=e^-iH^†t|Ψ(0)⟩ <cit.>, because the second method may lead to incomprehensible complex probabilities (Appendix <ref>). The overlap between |Ψ(0)⟩ and |Ψ(t)⟩ is characterized by the biorthogonal Loschmidt echo, ℒ(t) =⟨Ψ(0)|Ψ(t)⟩⟨Ψ(t)|Ψ(0)⟩/⟨Ψ(t)|Ψ(t)⟩⟨Ψ(0)|Ψ(0)⟩. As in the Hermitian case, biorthogonal DQPTs can be defined by ℒ(t_c)=0 with the critical time t_c. We may examine the biorthogonal DQPT by considering a general non-Hermitian Hamiltonian H=∑_kψ_k^†H_kψ_k with H_k=d_k·σ. Here, σ is the vector of the Pauli matrices and d_k=(x_k,y_k,z_k) represents the expansion coefficients, which may be complex in non-Hermitian systems. The eigenenergies of H_k are given by ±ϵ_k=±√(|d_k|^2) and the corresponding eigenstates are |u_k±⟩. Considering a quench process where the model Hamiltonian changes from d_k^i at time t=0^- to d_k^f at time t=0^+, the initial state |Ψ(0)⟩=⊗_k|u_k-^i⟩, which is defined as the tensor product of all |u_k-^i⟩, is evolved under the postquench Hamiltonian d_k^f. The biorthogonal Loschmidt echo can be expressed as ℒ(t)=∏_kg_k(t) with (Appendix <ref>) g_k(t)=|cos(ϵ_k^ft)-isin(ϵ_k^ft)⟨u_k-^i|H_k^f/ϵ_k^f|u_k-^i⟩|^2/⟨u_k-^i(t)|u_k-^i(t)⟩, where |u_k-^i(t)⟩=e^-iH^f_kt|u_k-^i⟩. To obtain a nonzero and well-defined quantity in the thermodynamic limit, it is useful to consider the biorthogonal Loschmidt rate LR(t) =-lim_N→∞1/Nlnℒ(t), where N is the system size. Zeros in ℒ(t) at critical times t_c correspond to nonanalyticities (cusps or divergences) in LR(t). If there is at least one pair of critical parameters k_c and t_c such that g_k_c(t_c)=0, then ℒ(t_c)=0. The solution of g_k_c(t_c)=0 is t_c=π/2ϵ^f_k_c(2n+1)-i/ϵ^f_k_ctanh^-1⟨u_k_c-^i|H_k_c^f/ϵ_k_c^f|u_k_c-^i⟩, where n is an integer. If we can obtain a positive real solution t_c, the system will undergo a biorthogonal DQPT. 
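As a concrete numerical illustration of these definitions, the sketch below constructs the biorthogonal (left/right) eigenbasis of a small non-Hermitian matrix, builds associated states from the expansion coefficients, and evaluates the transition probability defined above. This is a minimal sketch rather than code used in this work; the 2×2 matrix is borrowed from the appendix example purely for illustration, and the helper names are our own.

import numpy as np

def biorthogonal_basis(H):
    """Right eigenvectors (columns of `right`) and left eigenvectors as kets
    (columns of `left`), normalized so that <left_m|right_n> = delta_{mn}."""
    evals, right = np.linalg.eig(H)
    left = np.linalg.inv(right).conj().T      # dual basis of the right eigenvectors
    return evals, right, left

def associated_state(H, psi):
    """Associated state |psi~> = sum_n c_n |left_n>, with c_n = <left_n|psi>."""
    _, _, left = biorthogonal_basis(H)
    return left @ (left.conj().T @ psi)

def transition_probability(H, psi, phi):
    """Biorthogonal transition probability <psi~|phi><phi~|psi> /
    (<psi~|psi><phi~|phi>); always real and between 0 and 1."""
    _, _, left = biorthogonal_basis(H)
    c, d = left.conj().T @ psi, left.conj().T @ phi
    return abs(np.vdot(c, d)) ** 2 / (np.vdot(c, c).real * np.vdot(d, d).real)

# Demo on the 2x2 matrix used in the appendix example (illustrative values only).
H = np.array([[0.0, 4.0 + 1.0j], [2.0 - 1.0j, 0.0]])
_, right, _ = biorthogonal_basis(H)
u1, u2 = right[:, 0], right[:, 1]
print(transition_probability(H, u1, u2))               # ~0: distinct eigenstates
print(transition_probability(H, u1, u1))               # ~1: same state
print(transition_probability(H, u1, 2 * u1 + 3 * u2))  # strictly between 0 and 1
print(np.vdot(associated_state(H, u1 + 1j * u2), u1 + 1j * u2))  # squared biorthogonal norm

The demo checks the two features emphasized above: distinct eigenstates are mutually "orthogonal" in the biorthogonal sense, and the probability is automatically normalized without any enforced factor.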
In general, it is difficult to access k_c in a finite-size system because momentum takes quantized values. Thus the divergence of LR(t) in the thermodynamic limit becomes a cusp in a finite-size system, except for some fine-tuned quench parameters or twisted boundary conditions <cit.>. Furthermore, analogous to the DTOP in Hermitian systems <cit.>, we can propose a biorthogonal DTOP to describe the biorthogonal DQPT. The biorthogonal DTOP is defined as ν(t)=1/2π∫_0^2πdk∂_kϕ_k^G(t), where the biorthogonal geometrical phase is ϕ^G_k(t)=ϕ_k(t)-ϕ_k^dyn(t), with ϕ_k(t) being the phase of g_k(t), and the biorthogonal dynamical phase is given by (Appendix <ref>) ϕ_k^dyn(t)= -∫_0^tds⟨u_k-^i(s)|H_k^f|u_k-^i(s)⟩/⟨u_k-^i(s)|u_k-^i(s)⟩ +i/2ln⟨u_k-^i(t)|u_k-^i(t)⟩. To demonstrate that our new theoretical framework can successfully deal with non-Hermitian Hamiltonians with complex eigenvalues, we study the non-Hermitian Su-Schrieffer-Heeger model in detail below. The Hamiltonian is H =∑_j[ ( 1+η+γ/2) c_j,b^†c_j,a+( 1+η-γ/2) c_j,a^†c_j,b +( 1-η) c_j,a^†c_j+1,b+(1-η) c_j+1,b^†c_j,a] , where η determines the strength of intra-cell and inter-cell hopping and γ controls the degree of non-Hermiticity, as shown in Fig. <ref>(a). For periodic boundary conditions, the bulk Hamiltonian takes the standard bilinear form H=∑_kψ_k^†( d_k·σ)ψ_k in momentum space, where ψ_k^†=(c_k,a^†,c_k,b^†) and d_k=( ( 1+η) +( 1-η) cos k,( 1-η) sin k+iγ/2,0). The dispersion is ±ϵ_k=±√(x_k^2+y_k^2). It becomes gapless at the exceptional points. Thus the solution of ϵ_k=0 determines the phase boundary, i.e. k_c=0, γ=±4 and k_c=π, γ=±4η. Combining this with the winding number w=1/2π∫_0^2πdk∂_k( arctany_k/x_k) <cit.>, we present the phase diagram in Fig. <ref>(b) for convenience, where we only consider the case γ>0 due to the symmetry of the phase diagram with respect to γ. For comparison, we also calculate the traditional DQPT based on the self-normal Loschmidt echo ℒ( t) =|⟨Ψ( 0) |Ψ( t) ⟩| ^2, with an enforced normalization factor to avoid negative values of the Loschmidt rate <cit.>. Figure <ref>(a) presents the self-normal and biorthogonal Loschmidt rates for the same quenching process. The first feature is that the critical times at which the cusps appear differ between the two. A similar phenomenon has been observed in equilibrium quantum phase transitions, where the biorthogonal and self-normal fidelity predict different quantum critical points <cit.>. It turns out that the biorthogonal fidelity captures the correct critical point due to the special biorthogonality in non-Hermitian systems <cit.>. Thus it is also natural to believe that non-equilibrium quantum phase transitions should be described by the biorthogonal time-dependent version of the fidelity, i.e. the biorthogonal Loschmidt echo. The second feature is that there is an additional critical time t_c≈ 0.74 in the biorthogonal bases. This can be seen more clearly in the DTOP, as shown in Fig. <ref>(b). A peculiar half-jump of ν(t) can be observed, and it may be related to the biorthogonality. To show that the half jump of ν(t) is not a fine-tuned result <cit.>, we study different quenching processes by changing η with γ fixed. Figures <ref>(a) and <ref>(b) present five typical behaviors of the biorthogonal Loschmidt rate LR(t) and the biorthogonal DTOP ν(t), respectively; more detailed information can be found in Appendix <ref>. The half jump of ν(t) can appear alone, appear periodically, or be accompanied by an integer jump, exhibiting rich behavior in a single non-Hermitian Hamiltonian. 
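The comparison above can be reproduced numerically by evaluating the biorthogonal Loschmidt echo momentum by momentum and summing the logarithms over the Brillouin zone. The sketch below does this for a quench of the non-Hermitian Su-Schrieffer-Heeger Bloch Hamiltonian d_k given above, constructing all associated states in the post-quench biorthogonal basis (one consistent reading of the definitions above, and the one used in the appendix-style calculation); the quench parameters are placeholders chosen for illustration rather than the values used in the figures.

import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)

def bloch_h(k, eta, gamma):
    """Non-Hermitian SSH Bloch Hamiltonian d_k . sigma from the text."""
    dx = (1 + eta) + (1 - eta) * np.cos(k)
    dy = (1 - eta) * np.sin(k) + 1j * gamma / 2
    return dx * SX + dy * SY

def biorth_eig(H):
    """Right eigenpairs plus the dual (left) basis with <left_m|right_n> = delta_{mn}."""
    evals, right = np.linalg.eig(H)
    left = np.linalg.inv(right).conj().T
    return evals, right, left

def loschmidt_rate(t, eta_i, gamma_i, eta_f, gamma_f, n_k=400):
    """Biorthogonal Loschmidt rate LR(t) = -(1/N) sum_k ln g_k(t) for a sudden
    quench (eta_i, gamma_i) -> (eta_f, gamma_f), with associated states built
    in the post-quench biorthogonal basis."""
    ks = np.linspace(0.0, 2.0 * np.pi, n_k, endpoint=False)
    log_g = 0.0
    for k in ks:
        ev_i, right_i, _ = biorth_eig(bloch_h(k, eta_i, gamma_i))
        psi0 = right_i[:, np.argmin(ev_i.real)]        # lower band (a convention choice)
        ev_f, _, left_f = biorth_eig(bloch_h(k, eta_f, gamma_f))
        c0 = left_f.conj().T @ psi0                    # expansion coefficients at t = 0
        ct = np.exp(-1j * ev_f * t) * c0               # coefficients of e^{-i H_f t}|psi0>
        g = abs(np.vdot(c0, ct)) ** 2 / (np.vdot(ct, ct).real * np.vdot(c0, c0).real)
        log_g += np.log(g)
    return -log_g / n_k

times = np.linspace(0.05, 4.0, 40)
rates = [loschmidt_rate(t, eta_i=0.5, gamma_i=1.0, eta_f=-0.5, gamma_f=1.0) for t in times]
print(np.round(rates, 3))    # cusps in this curve signal biorthogonal DQPTs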
We also find that the half-jump phenomenon occurs if and only if the prequench phases are in the middle of the phase diagram, with winding number 0 or 1/2. To better understand the biorthogonal DQPTs, we study the dynamical counterpart of Fisher zeros <cit.> in the complex time plane, z_n(k)=it_n(k). The lines z_n(k) cross the imaginary time axis at a critical momentum k_c and yield a critical time t_c,n, as shown in Figs. <ref>(a) and <ref>(b). In fact, they represent two typical types of biorthogonal DQPTs, depending on whether the lines z_n(k) cut the time axis periodically or non-periodically. In order to further study these two types of biorthogonal DQPTs, we also investigate the transition probability p(k,t) between the time-evolved state |u_k-^i(t)⟩=e^-iH^f_kt|u_k-^i⟩ and another initial eigenstate |u_k+^i⟩. By expanding |u_k-^i(t)⟩ as |u_k-^i(t)⟩ = c_1|u_k-^i⟩ +c_2|u_k+^i⟩, we have p(k,t) = c_2^∗c_2/(c_1^∗c_1+c_2^∗c_2) from Eq. (<ref>). If p(k,t)=1, the time-evolved state |u_k-^i(t)⟩ is biorthogonal to |u_k-^i⟩. Then the biorthogonal Loschmidt echo equals zero, because we can rewrite ℒ(t) as ℒ(t)=∏_k⟨u_k-^i|u_k-^i(t)⟩⟨u_k-^i(t)|u_k-^i⟩/⟨u_k-^i(t)|u_k-^i(t)⟩. As shown in Figs. <ref>(c) and <ref>(d), the two types of biorthogonal DQPTs exhibit two distinct behaviors of p(k,t). For the periodic biorthogonal DQPTs, p(k_c,t) oscillates periodically between 0 and 1 for some fixed critical momenta k_c. In this situation, the two-level systems at k_c dominate. Thus the periodicity of biorthogonal DQPTs may be related to the oscillations of the two-level system <cit.>. On the other hand, p(k_c,t) exhibits very interesting behavior when the biorthogonal DQPTs are no longer periodic. There are many critical momenta k_c, and each k_c corresponds to only one t_c,n. Before t_c,n = -iz_n(k_c), there are n local maxima in p(k_c,t), while for t ≫ t_c,n, p(k_c,t) tends to a fixed value, indicating a steady state in contrast to the oscillatory behavior. In summary, we propose a new theoretical framework to study biorthogonal DQPTs in non-Hermitian systems based on biorthogonal quantum mechanics. We reformulate the transition probability between one state |Ψ(0)⟩ and its time-evolved state |Ψ(t)⟩ with the concept of the associated state. Our scheme can handle general non-Hermitian Hamiltonians with complex eigenvalues, and the normalization factors can be introduced naturally. We demonstrate our approach using the non-Hermitian Su-Schrieffer-Heeger model as a concrete example. Compared with the self-normal case, a peculiar 1/2 change in the biorthogonal DTOP can be observed clearly. Furthermore, our results show that the periodicity of biorthogonal DQPTs depends on whether the two-level subsystem at the critical momentum oscillates or reaches a steady state. Our work paves the way to explore the rich biorthogonal DQPTs in non-Hermitian systems. One interesting topic along this line would be the study of the mechanism of the 1/2 change in the biorthogonal DTOP. § A SIMPLE EXAMPLE TO ILLUSTRATE THE ASSOCIATED STATE Here, we demonstrate the distinctions between the various treatment methods by utilizing concrete 2×2 non-Hermitian matrices with complex eigenvalues. In this example, we use the matrix H=( 0, 4+i; 2-i, 0 ), with right eigenvectors |ψ_±⟩=1/√(2)( ±√((4+i)/(2-i)), 1) and the corresponding left (associated) eigenvectors |ψ̃_±⟩=1/√(2)( ±√((2+i)/(4-i)), 1 ), and the matrix K=( 0, -3i; -2+3i, 0 ) for the time evolution. Two methods can be employed to determine the associated state of the evolved state: treatment Eq. 
(<ref>) with the time-evolved state |ψ_+(t)⟩=e^-*iKt|ψ_+⟩ denoted by subscript 1 and treatment |ψ_+(t)⟩=e^-*iK^† t|ψ_+⟩ denoted by subscript 2. When t=1, these methods lead to distinct states as represented by the following equations |ψ_+(t)⟩_1 = ( -0.614+0.103*i -0.359-0.927*i), |ψ_+(t)⟩_2 = ( -0.967-1.094*i -1.373+0.411*i). To demonstrate why the second approach fails for Hamiltonians with complex eigenvalues, we calculate probabilities when transitioning from the state |ψ_+(t)⟩ to an arbitrary state, for instance, |ϕ⟩=2|ψ_+⟩+3|ψ_-⟩, whose associated state is given by |ϕ⟩=2|ψ_+⟩+3|ψ_-⟩. By utilizing two methods above to obtain the associated states in Eq. (<ref>) and then applying Eq. (<ref>) to calculate the transition probabilities, two different probabilities p_1=0.603 and p_2=-0.372+1.118*i are obtained. As shown above, the former is a real value while the latter is an incomprehensible complex result. Thus our treatment reaches a real valued probability between 0 and 1. § DETAIL CALCULATIONS OF EQ. (<REF>) AND EQ. (<REF>) We can rewrite the biorthogonal Loschmidt echo as ℒ(t) =|⟨Ψ(0)|Ψ(t)⟩|^2/⟨Ψ(t)|Ψ(t)⟩⟨Ψ(0)|Ψ(0)⟩ =∏_k|⟨u_k-^i|u_k-^i(t)⟩|^2/⟨u_k-^i(t)|u_k-^i(t)⟩⟨u_k-^i|u_k-^i⟩ =∏_k g_k(t), where we have introduced g_k(t) =|⟨u_k-^i|u_k-^i(t)⟩|^2/⟨u_k-^i(t)|u_k-^i(t)⟩. We then calculate the numerator in g_k(t), ⟨u_k-^i|u_k-^i(t)⟩= ⟨u_k-^i|u_k-^f⟩⟨u_k-^f|e^-iH_k^ft|u_k-^f⟩⟨u_k-^f|u_k-^i⟩ +⟨u_k-^i|u_k+^f⟩⟨u_k+^f|e^-iH_k^ft|u_k+^f⟩⟨u_k+^f|u_k-^i⟩ = e^iϵ_k^ft|⟨u_k-^i|u_k-^f⟩|^2+e^-iϵ_k^ft|⟨u_k-^i|u_k+^f⟩|^2 = cos(ϵ_k^ft)-isin(ϵ_k^ft)⟨u_k-^i|H_k^f/ϵ_k^f|u_k-^i⟩. Combining the above formulas, we can obtain Eq. (<ref>). The expression of biorthogonal dynamical phase can be generalized directly from the definition in Hermitian case, ϕ_k^dyn(t) = -i∫_0^tds⟨u_k-^i(s)|/√(⟨u_k-^i(s)|u_k-^i(s)⟩)d/ds|u_k-^i(s)⟩/√(⟨u_k-^i(s)|u_k-^i(s)⟩) = -i∫_0^tds[⟨u_k-^i(s)|/√(⟨u_k-^i(s)|u_k-^i(s)⟩)d/ds|u_k-^i(s)⟩/√(⟨u_k-^i(s)|u_k-^i(s)⟩) +⟨u_k-^i(s)|u_k-^i(s)⟩/√(⟨u_k-^i(s)|u_k-^i(s)⟩)d/ds(1/√(⟨u_k-^i(s)|u_k-^i(s)⟩))] = -∫_0^tds⟨u_k-^i(s)|H_k^f|u_k-^i(s)⟩/⟨u_k-^i(s)|u_k-^i(s)⟩ +i/2ln[⟨u_k-^i(t)|u_k-^i(t)⟩]. § MORE DETAILED INFORMATION OF BIORTHOGONAL DQPTS As shown in Fig. <ref>, each phase of non-Hermitian Su-Schrieffer-Heeger model is labeled by a Roman numeral. Various quench processes and the corresponding Fisher zeros and biorthogonal DTOPs are listed in Table <ref>. The first column specifies the type of quench between different phases. The second column provides concrete parameters (η, γ) of such a quench, where the direction of the arrow “→" and “←" denotes the direction of quench. The “Fisher zeros profile" column indicates the possible number of Fisher zeros of a branch z_n(k) for a fixed n. As mentioned in the main text, a DQPT corresponds to a Fisher zero. Thus the number of Fisher zeros corresponds to the number of DQPTs in this z_n(k) branch. The final column indicates the possible value of the change in DTOP that may occur during this quench process. This work is supported by the National Natural Science Foundation of China Grants No. 12204075, 11974064, 12075040 and 12147102, the China Postdoctoral Science Foundation Grants No. 2023M730420, the fellowship of Chongqing Postdoctoral Program for Innovative Talents Grant No. CQBX202222, the Natural Science Foundation of Chongqing Grant No. CSTB2023NSCQ-MSX0953, the Chongqing Research Program of Basic Research and Frontier Technology Grants No. cstc2021jcyjmsxmX0081 and cstc2020jcyj-msxmX0890, Chongqing Talents: Exceptional Young Talents Project No. 
cstc2021ycjh-bgzxm0147, and the Fundamental Research Funds for the Central Universities Grant No. 2020CDJQY-Z003, 2022CDJJCLK001 and 2021CDJQY-007. 99 Rotter2009JPA I. Rotter, A non-Hermitian Hamilton operator and the physics of open quantum systems, J. Phys. A: Math. Theor. 42, 153001 (2009). Yoshida2018PRB T. Yoshida, R. Peters, and N. Kawakami, Non-Hermitian perspective of the band structure in heavy-fermion systems, Phys. Rev. B 98, 035141 (2018). Shen2018PRL H. Shen and L. Fu, Quantum Oscillation from In-Gap States and a Non-Hermitian Landau Level Problem, Phys. Rev. Lett. 121, 026403 (2018). Fu2020PRL Y. Nagai, Y. Qi, H. Isobe, V. Kozii, and L. Fu, DMFT Reveals the Non-Hermitian Topology and Fermi Arcs in Heavy-Fermion Systems, Phys. Rev. Lett. 125, 227204 (2020). Makris2008PRL K. G. Makris, R. El-Ganainy, D. N. Christodoulides, and Z. H. Musslimani, Beam Dynamics in PT Symmetric Optical Lattices, Phys. Rev. Lett. 100, 103904 (2008). Klaiman2008PRL S. Klaiman, U. Günther, and N. Moiseyev, Visualization of Branch Points in PT-Symmetric Waveguides, Phys. Rev. Lett. 101, 080402 (2008). Malzard2015PRL S. Malzard, C. Poli, and H. Schomerus, Topologically Protected Defect States in Open Photonic Systems with Non-Hermitian Charge-Conjugation and Parity-Time Symmetry, Phys. Rev. Lett. 115, 200402 (2015). Bender1998PRL C. M. Bender and S. Boettcher, Real Spectra in Non-Hermitian Hamiltonians Having PT Symmetry, Phys. Rev. Lett. 80, 5243 (1998). Helbig2020NatPhys T. Helbig, T. Hofmann, S. Imhof, M. Abdelghany, T. Kiessling, L. W. Molenkamp, C. H. Lee, A. Szameit, M. Greiter, and R. Thomale, Generalized bulk-boundary correspondence in non-Hermitian topolectrical circuits, Nat. Phys. 16, 747 (2020). Liu2021Research S. Liu, R. Shao, S. Ma, L. Zhang, O. You, H. Wu, Y. J. Xiang, T. J. Cui, and S. Zhang, Non-Hermitian Skin Effect in a Non-Hermitian Electrical Circuit, Research 2021, 5608038 (2021). Weidemann2020Science S. Weidemann, M. Kremer, T. Helbig, T. Hofmann, A. Stegmaier, M. Greiter, R. Thomale, and A. Szameit, Topological funneling of light, Science 368, 311 (2020). Brandenbourger2019NatCommun M. Brandenbourger, X. Locsin, E. Lerner, and C. Coulais, Non-reciprocal robotic metamaterials, Nat. Commun. 10, 4608 (2019). Ghatak2020PNAS A. Ghatak, M. Brandenbourger, J. V. Wezel, and C. Coulais, Observation of non-Hermitian topology and its bulk-edge correspondence in an active mechanical metamaterial, Proc. Natl. Acad. Sci. USA 117, 29561 (2020). XZhang2021NatCommun X. Zhang, Y. Tian, J.-H. Jiang, M.-H. Lu, and Y.-F. Chen, Observation of higher-order non-Hermitian skin effect, Nat. Commun. 12, 5377 (2021). LZhang2021NatCommun L. Zhang, Y. Yang, Y. Ge, Y.-J. Guan, Q. Chen, Q. Yan, F. Chen, R. Xi, Y. Li, D. Jia, S.- Q. Yuan, H.-X. Sun, H. Chen, and B. Zhang, Acoustic non-Hermitian skin effect from twisted winding topology, Nat. Commun. 12, 6297 (2021). Xiao2020NatPhys L. Xiao, T. Deng, K. Wang, G. Zhu, Z. Wang, W. Yi, and P. Xue, Non-Hermitian bulk-boundary correspondence in quantum dynamics, Nat. Phys. 16, 761 (2020). Wang2021JOpt H. Wang, X. Zhang, J. Hua, D. Lei, M. Lu, and Y. Chen, Topological physics of non-Hermitian optics and photonics: a review, J. Opt. 23, 123001 (2021). ZhongWang2018PRL S. Yao and Z. Wang, Edge States and Topological Invariants of Non-Hermitian Systems, Phys. Rev. Lett. 121, 086803 (2018). Sato2019PRX K. Kawabata, K. Shiozaki, M. Ueda, and M. Sato, Symmetry and Topology in Non-Hermitian Physics, Phys. Rev. X 9, 041015 (2019). Yokomizo2019PRL K. Yokomizo and S. 
Murakami, Non-Bloch Band Theory of Non-Hermitian Systems, Phys. Rev. Lett. 123, 066404 (2019). Shen2018PRL2 H. Shen, B. Zhen, and L. Fu, Topological Band Theory for Non-Hermitian Hamiltonians, Phys. Rev. Lett. 120, 146402 (2018). Hodaei2017Nature H. Hodaei, A. U. Hassan, S. Wittek, H. Garcia-Gracia, R. El-Ganainy, D. N. Christodoulides, and M. Khajavikhan, Enhanced sensitivity at higher-order exceptional points, Nature (London) 548, 187 (2017). Bergholtz2021RMP E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Exceptional topology of non-Hermitian systems, Rev. Mod. Phys. 93, 015005 (2021). Zhang2020PRL K. Zhang, Z. Yang, and C. Fang, Correspondence between Winding Numbers and Skin Modes in Non-Hermitian Systems, Phys. Rev. Lett. 125, 126402 (2020). XiujuanZhang2022AdvPX X. Zhang, T. Zhang, M.-H. Lu, and Y.-F. Chen, A review on non-Hermitian skin effect, Adv. Phys.: X 7, 2109431 (2022). Zhou2018Science H. Zhou, C. Peng, Y. Yoon, C. W. Hsu, K. A. Nelson, L. Fu, J. D. Joannopoulos, M. Soljacic, and B. Zhen, Observation of bulk Fermi arc and polarization half charge from paired exceptional points, Science, 359, 1009 (2018). Ueda2020AdvPhys Y. Ashida, Z. Gong, and M. Ueda, Non-Hermitian physics, Adv. Phys. 69, 249 (2020). Brody2014JPA D. C. Brody, Biorthogonal quantum mechanics, J. Phys. A: Math. Theor. 47, 035305 (2014). Emil2018PRL F. K. Kunst, E. Edvardsson, J. C. Budich, and E. J. Bergholtz, Biorthogonal Bulk-Boundary Correspondence in Non-Hermitian Systems, Phys. Rev. Lett. 121, 026808 (2018). Emil2019PRBR E. Edvardsson, F. K. Kunst, and E. J. Bergholtz, Non-Hermitian extensions of higher-order topological phases and their biorthogonal bulk-boundary correspondence, Phys. Rev. B 99, 081302(R) (2019). Emil2020PRR E. Edvardsson, F. K. Kunst, T. Yoshida, and E. J. Bergholtz, Phase transitions and generalized biorthogonal polarization in non-Hermitian systems, Phys. Rev. Res. 2, 043046 (2020). ShuChen2021PRA Z. Xu and S. Chen, Dynamical evolution in a one-dimensional incommensurate lattice with 𝒫𝒯 symmetry, Phys. Rev. A 103, 043325 (2021). HuiZhai2020NatPhys L. Pan, X. Chen, Y. Chen and H. Zhai, Non-Hermitian linear response theory, Nat. Phys. 16, 767 (2020). Lin2022NPJQI Z. Lin, L. Zhang, X. Long, Y.-a. Fan, Y. Li, K. Tang, J. Li, X. Nie, T. Xin, X.-J. Liu, and D. Lu, Experimental quantum simulation of non-Hermitian dynamical topological states using stochastic Schrödinger equation, npj Quantum Inf. 8, 77 (2022). Liu2023PRA H. Liu, X. Yang, K. Tang, L. Che, X. Nie, T. Xin, J. Li, and D. Lu, Practical quantum simulation of small-scale non-Hermitian dynamics, Phys. Rev. A 107, 062608 (2023). Hauke2022PRXQ K. T. Geier and P. Hauke, From Non-Hermitian Linear Response to Dynamical Correlations and Fluctuation-Dissipation Relations in Quantum Many-Body Systems, PRX Quantum 3, 030308 (2022). Kawabata2023PRX K. Kawabata, T. Numasawa, and S. Ryu, Entanglement Phase Transition Induced by the Non-Hermitian Skin Effect, Phys. Rev. X 13, 021007 (2023). Yoshimura2020PRB T. Yoshimura, K. Bidzhiev, and H. Saleur, Non-Hermitian quantum impurity systems in and out of equilibrium: Noninteracting case, Phys. Rev. B 102, 125124 (2020). Mcdonald2022PRB A. McDonald, R. Hanai, and A. A. Clerk, Nonequilibrium stationary states of quantum non-Hermitian lattice models, Phys. Rev. B 105, 064302 (2022). Yin2022PRB L.-J. Zhai, G.-Y. Huang, and S. Yin, Nonequilibrium dynamics of the localization-delocalization transition in the non-Hermitian Aubry-André model, Phys. Rev. B 106, 014204 (2022). Agarwal2022arXiv K. D. 
Agarwal, T. K. Konar, L. G. C. Lakkaraju, and A. Sen, Detecting Exceptional Point through Dynamics in Non-Hermitian Systems,arXiv:2212.12403. Agarwal2023arXiv K. D. Agarwal, T. K. Konar, L. G. C. Lakkaraju, and A. Sen, Recognizing critical lines via entanglement in non-Hermitian systems, arXiv:2305.08374. Roubeas2023JHEP A. S. M.-Roubeas, F. Roccati, J. Cornelius, Z. Xu, A. Chenu, and A. D. Campo, Non-Hermitian Hamiltonian deformations in quantum mechanics, J. High Energ. Phys. 2023, 60 (2023). Heyl2013PRL M. Heyl, A. Polkovnikov, and S. Kehrein, Dynamical Quantum Phase Transitions in the Transverse-Field Ising Model, Phys. Rev. Lett. 110, 135704 (2013). Budich2016PRB J. C. Budich and M. Heyl, Dynamical topological order parameters far from equilibrium, Phys. Rev. B 93, 085416 (2016). Dong2019PRB J.-J. Dong and Y.-F. Yang, Functional field integral approach to quantum work, Phys. Rev. B 100, 035124 (2019). Nie2020PRL X. Nie, B.-B. Wei, X. Chen, Z. Zhang, X. Zhao, C. Qiu, Y. Tian, Y. Ji, T. Xin, D. Lu, and J. Li, Experimental Observation of Equilibrium and Dynamical Quantum Phase Transitions via Out-of-Time-Ordered Correlators, Phys. Rev. Lett. 124, 250601 (2020). ShuChen2023PRB Y. Zeng, B. Zhou, and S. Chen, Dynamical singularity of the rate function for quench dynamics in finite-size quantum systems, Phys. Rev. B 107, 134302 (2023). Heyl2017PRB M. Heyl and J. C. Budich, Dynamical topological quantum phase transitions for mixed states, Phys. Rev. B 96, 180304(R) (2017). Bhattacharya2017PRB U. Bhattacharya, S. Bandyopadhyay, and A. Dutta, Mixed state dynamical quantum phase transitions, Phys. Rev. B 96, 180303(R) (2017). Abeling2016PRB N. O. Abeling and S. Kehrein, Quantum quench dynamics in the transverse field Ising model at nonzero temperatures, Phys. Rev. B 93, 104302 (2016). Sedlmayr2018PRB N. Sedlmayr, M. Fleischhauer, and J. Sirker, Fate of dynamical phase transitions at finite temperatures and in open systems, Phys. Rev. B 97, 045147 (2018). Halimeh2018PRL J. Lang, B. Frank, and J. C. Halimeh, Dynamical Quantum Phase Transitions: A Geometric Picture, Phys. Rev. Lett. 121, 130603 (2018). Halimeh2018PRB J. Lang, B. Frank, and J. C. Halimeh, Concurrence of dynamical phase transitions at finite temperature in the fully connected transverse-field Ising model, Phys. Rev. B 97, 174401 (2018). Yang2019PRB K. Yang, L. Zhou, W. Ma, X. Kong, P. Wang, X. Qin, X. Rong, Y. Wang, F. Shi, J. Gong, and J. Du, Floquet dynamical quantum phase transitions, Phys. Rev. B 100, 085308 (2019). Naji2022PRA J. Naji, M. Jafari, R. Jafari, and A. Akbari, Dissipative Floquet Dynamical Quantum Phase Transition, Phys. Rev. A 105, 022220 (2022). Jafari2021PRA R. Jafari and A. Akbari, Floquet dynamical phase transition and entanglement spectrum, Phys. Rev. A 103, 012204 (2021). Jafari2022PRB R. Jafari, A. Akbari, U. Mishra, and H. Johannesson, Floquet dynamical quantum phase transitions under synchronized periodic driving, Phys. Rev. B 105, 094311 (2022). Sharma2016PRB S. Sharma, U. Divakaran, A. Polkovnikov, and A. Dutta, Slow quenches in a quantum Ising chain: Dynamical phase transitions and topology, Phys. Rev. B 93, 144306 (2016). Shen2017PRL P. Jurcevic, H. Shen, P. Hauke, C. Maier, T. Brydges, C. Hempel, B. P. Lanyon, M. Heyl, R. Blatt, and C. F. Roos, Direct Observation of Dynamical Quantum Phase Transitions in an Interacting Many-Body System, Phys. Rev. Lett. 119, 080501 (2017). Zhang2017Nature J. Zhang, G. Pagano, P. W. Hess, A. Kyprianidis, P. Becker, H. Kaplan, A. V. Gorshkov, Z.-X. Gong, and C. 
Monroe, Observation of a many-body dynamical phase transition with a 53-qubit quantum simulator, Nature (London) 551, 601 (2017). Bernien2017Nature H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner, V. Vuletic, and M. D. Lukin, Probing many-body dynamics on a 51-atom quantum simulator, Nature (London) 551, 579 (2017). Sengstock2018NatPhys N. Fläschner, D. Vogel, M. Tarnowski, B. S. Rem, D.-S. Lühmann, M. Heyl, J. C. Budich, L. Mathey, K. Sengstock, and C. Weitenberg, Observation of dynamical vortices after quenches in a system with topology, Nat. Phys. 14, 265 (2018). HengFan2019PRAppl X.-Y. Guo, C. Yang, Y. Zeng, Y. Peng, H.-K. Li, H. Deng, Y.-R. Jin, S. Chen, D. Zheng, and H. Fan, Observation of a Dynamical Quantum Phase Transition by a Superconducting Qubit Simulation, Phys. Rev. Appl. 11, 044080 (2019). JiangfengDu2019PRB T. Tian, Y. Ke, L. Zhang, S. Lin, Z. Shi, P. Huang, C. Lee, and J. Du, Observation of dynamical phase transitions in a topological nanomechanical system, Phys. Rev. B 100, 024310 (2019). PengXue2019PRL K. Wang, X. Qiu, L. Xiao, X. Zhan, Z. Bian, W. Yi, and P. Xue, Simulating Dynamic Quantum Phase Transitions in Photonic Quantum Walks, Phys. Rev. Lett. 122, 020501 (2019). GuangcanGuo2020LSA X.-Y. Xu, Q.-Q. Wang, M. Heyl, J. C. Budich, W.-W. Pan, Z. Chen, M. Jan, K. Sun, J.-S. Xu, Y.-J. Han, C.-F. Li, and G.-C. Guo, Measuring a dynamical topological order parameter in quantum walks, Light Sci. Appl. 9, 7 (2020). Mondal2022PRB D. Mondal and T. Nag, Anomaly in dynamical quantum phase transition in non-Hermitian system with extended gapless phases, Phys. Rev. B 106, 054308 (2022). Zhou2018PRA L. Zhou, Q.-H. Wang, H. Wang, and J. Gong, Dynamical quantum phase transitions in non-Hermitian lattices, Phys. Rev. A 98, 022129 (2018). Zhou2021NJP L. Zhou and Q. Du, Non-Hermitian topological phases and dynamical quantum phase transitions: a generic connection, New J. Phys. 23, 063041 (2021). Mondal2022arxiv D. Mondal and T. Nag, Finite temperature dynamical quantum phase transition in a non-Hermitian system, arXiv:2212.05839. Sun2022FP G. Sun, J.-C. Tang, and S.-P. Kou, Biorthogonal quantum criticality in non-Hermitian many-body systems, Front. Phys. 17, 33502 (2022). Tang2022EPL J.-C. Tang, S.-P. Kou, and G. Sun, Dynamical scaling of Loschmidt echo in non-Hermitian systems, Europhys. Lett. 137, 40001 (2022). Vajna2015PRB S. Vajna and B. Dóra, Topological classification of dynamical phase transitions, Phys. Rev. B 91, 155127 (2015). Yin2018PRA C. Yin, H. Jiang, L. Li, R. Lü, and S. Chen, Geometrical meaning of winding number and its characterization of topological phases in one-dimensional chiral non-Hermitian systems, Phys. Rev. A 97, 052115 (2018). Ding2020PRBR C. Ding, Dynamical quantum phase transition from a critical quantum quench, Phys. Rev. B 102, 060409(R) (2020). Fisher1967RPP M. E. Fisher, The theory of equilibrium critical phenomena, Rep. Prog. Phys. 30, 615 (1967). Yang1952PR C. N. Yang and T. D. Lee, Statistical Theory of Equations of State and Phase Transitions. I. Theory of Condensation, Phys. Rev. 87, 404 (1952). Lee1952PR T. D. Lee and C. N. Yang, Statistical Theory of Equations of State and Phase Transitions. II. Lattice Gas and Ising Model, Phys. Rev. 87, 410 (1952). Zakrzewski2022arXiv J. Zakrzewski, Dynamical quantum phase transitions from quantum optics perspective, arXiv:2204.09454.
http://arxiv.org/abs/2307.03346v1
20230707013851
Limit theorems for the site frequency spectrum of neutral mutations in an exponentially growing population
[ "Einar Bjarki Gunnarsson", "Kevin Leder", "Xuanming Zhang" ]
math.PR
[ "math.PR", "q-bio.PE", "60J85, 60F15, 92D25, 92B05" ]
Limit theorems for the site frequency spectrum of neutral mutations in an exponentially growing population Einar Bjarki Gunnarsson^1 Kevin Leder^2 Xuanming Zhang^2 ^1School of Mathematics, University of Minnesota, Twin Cities, MN 55455, USA. ^2Department of Industrial and Systems Engineering, University of Minnesota, Twin Cities, MN 55455, USA. ================================================================================================================================================================================================= The site frequency spectrum (SFS) is a widely used summary statistic of genomic data, offering a simple means of inferring the evolutionary history of a population. Motivated by recent evidence for the role of neutral evolution in cancer, we examine the SFS of neutral mutations in an exponentially growing population. Whereas recent work has focused on the mean behavior of the SFS in this scenario, here, we investigate the first-order asymptotics of the underlying stochastic process. Using branching process techniques, we show that the SFS of a Galton-Watson process evaluated at a fixed time converges almost surely to a random limit. We also show that the SFS evaluated at the stochastic time at which the population first reaches a certain size converges in probability to a constant. Finally, we illustrate how our results can be used to construct consistent estimators for the extinction probability and the effective mutation rate of a birth-death process. Keywords: Site frequency spectrum; Neutral evolution; Infinite sites model; Branching processes; Convergence of stochastic processes. MSC2020 Classification: 60J85, 60F15, 92D25, 92B05. § INTRODUCTION The site frequency spectrum (SFS) is a popular summary statistic of genomic data, recording the frequencies of mutations within a given population or population sample. For the case of a large constant-sized population and selectively neutral mutations, the SFS has given rise to several estimators of the rate of mutation accumulation within the population, and these estimators have formed the basis of many statistical tests of neutral evolution vs. evolution under selection <cit.>. In this way, the SFS has provided a simple means of understanding the rate and mode of evolution in a population using genomic data. Motivated by the uncontrolled growth of cancer cell populations, and the mounting evidence for the role of neutral evolution in cancer <cit.>, several authors have recently studied the SFS of neutral mutations in an exponentially growing population. Durrett <cit.> considered a supercritical birth-death process, in which cells live for an exponentially distributed time and then divide or die. He showed that in the large-time limit, the expected number of mutations found at a frequency ≥ f amongst cells with infinite lineage follows a 1/f power law with 0<f<1. 
Similar results were obtained by Bozic et al. <cit.> and in a deterministic setting by Williams et al. <cit.>. In the aforementioned work, Durrett also derived an approximation for the expected SFS of a small random sample taken from the population <cit.>. Further small sample results have been derived using both branching process and coalescence techniques and they have been compared with Durrett's result in <cit.>. In <cit.>, we derived exact expressions for the SFS of neutral mutations in a supercritical birth-death process, both for cells with infinite lineage and for the total cell population, evaluated either at a fixed time (fixed-time SFS) or at the stochastic time at which the population first reaches a given size (fixed-size SFS). More recently, the effect of selective mutations on the expected SFS has been investigated by Tung and Durrett <cit.> and Bonnet and Leman <cit.>. The latter work considers the setting of a drug-sensitive tumor which decays exponentially under treatment, with cells randomly acquiring resistance which enables them to grow exponentially under treatment. Whereas the aforementioned works have focused on the mean behavior of the SFS, here, we are interested in the asymptotic behavior of the underlying stochastic process. Using the framework of coalescent point processes, Lambert <cit.> derived a strong law of large numbers for the SFS of neutral mutations in a population sample, where the sample is ranked in such a way that coalescence times between consecutive individuals are i.i.d. Later works by Lambert <cit.>, Johnston <cit.> and Harris et al. <cit.> characterized the joint distribution of coalescence times for a uniformly drawn sample from a continuous-time Galton-Watson process. Building on these works, Johnson et al. <cit.> derived limit distributions for the total lengths of internal and external branches in the genealogical tree of a birth-death process. Schweinsberg and Shuai <cit.> extended this analysis to branches supporting exactly k leaves, which under a constant mutation rate characterizes the SFS of a uniformly drawn sample. For a supercritical birth-death process, the authors established both a weak law of large numbers and the asymptotic normality of branch lengths in the limit of a large sample, assuming that the sample is sufficiently small compared to the expected population size at the sampling time. In this work, instead of considering a sample from the population using coalescence techniques, we will investigate the first-order asymptotics for the SFS of the total population using branching process techniques. We establish results both for the fixed-time and fixed-size SFS under the infinite sites model of mutation, where each new mutation is assumed to be unique <cit.>. Cheek and Antal recently studied a finite sites model in <cit.> (see also <cit.>), where each genetic site is allowed to mutate back and forth between the four nucleotides A,C,G,T. With the understanding that a site is mutated if its nucleotide differs from the nucleotide of the initial individual, the authors investigated the SFS of a birth-death process stopped at a certain size, both for mutations observed in a certain number and in a certain fraction of individuals. They used a limiting regime where the population size is sent to infinity, mutation rate is sent to 0, and the number of genetic sites is sent to infinity. 
In contrast, we will assume a constant mutation rate under the infinite sites model (with no back mutations), and send either the fixed time or the fixed size at which the population is observed to infinity. Our results are derived for a supercritical Galton-Watson process in continuous time, where each individual acquires neutral mutations at a constant rate ν>0. Let Z_0(t) denote the size of the population at time t, λ>0 denote the net growth rate of the population, τ_N denote the time at which the population first reaches size N, and S_j(t) denote the number of mutations found in j ≥ 1 individuals at time t. Our main result, Theorem <ref>, characterizes the first-order behavior of e^-λ tS_j(t) as t →∞ (fixed-time result) and N^-1S_j(τ_N) as N →∞ (fixed-size result). To prove the fixed-time result, the key idea is to decompose (S_j(t))_t ≥ 0 into a difference of two increasing processes (S_j,+(t))_t ≥ 0 and (S_j,-(t))_t ≥ 0. These processes count the total number of instances that a mutation reaches and leaves frequency j, respectively, up until time t. Using the limiting behavior of Z_0(t) as t →∞, we construct large-time approximations for the two processes (S_j,+(t))_t ≥ 0 and (S_j,-(t))_t ≥ 0. We then establish exponential L^1 error bounds on these approximations, which imply convergence in probability. Finally, by adapting an argument of Harris (Theorem 21.1 of <cit.>), we use the exponential error bounds and the fact that (S_j,+(t))_t ≥ 0 and (S_j,-(t))_t ≥ 0 are increasing processes to show that e^-λ tS_j,+(t) and e^-λ t S_j,-(t) converge almost surely to their approximations. This in turn gives almost sure convergence for e^-λ tS_j(t) as t →∞. The fixed-size result is obtained by combining the fixed-time result with an approximation result for τ_N, given by Proposition <ref>. Since we are only able to establish the approximation for τ_N in probability, the result for N^-1 S_j(τ_N) as N →∞ is given in probability. Finally, we establish analogous fixed-time and fixed-size convergence results for M(t) = ∑_j=1^∞ S_j(t), the total number of mutations present at time t, in Proposition <ref>. All results are given conditional on nonextinction of the population. The rest of the paper is organized as follows. Section 2 introduces our branching process model and establishes the relevant notation. Section 3 presents our results, including explicit expressions for the birth-death process. Section 4 outlines the proof of the main result, Theorem <ref>. Section 5 constructs consistent estimators for the extinction probability and effective mutation rate of the birth-death process. Finally, the proofs of the remaining results can be found in Section 6. § MODEL §.§ Branching process model with neutral mutations We consider a Galton-Watson branching process (Z_0(t))_t ≥ 0, started with a single individual at time 0, Z_0(0)=1, where the lifetimes of individuals are exponentially distributed with mean 1/a>0. At the end of an individual's lifetime, it produces offspring according to the distribution (u_k)_k ≥ 0, where u_k is the probability that k offspring are produced. We define m := ∑_k=0^∞ ku_k as the mean number of offspring per death event and assume that the offspring distribution has a finite third moment, ∑_k=0^∞ k^3u_k<∞. Each individual, over its lifetime, accumulates neutral mutations at (exponential) rate ν>0. We assume the infinite sites model of mutation, where each new mutation is assumed to be unique. Throughout, we consider the case m>1 of a supercritical process. 
The net growth rate of the population is then λ=a(m-1)>0, with E[Z_0(t)]=e^λ t for t ≥ 0. We will be primarily interested in analyzing the process conditional on long-term survival of the population. We define the event of nonextinction of the population as Ω_∞ := {Z_0(t)>0 for all t > 0}. We also define the probability of eventual extinction as p := P(Ω_∞^c) = P(Z_0(t)=0 for some t>0), and the corresponding survival probability as q := P(Ω_∞). For N ≥ 1, we define τ_N as the time at which the population first reaches size N, τ_N := inf{t≥0: Z_0(t) ≥ N}, with the convention that inf∅ = ∞. Note that on Ω_∞, τ_N<∞ almost surely. Also note that if u_k>0 for some k>2, it is possible that Z_0(τ_N)>N. We finally define p_i,j(t) := P(Z_0(t)=j|Z_0(0)=i) as the probability of transitioning from i to j individuals in t time units. For the baseline case Z_0(0)=1, we simplify the notation to p_j(t) := p_1,j(t). §.§ Special case: Birth-death process An important special case is that of the birth-death process, where u_2 > u_0 ≥ 0 and u_0+u_2=1. In this process, an individual at the end of its lifetime either dies without producing offspring or produces two offspring. At each death event, the population therefore either reduces or increases in size by one individual. The birth-death process is for example relevant to the population dynamics of cancer cell populations (tumors) and bacteria. In this case, the probability of eventual extinction can be computed explicitly as p = u_0/u_2 and the survival probability as q = 1-u_0/u_2 <cit.>. Furthermore, the probability mass function j ↦ p_j(t) has an explicit expression for each t ≥ 0, which is given by expression (<ref>) in Section <ref>. This will enable us to derive explicit limits for the site frequency spectrum of the birth-death process, see Corollary <ref> in Section <ref>. §.§ Asymptotic behavior We note that (e^-λ tZ_0(t))_t ≥ 0 is a nonnegative martingale with respect to the natural filtration ℱ_t := σ(Z_0(s); s≤ t). Thus, there exists a random variable Y such that e^-λ tZ_0(t)→ Y almost surely as t→∞. By Theorem 2 in Section III.7 of <cit.>, Y is equal in distribution to pδ_0+qξ, where p and q are the extinction and survival probabilities of the population, respectively, δ_0 is a point mass at 0, and ξ is a random variable on (0,∞) with a strictly positive continuous density function and mean 1/q. Since we assume that the offspring distribution has a finite second moment, we know that E[(Z_0(t))^2] = O(e^2λ t) by Chapter III.4 of <cit.> or Lemma 5 of <cit.>, hence (e^-λ tZ_0(t))_t ≥ 0 is uniformly integrable and E[Y|ℱ_t]=e^-λ tZ_0(t). Based on the large-time approximation Z_0(t) ≈ Y e^λ t, for N ≥ 1, we define an approximation to the hitting time τ_N defined in (<ref>) as follows: t_N := inf{t≥0: Ye^λ t = N}. In Proposition <ref>, we show that conditional on Ω_∞, τ_N-t_N → 0 in probability as N →∞. §.§ Site frequency spectrum In the model, each individual accumulates neutral mutations at rate ν>0. For t>0, enumerate the mutations that occur up until time t as 1,…,N_t, and define M_t := {1,…,N_t} as the set of mutations generated up until time t. For i ∈ M_t and s ≤ t, let C^i(s) denote the number of individuals at time s that carry mutation i, with C^i(s) = 0 before mutation i occurs. The number of mutations present in j individuals at time t is then given by S_j(t) :=∑_i∈ℳ_t1_{C^i(t)=j}. The vector (S_j(t))_j ≥ 1 is the site frequency spectrum (SFS) of the neutral mutations at time t. We also define the total number of mutations present at time t as M(t) := ∑_j=1^∞ S_j(t).
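To make this setup concrete, the following minimal sketch (our own illustration; the function and parameter names are ours, not the authors') simulates the embedded jump chain of a birth-death process with neutral mutations under the infinite sites model and tabulates S_j and M at the hitting time τ_N.

```python
# Minimal sketch (not from the paper): simulate a birth-death process with neutral
# mutations under the infinite sites model and tabulate the site frequency spectrum
# S_j and the total number of mutations M when the population first reaches size N.
import random
from collections import Counter

def simulate_sfs(a=1.0, u0=0.2, u2=0.8, nu=1.0, N=1000, seed=1):
    """Return (S, M): S[j] = number of mutations carried by exactly j cells at the
    hitting time of size N; M = total number of mutations present at that time."""
    rng = random.Random(seed)
    cells = [set()]                          # one unmutated founder; each cell stores its mutation labels
    next_label = 0
    while 0 < len(cells) < N:
        i = rng.randrange(len(cells))        # cell hit by the next event
        if rng.random() < nu / (a + nu):     # mutation event (per-cell rate nu)
            cells[i].add(next_label)
            next_label += 1
        else:                                # death event (per-cell rate a): 0 or 2 offspring
            parent = cells.pop(i)
            if rng.random() < u2:
                cells.append(set(parent))
                cells.append(set(parent))
    if not cells:                            # lineage went extinct before reaching size N
        return Counter(), 0
    S = Counter()
    for label in range(next_label):
        j = sum(label in c for c in cells)
        if j > 0:
            S[j] += 1
    return S, sum(S.values())

S, M = Counter(), 0
seed = 0
while M == 0:                                # crude conditioning on nonextinction: retry seeds
    seed += 1
    S, M = simulate_sfs(seed=seed)
print(M, S[1], S[2], S[3])                   # S[j]/N can be compared with the limits derived below
```

Since only the state at the hitting time matters here, the sketch samples the order of events rather than their times; for the fixed-time SFS one would instead draw exponential waiting times with total rate (a+ν) times the current population size.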
The goal of this paper is to establish first-order limit theorems for S_j(t) and M(t), evaluated either at the fixed time t as t →∞ or at the random time τ_N as N →∞. § RESULTS §.§ General case Our main result, Theorem <ref>, provides large-time and large-size first-order asymptotics for the SFS conditional on nonextinction. For the fixed-time SFS, we establish almost sure convergence, while for the fixed-size SFS, we establish convergence in probability. A proof sketch is given in Section <ref> and the proof details are carried out in Sections <ref>–<ref>. * Conditional on Ω_∞, lim_t →∞ e^-λ t S_j(t) = ν Y ∫_0^∞ e^-λ s p_j(s) ds, j ≥ 1, almost surely. Equivalently, with r_N := (1/λ)log(q N), X := qY and E[X|Ω_∞]=1, lim_N →∞ N^-1 S_j(r_N) = ν X ∫_0^∞ e^-λ s p_j(s) ds, j ≥ 1, almost surely. * Conditional on Ω_∞, lim_N →∞ N^-1 S_j(τ_N) = ν∫_0^∞ e^-λ s p_j(s) ds, j ≥ 1, in probability. Section <ref> and Sections <ref>–<ref>. The main difference between the fixed-time result (<ref>) and the fixed-size result (<ref>) is that the limit in (<ref>) is a random variable while it is constant in (<ref>). The reason is that the population size at a large, fixed time t is dependent on the limiting random variable Y in e^-λ t Z_0(t) → Y, while the population size at time τ_N is always approximately N. In expression (<ref>), the fixed-time result is viewed at the time r_N defined so that lim_N →∞ N^-1 E[Z_0(r_N)|Ω_∞] = 1. The point is to show that when the result in (<ref>) is viewed at a fixed time comparable to τ_N, the mean of the limiting random variable becomes equal to the fixed-size limit in (<ref>). To establish the fixed-size result (<ref>), we prove a secondary approximation result for the hitting time τ_N defined in (<ref>). The result, stated as Proposition <ref>, shows that conditional on Ω_∞, τ_N is equal to the approximation t_N defined in (<ref>) up to an error that vanishes in probability. The proof involves relatively simple calculations, given in Section <ref>. For any ε>0, lim_N→∞P(|τ_N-t_N|>ε|Ω_∞)=0. Section <ref>. The proof of the fixed-size result (<ref>) combines the fixed-time result (<ref>) with Proposition <ref>, as is discussed in Section <ref>. Since we are only able to establish the approximation for τ_N in probability, the fixed-size result (<ref>) is given in probability. An almost sure version of Proposition <ref> would immediately imply an almost sure version of (<ref>). Finally, a simpler version of the argument used to prove Theorem <ref> can be used to prove analogous limit theorems for the total number of mutations at time t, M(t). * Conditional on Ω_∞, lim_t →∞ e^-λ t M(t) = ν Y ∫_0^∞ e^-λ s (1-p_0(s)) ds, almost surely. * Conditional on Ω_∞, lim_N →∞ N^-1 M(τ_N) = ν∫_0^∞ e^-λ s (1-p_0(s)) ds, in probability. Section <ref>. By combining the results of Theorem <ref> and Proposition <ref>, we obtain the following limits for the proportion of mutations found in j ≥ 1 individuals: lim_t →∞S_j(t)/M(t) = lim_N →∞S_j(τ_N)/M(τ_N)= ∫_0^∞ e^-λ s p_j(s) ds/∫_0^∞ e^-λ s (1-p_0(s)) ds, j ≥ 1, where the fixed-time limit applies almost surely and the fixed-size limit in probability. In the application Section <ref>, we will also be interested in the proportion of mutations found in j ≥ 1 individuals out of all mutations found in ≥ j individuals.
If we define M_j(t) := ∑_k ≥ j S_k(t), j ≥ 1, t ≥ 0, as the total number of mutations found in ≥ j individuals, this proportion is given by lim_t →∞S_j(t)/M_j(t) = lim_N →∞S_j(τ_N)/M_j(τ_N)= ∫_0^∞ e^-λ s p_j(s) ds/∫_0^∞ e^-λ s(∑_k=j^∞ p_k(s)) ds, j ≥ 1, since limit theorems for M_j(t) follow from Theorem <ref> and Proposition <ref> by writing M_j(t) = M(t) - ∑_k=1^j-1 S_k(t). Note that for both proportions, the fixed-time and fixed-size limits are the same, as the variability in population size at a fixed time has been removed. Also note that both proportions are independent of the mutation rate ν. In Section <ref>, we show that for the birth-death process, these properties enable us to define a consistent estimator for the extinction probability p which applies both to the fixed-time and fixed-size SFS. §.§ Special case: Birth-death process For the special case of the birth-death process, we are able to derive explicit expressions for the limits in Theorem <ref> and Proposition <ref>, as we demonstrate in the following corollary. For the birth-death process, conditional on Ω_∞, * the random variable Y in Theorem <ref> has the exponential distribution with mean 1/q, and the fixed-time result (<ref>) can be written explicitly as lim_t →∞ e^-λ t S_j(t) = ν q Y/λ∫_0^1 (1-py)^-1 (1-y) y^j-1 dy = ν q Y/λ∑_k=0^∞p^k/(j+k)(j+k+1), j≥ 1. For the special case p=0 of a pure-birth or Yule process, lim_t →∞ e^-λ t S_j(t) = ν Y/λ·1/j(j+1). * the fixed-size result (<ref>) can be written explicitly as lim_N →∞ N^-1 S_j(τ_N) = ν q/λ∫_0^1 (1-py)^-1 (1-y) y^j-1 dy = ν q/λ∑_k=0^∞p^k/(j+k)(j+k+1), j≥ 1. For the pure-birth or Yule process, lim_N →∞ N^-1 S_j(τ_N) = ν/λ·1/j(j+1). * the fixed-time result (<ref>) can be written explicitly as lim_t →∞ e^-λ t M(t) = ν Y/λ for p=0, and -ν q log(q) Y/(λ p) for 0<p<1. * the fixed-size result (<ref>) can be written explicitly as lim_N →∞ N^-1 M(τ_N) = ν/λ for p=0, and -ν q log(q)/(λ p) for 0<p<1. Section <ref>. Similarly, the proportion of mutations found in j ≥ 1 individuals, appearing in expression (<ref>), can be written explicitly as ∫_0^∞ e^-λ s p_j(s) ds/∫_0^∞ e^-λ s (1-p_0(s)) ds = 1/j(j+1) for p=0, and -p/log(q)∫_0^1 (1-p y)^-1 (1-y) y^j-1 dy for 0<p<1, and the proportion of mutations in j individuals out of all mutations in ≥ j individuals, appearing in expression (<ref>), can be written as φ_j(p) := ∫_0^∞ e^-λ s p_j(s) ds/∫_0^∞ e^-λ s(∑_k=j^∞ p_k(s)) ds = 1/(j+1) for p=0, and 1-∫_0^1 (1-p y)^-1 y^j dy/∫_0^1 (1-py)^-1 y^j-1 dy for 0<p<1, see Section <ref>. Note that expressions (<ref>) and (<ref>) give the same proportion for j=1. It can be shown that for any j ≥ 1, φ_j(p) is strictly decreasing in p (Section <ref>). In Section <ref>, we use this fact to develop an estimator for the extinction probability p. We showed in expression (C.1) of <cit.> that for p=0, E[S_j(τ_N)] = ν N/λ·1/j(j+1), j=2,…,N-1. In other words, the fixed-size result (<ref>) holds in the mean even for finite values of N, excluding boundary effects at j=1 and j=N. § PROOF OF THEOREM <REF> In this section, we sketch the proof of the main result, Theorem <ref>. Proving the fixed-time result (<ref>) represents most of the work, which is discussed in Sections <ref> to <ref>. The main idea is to write the site-frequency spectrum process (S_j(t))_t ≥ 0 as a difference of two increasing processes in time, and to prove limit theorems for the increasing processes. The fixed-size result (<ref>) follows easily from the fixed-time result (<ref>) and Proposition <ref> via the continuous mapping theorem, as is discussed in Section <ref>.
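Before turning to the details of the proof, the closed-form birth-death limits stated in the corollary above are easy to evaluate numerically; the following sketch (our own, with arbitrary parameter values, not from the paper) checks that the series and integral representations agree.

```python
# Our own numerical check of the closed-form fixed-size limits for the birth-death process:
#   lim_N N^{-1} S_j(tau_N) = (nu*q/lam) * int_0^1 (1-p*y)^{-1} (1-y) y^{j-1} dy
#                           = (nu*q/lam) * sum_{k>=0} p^k / ((j+k)(j+k+1)).
from scipy.integrate import quad

def sfs_limit(j, p, nu, lam, terms=400):
    q = 1.0 - p
    series = sum(p**k / ((j + k) * (j + k + 1)) for k in range(terms))
    integral, _ = quad(lambda y: (1.0 - y) * y**(j - 1) / (1.0 - p * y), 0.0, 1.0)
    return nu * q / lam * series, nu * q / lam * integral

for j in (1, 2, 5):
    s, i = sfs_limit(j, p=0.25, nu=1.0, lam=0.6)
    print(j, round(s, 8), round(i, 8))       # the two representations agree
# For the Yule case p=0 the limit reduces to (nu/lam) * 1/(j(j+1)).
```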
§.§ Decomposition into increasing processes S_j,+(t) and S_j,-(t) Fix j ≥ 1. The key idea of the proof of the fixed-time result (<ref>) is to decompose the process (S_j(t))_t ≥ 0 into a difference of two increasing processes (S_j,+(t))_t ≥ 0 and (S_j,-(t))_t ≥ 0. To describe these processes, we first need to establish some notation. Recall that for mutation i ∈ M_t and s ≤ t, C^i(s) is the size of the clone containing mutation i at time s, meaning the number of individuals carrying mutation i at time s. Set τ_j,-^i(0) := 0 and define recursively for k ≥ 1, τ_j,+^i(k) := inf{s>τ_j,-^i(k-1) : C^i(s) = j}, τ_j,-^i(k) := inf{s>τ_j,+^i(k) : C^i(s) ≠ j}. Note that τ_j,+^i(k) is the k-th time at which the clone containing mutation i reaches or enters size j, and τ_j,-^i(k) is the k-th time at which it leaves or exits size j. Next, define I_j,+^i(t) := ∑_ℓ=1^∞ 1_{τ_j,+^i(ℓ) ≤ t}, I_j,-^i(t) := ∑_ℓ=1^∞ 1_{τ_j,-^i(ℓ) ≤ t}, as the number of times the clone containing mutation i enters and exits size j, respectively, up until time t. Then, for each k ≥ 1, define the increasing processes (S_j,+^k(t))_t ≥ 0 and (S_j,-^k(t))_t ≥ 0 by S_j,+^k(t) := ∑_i ∈ M_t 1_{I_j,+^i(t)≥ k}, S_j,-^k(t) := ∑_i ∈ M_t 1_{I_j,-^i(t) ≥ k}. These processes keep track of the number of mutations in M_t whose clones enter and exit size j, respectively, at least k times up until time t. We can now finally define the increasing processes (S_j,+(t))_t ≥ 0 and (S_j,-(t))_t ≥ 0 as S_j,+(t) := ∑_k=1^∞ S_j,+^k(t), S_j,-(t) := ∑_k=1^∞ S_j,-^k(t). A key observation is that these processes count the total number of instances that a mutation enters and exits size j, respectively, up until time t. To see why, note that ∑_k=1^∞ S_j,+^k(t) = ∑_i ∈ M_t∑_k=1^∞ 1_{I_j,+^i(t)≥ k} = ∑_i ∈ M_t∑_k=1^∞∑_ℓ=k^∞ 1_{I_j,+^i(t)=ℓ} = ∑_i ∈ M_t∑_ℓ=1^∞∑_k=1^ℓ 1_{I_j,+^i(t)=ℓ} = ∑_i ∈ M_t∑_ℓ=1^∞ℓ 1_{I_j,+^i(t)=ℓ} = ∑_i ∈ M_t I_j,+^i(t). Similar calculations hold for ∑_k=1^∞ S_j,-^k(t). Note that I_j,+^i(t) - I_j,-^i(t) = 1 if and only if C^i(t)=j, and I_j,+^i(t) - I_j,-^i(t)=0 otherwise. It follows that S_j(t) = S_j,+(t) - S_j,-(t). The fixed-time result (<ref>) will follow from limit theorems for S_j,+(t) and S_j,-(t), which in turn follow from approximation results for the subprocesses S_j,+^k(t) and S_j,-^k(t) for k ≥ 1. §.§ Approximation results for S_j,+^k(t) and S_j,-^k(t) We begin by establishing approximation results for S_j,+^k(t) and S_j,-^k(t) for each k ≥ 1. First, for the branching process (Z_0(t))_t ≥ 0 with Z_0(0)=1, set τ_j^-(0) := 0 and define recursively τ_j^+(k) := inf{s>τ_j^-(k-1): Z_0(s) = j}, τ_j^-(k) := inf{s>τ_j^+(k): Z_0(s) ≠ j}, k ≥ 1. Set p_j,+^k(t) := P(τ_j^+(k) ≤ t), p_j,-^k(t) := P(τ_j^-(k) ≤ t), which are the probabilities that the branching process enters and exits size j, respectively, at least k times up until time t. A key observation is that p_j(t)=P(Z_0(t)=j)=∑_k=1^∞(p_j,+^k(t)-p_j,-^k(t)), which follows from the fact that {Z_0(t)=j} = ⋃_k≥ 1{τ_j^+(k)≤ t, τ_j^-(k)>t} =⋃_k≥ 1{τ_j^+(k)≤ t}\{τ_j^-(k)≤ t}. In addition, we note that since almost surely, Z_0(t) → 0 or Z_0(t) →∞ as t →∞, there exists 0<θ<1 so that for each t ≥ 0, p_j,-^k(t) ≤ p_j,+^k(t) ≤ P(τ_j^+(k) < ∞) ≤θ^k. The approximation results for S_j,+^k(t) and S_j,-^k(t) can be established using almost identical arguments, so if suffices to analyze S_j,+^k(t). Recall that S_j,+^k(t) is the number of mutations whose clones enter size j at least k times up until time t. 
At any time s ≤ t, a mutation occurs at rate ν Z_0(s), and with probability p_j,+^k(t-s), its clone enters size j at least k times up until time t. This suggests the approximation S_j,+^k(t) ≈ν∫_0^t Z_0(s) p_j,+^k(t-s)ds =: S̅_j,+^k(t). Since e^-λ tZ_0(t) → Y as t →∞, we can further approximate for large t, S̅_j,+^k(t) ≈ν∫_0^t Y e^λ s p_j,+^k(t-s)ds =: Ŝ_j,+^k(t). For the remainder of the section, our goal is to establish bounds on the L^1-error associated with the approximations S_j,+^k(t) ≈S̅_j,+^k(t) ≈Ŝ_j,+^k(t). We first consider the approximation (<ref>). For Δ>0, define the Riemann sum S̅_j,+,Δ^k(t) := νΔ∑_ℓ=0^⌊ t/Δ⌋ Z_0(ℓΔ) p_j,+^k(t-ℓΔ). Clearly, lim_Δ→ 0S̅^k_j,+,Δ(t)=S̅^k_j,+(t) almost surely. In addition, for some C>0, S̅^k_j,+,Δ(t)≤ C t max_s ≤ t Z_0(s). Since (Z_0(s))_s ≥ 0 is a nonnegative submartingale, we can use Doob's inequality to show that C t E[max_s ≤ t Z_0(s)]<∞ for each t ≥ 0. Therefore, by dominated convergence, lim_Δ→ 0E|S̅^k_j,+,Δ(t)-S̅^k_j,+(t)|=0, t ≥ 0. It then follows from the triangle inequality that E|S_j,+^k(t)-S̅^k_j,+(t)| ≤lim_Δ→ 0E|S_j,+^k(t)-S̅^k_j,+,Δ(t)|, t ≥ 0. To bound the L^1-error of the approximation (<ref>), it therefore suffices to bound the right-hand side of (<ref>). We accomplish this in the following lemma. Let t>0 and Δ>0. There exists constants C_1>0 and C_2>0 independent of t, Δ and k such that E[(S^k_j,+(t)-S̅^k_j,+,Δ(t))^2]≤ C_1θ^kt^2e^λ t+C_2Δ e^3λ t. Section <ref>. We next turn to the approximation (<ref>). By the triangle inequality and the Cauchy-Schwarz inequality, we can write E |S̅^k_j,+(t)-Ŝ^k_j,+(t) | ≤ν∫_0^tE|Ye^λ s-Z_0(s)| p^k_j,+(t-s)ds ≤ν∫_0^t(E[(Ye^λ s-Z_0(s))^2])^1/2p^k_j,+(t-s)ds. By showing that E[(Ye^λ s-Z_0(s))^2] = C e^λ s for some C>0 and applying (<ref>), we can obtain the following bound on the L^1-error of the approximation (<ref>). E|S̅^k_j,+(t)-Ŝ^k_j,+(t)|=O(θ^ke^λ t/2). Section <ref>. Finally, from (<ref>), (<ref>) and (<ref>), it is straightforward to obtain a bound on the L^1-error of the approximation S_j,+^k(t) ≈Ŝ_j,+^k(t), which we state as Proposition <ref>. E|S_j,+^k(t)- Ŝ_j,+^k(t) | = O(θ^k/2 te^λ t/2). §.§ Limit theorems for S_j,+(t) and S_j,-(t) To establish limit theorems for S_j,+(t) and S_j,-(t), we define the approximations Ŝ_j,+(t) := ∑_k=1^∞Ŝ^k_j,+(t), Ŝ_j,-(t) := ∑_k=1^∞Ŝ^k_j,-(t). Focusing on the former approximation, we first argue that lim_t →∞ e^-λ tŜ_j,+(t) exists. Indeed, consider the following calculations for k ≥ 1 and t ≥ 0, where we use (<ref>): e^-λ t Ŝ_j,+^k(t) = ν e^-λ t ∫_0^t Ye^λ s p_j,+^k(t-s) ds = ν Y ∫_0^t e^-λ s p_j,+^k(s) ds ≤ν Y/λθ^k. The second equality shows that t ↦ e^-λ t Ŝ_j,+^k(t) is an increasing function, and the inequality shows that the function is bounded above by the summable sequence (ν Y / λ) θ^k. Therefore, t ↦ e^-λ tŜ_j,+(t) is increasing and bounded above, which implies that lim_t →∞ e^-λ tŜ_j,+(t) exists. The limit is given by lim_t →∞ e^-λ tŜ_j,+(t) = ν Y ∫_0^∞ e^-λ s(∑_k=1^∞ p_j,+^k(s)) ds. We next note that by the triangle inequality and Proposition <ref>, E|S_j,+(t)-Ŝ_j,+(t)| ≤∑_k=1^∞ E|S^k_j,+(t)-Ŝ^k_j,+(t)| = O(te^λ t/2), which implies that ∫_0^∞ e^-λ t E|S_j,+(t)-Ŝ_j,+(t)|dt < ∞. Combining (<ref>) with the fact that (S_j,+(t))_t ≥ 0 and (S_j,-(t))_t ≥ 0 are increasing processes, we can establish almost sure convergence results for e^-λ tS_j,+(t) and e^-λ tS_j,-(t). In the proof, we adapt an argument of Harris (Theorem 21.1 of <cit.>), with the L^1 condition (<ref>) replacing an analogous L^2 condition used by Harris. 
Conditional on Ω_∞, lim_t→∞ e^-λ t S_j,+(t) = ν Y ∫_0^∞ e^-λ s(∑_k=1^∞ p_j,+^k(s)) ds, lim_t→∞ e^-λ t S_j,-(t) = ν Y ∫_0^∞ e^-λ s(∑_k=1^∞ p_j,-^k(s)) ds, almost surely. Section <ref>. §.§ Proof of the fixed-time result (<ref>) To finish the proof of the fixed-time result (<ref>), it suffices to note that by (<ref>) and Proposition <ref>, lim_t →∞ e^-λ t(S_j,+(t)- S_j,-(t)) = ν Y ∫_0^∞ e^-λ s p_j(s) ds. Since S_j(t) = S_j,+(t)-S_j,-(t) by (<ref>), the result follows. §.§ Proof of the fixed-size result (<ref>) To prove the fixed-size result (<ref>), we note that by (<ref>), conditional on Ω_∞, lim_N →∞ e^-λτ_N S_j(τ_N) = ν Y ∫_0^∞ e^-λ s p_j(s) ds, almost surely. Since Ne^-λ t_N = Y by (<ref>), we also have lim_N →∞ e^-λ(τ_N-t_N)· N^-1 S_j(τ_N) = Y^-1lim_N →∞ e^-λτ_N S_j(τ_N) = ν∫_0^∞ e^-λ s p_j(s) ds, almost surely. By Proposition <ref> and the continuous mapping theorem, conditional on Ω_∞, lim_N →∞ e^-λ(τ_N-t_N) = 1, in probability. We can therefore conclude that conditional on Ω_∞, lim_N →∞ N^-1 S_j(τ_N) = ν∫_0^∞ e^-λ s p_j(s) ds, in probability, which is the desired result. § APPLICATION: ESTIMATION OF EXTINCTION PROBABILITY AND EFFECTIVE MUTATION RATE FOR BIRTH-DEATH PROCESS We conclude by briefly discussing how for the birth-death process, our results imply consistent estimators for the extinction probability p and the effective mutation rate ν/λ, given data on the SFS of all mutations found in the population. The estimator for p is based on the long-run proportion of mutations found in one individual. Recall that by (<ref>), this proportion is the same for the fixed-time and fixed-size SFS. By setting j=1 in (<ref>), the proportion can be written explicitly as (Section <ref>) φ_1(p) = 1/2 for p=0, and φ_1(p) = -(p+qlog(q))/(plog(q)) for 0<p<1, where we recall that q=1-p. The function φ_1(p) is strictly decreasing in p and it takes values in (0,1/2]. If in a given population the proportion of mutations found in one individual is observed to be x, we define an estimator p̂ for p by applying the inverse function of φ_1: p̂ = p̂(x) := φ_1^-1(x). Technically, φ_1^-1 is only defined on (0,1/2], whereas the random number x may take any value in [0,1]. This can be addressed by extending the definition of φ_1^-1 so that φ_1^-1(x) := φ_1^-1(1/2) = 0 for x>1/2 and φ_1^-1(0) := lim_x → 0^+φ_1^-1(x) = 1. Since φ_1^-1 so defined is continuous, we can combine (<ref>) and (<ref>) with the continuous mapping theorem to see that whether the SFS is observed at a fixed time or a fixed size, the estimator in (<ref>) is consistent in the sense that p̂→ p in probability as t →∞ or N →∞. In other words, if the population is sufficiently large, its site frequency spectrum can be used to obtain an arbitrarily accurate estimate of p. Then, using the total number of mutations and the current size of the population, an estimate for ν/λ can be derived from (<ref>) or (<ref>). We refer to Section 5 of <cit.> for a more detailed discussion of this estimator, which includes an application of the estimator to simulated data. In the preceding discussion, we focused on the proportion of mutations found in one individual for illustration purposes. The point was to show that it is possible to define a consistent estimator for p and ν/λ using the SFS. If it is difficult to measure the number of mutations found in one individual, one can instead focus on the proportion of mutations found in j cells out of all mutations found in ≥ j cells for some j>1, denoted by φ_j(p) in (<ref>).
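A minimal implementation sketch of the j=1 estimator just described is given below (our own illustration: the paper defines the estimator as the inverse of φ_1 but does not prescribe an implementation, so the bisection-based inversion and the function names are assumptions).

```python
# Sketch of the estimator \hat p = phi_1^{-1}(x), with x the observed proportion of
# mutations carried by exactly one individual. Bisection is used because phi_1 is
# strictly decreasing on (0,1); this is our choice, not the authors' implementation.
import math

def phi1(p):
    """Long-run proportion of mutations carried by exactly one individual."""
    if p == 0.0:
        return 0.5
    q = 1.0 - p
    return -(p + q * math.log(q)) / (p * math.log(q))

def estimate_p(x, tol=1e-10):
    """Invert phi_1 numerically, using the continuous extension described above."""
    if x >= 0.5:                 # extension: phi_1^{-1}(x) = 0 for x > 1/2
        return 0.0
    if x <= 0.0:                 # extension: phi_1^{-1}(0) = 1
        return 1.0
    lo, hi = 0.0, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi1(mid) > x:        # phi_1 still above the observed proportion, so p is larger
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: if 40% of the observed mutations are carried by a single cell,
print(round(estimate_p(0.40), 4))    # estimate of the extinction probability p
```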
As noted in Section <ref>, φ_j(p) is strictly decreasing in p for any j ≥ 1, and it takes values in (0,1/(j+1)]. We can therefore define a consistent estimator for p using the inverse function φ_j^-1. However, it should be noted that the range of φ_j(p) becomes narrower as j increases, which will likely affect the standard deviation of the estimator. § PROOFS §.§ Proof of Lemma <ref> Before considering the quantity of interest E[(S^k_j,+(t)-S̅^k_j,+,Δ(t))^2], we perform some preliminary calculations. Recall that ℳ_t is the set of mutations generated up until time t. For Δ>0 and any non-negative integer ℓ with ℓΔ < t, define A_ℓ,Δ to be the set of mutations created in the time interval [ℓΔ, min{(ℓ+1)Δ,t}), and note that ℳ_t=⋃_ℓ=0^⌊ t/Δ⌋A_ℓ,Δ. Define X_ℓ,Δ:=|A_ℓ,Δ| as the number of mutations created in [ℓΔ, min{(ℓ+1)Δ,t}). Note that conditional on F_(ℓ+1)Δ = σ(Z_0(s); s≤ (ℓ+1)Δ), X_ℓ,Δ∼ Poisson(ν∫_ℓΔ^(ℓ+1)ΔZ_0(s)ds). Using this fact, it is easy to see that E[X_ℓ,Δ| F_(ℓ+1)Δ] = ν∫_ℓΔ^(ℓ+1)ΔZ_0(s)ds = Δν Z_0(ℓΔ)(1+O(Δ)) and E[X_ℓ,Δ^2| F_(ℓ+1)Δ] - E[X_ℓ,Δ| F_(ℓ+1)Δ] = E[X_ℓ,Δ| F_(ℓ+1)Δ]^2, which implies E[X_ℓ,Δ^2] - E[X_ℓ,Δ] = Δ^2 ν^2 E[Z_0(ℓΔ)^2](1+O(Δ)). For ease of presentation, we will for the remainder of the proof drop 1+O(Δ) multiplicative factors in calculations, as they will not affect the final result. Recall that for a mutation i ∈ M_t, I_j,+^i(t) is the number of times the clone containing mutation i reaches size j up until time t, see (<ref>). Define W^k_ℓΔ,t(j):=∑_i∈ A_ℓ,Δ1_{I_j,+^i(t)≥ k} as the number of mutations in A_ℓ,Δ whose clone reaches size j at least k times up until time t. Note that by the definition of S_j,+^k(t) in (<ref>), S_j,+^k(t) = ∑_ℓ=0^⌊ t/Δ⌋ W^k_ℓΔ,t(j). For i∈ A_ℓ,Δ, P(I_j,+^i(t)≥ k)=p_j,+^k(t-Δℓ)+O(Δ), where p_j,+^k(t) is defined as in (<ref>). Therefore, conditional on X_ℓ,Δ, W^k_ℓΔ,t(j) is a binomial random variable with parameters X_ℓ,Δ and p^k_j,+(t-ℓΔ)+O(Δ). Dropping 1+O(Δ) factors, this implies by (<ref>), E[W^k_ℓΔ,t(j)| F_(ℓ+1)Δ] =E[E[W^k_ℓΔ,t(j)|X_ℓ,Δ, F_(ℓ+1)Δ]| F_(ℓ+1)Δ] =p^k_j,+(t-ℓΔ) E[X_ℓ,Δ| F_(ℓ+1)Δ] = Δν p^k_j,+(t-ℓΔ) Z_0(ℓΔ), and by (<ref>) and (<ref>), E[W^k_ℓΔ,t(j)^2] =p^k_j,+(t-ℓΔ)^2E[X_ℓ,Δ^2]+p^k_j,+(t-ℓΔ)(1-p^k_j,+(t-ℓΔ))E[X_ℓ,Δ] =p^k_j,+(t-ℓΔ)^2(E[X_ℓ,Δ^2]-E[X_ℓ,Δ])+p^k_j,+(t-ℓΔ)E[X_ℓ,Δ] = p^k_j,+(t-ℓΔ)^2Δ^2ν^2E[Z_0(ℓΔ)^2]+p^k_j,+(t-ℓΔ) Δν E[ Z_0(ℓΔ)]. We are now ready to begin the main calculations. First, note that by (<ref>) and (<ref>), E[(S^k_j,+(t)-S̅^k_j,+,Δ(t))^2] = E[(∑_ℓ=0^⌊ t/Δ⌋(νΔ Z_0(ℓΔ)p^k_j,+(t-ℓΔ)-W^k_ℓΔ,t(j)))^2] = ∑_ℓ_2=0^⌊ t/Δ⌋∑_ℓ_1=0^⌊ t/Δ⌋E[(νΔ Z_0(Δℓ_2)p^k_j,+(t-Δℓ_2)-W_ℓ_2Δ,t^k(j)). .(νΔ Z_0(Δℓ_1)p^k_j,+(t-Δℓ_1)-W^k_ℓ_1Δ,t(j))]. We first consider the diagonal terms in the double sum. Note first that by (<ref>), E[Z_0(ℓΔ) W^k_ℓΔ,t(j)] = Δν p_j,+^k(t-ℓΔ) E[Z_0(ℓΔ)^2], which implies by (<ref>), E[(νΔ Z_0(ℓΔ)p^k_j,+(t-Δℓ)-W_ℓΔ,t^k(j))^2] = ν^2Δ^2p^k_j,+(t-ℓΔ)^2E[Z_0(ℓΔ)^2]-2νΔ p^k_j,+(t-Δℓ)E[Z_0(ℓΔ)W^k_ℓΔ,t(j)]+E[W^k_ℓΔ,t(j)^2] = E[W^k_ℓΔ,t(j)^2]-ν^2Δ^2p^k_j,+(t-ℓΔ)^2E[Z_0(ℓΔ)^2] = νΔ p^k_j,+(t-ℓΔ)E[Z_0(ℓΔ)].
Next, we consider the cross terms for ℓ_1<ℓ_2: E[(νΔ Z_0(Δℓ_2)p^k_j,+(t-Δℓ_2)-W^k_ℓ_2Δ,t(j))(νΔ Z_0(Δℓ_1)p^k_j,+(t-Δℓ_1)-W^k_ℓ_1Δ,t(j))] = νΔ p^k_j,+(t-Δℓ_1)E[Z_0(Δℓ_1)(νΔ Z_0(Δℓ_2)p^k_j,+(t-Δℓ_2)-W^k_ℓ_2Δ,t(j))] - E[W^k_ℓ_1Δ,t(j)(νΔ Z_0(Δℓ_2)p^k_j,+(t-Δℓ_2)-W^k_ℓ_2Δ,t(j))] = E[W^k_ℓ_1Δ,t(j)(W^k_ℓ_2Δ,t(j)-νΔ Z_0(Δℓ_2)p^k_j,+(t-Δℓ_2))], where the final equality follows by combining (<ref>) with the fact that E[Z_0(Δℓ_1)(νΔ Z_0(Δℓ_2)p^k_j,+(t-Δℓ_2)-W^k_ℓ_2Δ,t(j))] =E[E[Z_0(Δℓ_1) (νΔ Z_0(Δℓ_2)p^k_j,+(t-Δℓ_2)-E[W^k_ℓ_2Δ,t(j)| F_(ℓ_2+1)Δ])| F_(ℓ_1+1)Δ]]. We can now rewrite (<ref>) as E[(S^k_j,+(t)-S̅^k_j,+,Δ(t))^2] =νΔ∑_ℓ=0^⌊ t/Δ⌋p^k_j,+(t-ℓΔ)E[Z_0(ℓΔ)] + 2∑_ℓ_1<ℓ_2 E[W^k_ℓ_1Δ,t(j)(W^k_ℓ_2Δ,t(j)-νΔ Z_0(Δℓ_2)p^k_j,+(t-Δℓ_2))]. The remainder of the proof will focus on bounding the off-diagonal terms E[W^k_ℓ_1Δ,t(j)(W^k_ℓ_2Δ,t(j)-νΔ Z_0(Δℓ_2)p^k_j,+(t-Δℓ_2))]. We begin with the following lemma, which shows that in the limit as Δ→ 0, we can ignore the possibility of multiple mutations in time intervals of length Δ. For ℓ_1<ℓ_2, Δ>0 and t>0, E[W^k_ℓ_2Δ,t(j)W^k_ℓ_1Δ,t(j)]=P(W^k_ℓ_2Δ,t(j)=1,W^k_ℓ_1Δ,t(j)=1)+O(e^λΔℓ_1e^2λΔℓ_2Δ^3), E[Z_0(ℓ_2Δ)W^k_ℓ_1Δ,t(j)]=E[Z_0(ℓ_2Δ);W^k_ℓ_1Δ,t(j)=1]+O(e^λΔℓ_1e^2λΔℓ_2Δ^2). Section <ref>. By Lemma <ref>, instead of (<ref>) we can study the simpler difference P(W^k_ℓ_1Δ,t(j)=1,W^k_ℓ_2Δ,t(j)=1)-νΔ p^k_j,+(t-Δℓ_2)E[Z_0(Δℓ_2);W^k_ℓ_1Δ,t(j)=1]. For ease of notation, define I_1(ℓ_1,ℓ_2):=P(W^k_ℓ_1Δ,t(j)=1,W^k_ℓ_2Δ,t(j)=1), I_2(ℓ_1,ℓ_2):=νΔ p^k_j,+(t-Δℓ_2)E[Z_0(Δℓ_2);W^k_ℓ_1Δ,t(j)=1]. In the following calculations, we will use twice that P(W^k_ℓ_1Δ,t(j)=1|Z_0(Δℓ_1)=n) = nνΔ p^k_j,+(t-Δℓ_1). First consider the I_2(ℓ_1,ℓ_2) term, I_2(ℓ_1,ℓ_2)/νΔ p^k_j,+(t-Δℓ_2) =E[Z_0(Δℓ_2);W^k_ℓ_1Δ,t(j)=1] =∑_m=1^∞mP(Z_0(Δℓ_2)=m,W^k_ℓ_1Δ,t(j)=1) = ∑_m=1^∞∑_n=1^∞ m P(Z_0(Δℓ_2)=m,W^k_ℓ_1Δ,t(j)=1,Z_0(Δℓ_1)=n) = ∑_m=1^∞∑_n=1^∞ mP(Z_0(Δℓ_2)=m| W^k_ℓ_1Δ,t(j)=1,Z_0(Δℓ_1)=n) · P(W^k_ℓ_1Δ,t(j)=1|Z_0(Δℓ_1)=n)P(Z_0(Δℓ_1)=n) = νΔ p^k_j,+(t-Δℓ_1)∑_n=1^∞nP(Z_0(Δℓ_1)=n) ·∑_m=1^∞ mP(Z_0(Δℓ_2)=m|W^k_ℓ_1Δ,t(j)=1,Z_0(Δℓ_1)=n). Next we consider the I_1(ℓ_1,ℓ_2) term, I_1(ℓ_1,ℓ_2)=P(W^k_ℓ_1Δ,t(j)=1,W^k_ℓ_2Δ,t(j)=1) = ∑_n=1^∞ P(W^k_ℓ_2Δ,t(j)=1|Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1) · P(W^k_ℓ_1Δ,t(j)=1|Z_0(Δℓ_1)=n)P(Z_0(Δℓ_1)=n) = νΔ p^k_j,+(t-Δℓ_1)∑_n=1^∞ nP(Z_0(Δℓ_1)=n)P(W^k_ℓ_2Δ,t(j)=1|Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1) = νΔ p^k_j,+(t-Δℓ_1)∑_n=1^∞ nP(Z_0(Δℓ_1)=n) ·∑_m=1^∞ P(W^k_ℓ_2Δ,t(j)=1|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1) · P(Z_0(Δℓ_2)=m|W^k_ℓ_1Δ,t(j)=1,Z_0(Δℓ_1)=n). We can therefore write I_1(ℓ_1,ℓ_2)-I_2(ℓ_1,ℓ_2) = νΔ p^k_j,+(t-Δℓ_1)∑_n=1^∞ nP(Z_0(Δℓ_1)=n) ·∑_m=1^∞ P(Z_0(Δℓ_2)=m|W^k_ℓ_1Δ,t(j)=1,Z_0(Δℓ_1)=n) ·(P(W^k_ℓ_2Δ,t(j)=1|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1). .- mνΔ p^k_j,+(t-Δℓ_2)). We can use (<ref>) to show that there exists a constant C>0 so that I_1(ℓ_1,ℓ_2)-I_2(ℓ_1,ℓ_2) ≤ CΔ^2θ^ke^λΔℓ_2, where θ is obtained from (<ref>). The proof is deferred to the following lemma. For ℓ_1<ℓ_2, Δ>0 and t>0, (<ref>) holds. Section <ref>. Returning to (<ref>), we can finally use Lemmas <ref> and <ref> to conclude that there exist positive constants C_1, C_2 and C_3 such that E[(S^k_j,+(t)-S̅^k_j,+,Δ(t))^2] =νΔ∑_ℓ=0^⌊ t/Δ⌋p^k_j,+(t-ℓΔ)E[Z_0(ℓΔ)] + 2∑_ℓ_1<ℓ_2(I_1(ℓ_1,ℓ_2)-I_2(ℓ_1,ℓ_2))+C_3Δ e^3λ t ≤ C_1 θ^kte^λ t+C_2θ^kt^2e^λ t+C_3Δ e^3λ t. This concludes the proof. §.§ Proof of Lemma <ref> Using that E[Y| F_s] = e^-λ sZ_0(s), see Section <ref>, we begin by writing E[(Ye^λ s-Z_0(s))^2] = E[Z_0(s)^2]-2e^λ sE[YZ_0(s)]+e^2λ sE[Y^2] = e^2λ sE[Y^2]-E[Z_0(s)^2]. 
From expression (5) of Chapter III.4 of <cit.>, we know there exist positive constants c_1 and c_2 such that E[Z_0(s)^2]=c_1e^2λ s-c_2e^λ s. If we establish that E[Y^2]=c_1, then it will follow that E[(Ye^λ t-Z_0(t))^2]=c_2e^λ t, which is what we need to prove Lemma <ref>. To this end, note that Theorem 1 of IV.11 in <cit.> implies that E[(Z_0(t)e^-λ t)^2]→ E[Y^2] as t→∞. And from (<ref>), we know that lim_t→∞e^-2λ tE[Z_0(t)^2]=c_1. Therefore, E[Y^2] = c_1, which concludes the proof. §.§ Proof of Proposition <ref> Since S_j,+(t) is increasing in t, e^-λ(t+τ)S_j,+(t+τ)≥ e^-λτe^-λ tS_j,+(t), t,τ≥ 0. In Section <ref>, it is shown that Ŝ := lim_t →∞ e^-λ tŜ_j,+(t) exists, and the limit is positive on Ω_∞ since Y>0, see (<ref>). Suppose there is an ω∈Ω_∞ such that the statement lim_t→∞e^-λ t(S_j,b(t)-Ŝ_j,b(t))=0 is not true and we first suppose lim sup_t→∞e^-λ tS_j,+(t,ω) > Ŝ(ω). For notational convenience, we will drop the ω in what follows. If (<ref>) is true, there is a δ>0 and a sequence of real numbers t_1<t_2<… such that t_i+1 - t_i>δ/λ(2+2δ) and e^-λ t_iS_j,+(t_i) > Ŝ(1+δ) for i = 1,2,…. Then e^-λ(t_i+τ)S_j,+(t_i+τ)≥ e^-λτe^-λ t_iS_j,+(t_i)≥ (1-λτ)Ŝ(1+δ). Also, there exists t_0 so that for t>t_0, e^-λ tŜ_j,+(t)< Ŝ(1+δ/2). Therefore, for t_i>t_0, ∫_t_i^t_i+1| e^-λ tS_j,+(t)-e^-λ tŜ_j,+(t)|dt ≥∫_t_i^t_i+δ/λ(2+2δ) | e^-λ tS_j,+(t)-e^-λ tŜ_j,+(t)|dt ≥∫_t_i^t_i+δ/λ(2+2δ)(e^-λ tS_j,+(t)-e^-λ tŜ_j,+(t))dt ≥Ŝ∫_0^δ/λ(2+2δ)( (1-λτ)(1+δ)-(1+δ/2))dτ = Ŝ·δ^2/8λ(1+δ), from which it follows that ∫_0^∞| e^-λ tS_j,+(t)-e^-λ tŜ_j,+(t)|dt = ∞. By (<ref>), we see that the inequality (<ref>) cannot hold on a set of positive probability. Now suppose that lim inf_t→∞e^-λ tS_j,+(t,ω) < Ŝ(ω) for some ω∈Ω_∞. Then there is a sequence of real numbers t_1<t_2<… with t_i+1-t_i>δ/λ(2-δ) and a real number 0< δ<1 such that e^-λ t_iS_j,+(t_i)<(1-δ)Ŝ. Therefore, e^-λ (t_i-τ)S_j,+(t_i-τ)≤ (1-δ)Ŝe^λτ≤(1-δ)Ŝ/1-λτ, 0≤τ < 1/λ. Also, there exists t_0 so that for t>t_0, e^-λ tŜ_j,+(t)> (1-δ/2)Ŝ. Therefore, ∫_t_i^t_i+1|e^-λ tS_j,+(t) - e^-λ tŜ_j,+(t)|dt ≥∫_t_i+1-δ/λ(2-δ)^t_i+1|e^-λ tS_j,+(t) - e^-λ tŜ_j,+(t)|dt ≥∫_t_i+1-δ/λ(2-δ)^t_i+1( e^-λ tŜ_j,+(t)-e^-λ tS_j,+(t))dt ≥Ŝ∫_0^δ/λ(2-δ)( (1-δ/2)-(1-δ)/(1-λτ))dτ = Ŝ(δ/2λ+1-δ/λlog2-2δ/2-δ), where we can verify that δ/2λ+1-δ/λlog2-2δ/2-δ>0 when δ<1. Hence ∫_0^∞| e^-λ tS_j,+(t)-e^-λ tŜ_j,+(t)|dt = ∞, which allows us to conclude that (<ref>) cannot hold on a set of positive probability. We can now conclude that on Ω_∞, lim_t →∞ e^-λ tS_j,+(t) = Ŝ almost surely, which is the desired result. §.§ Proof of Lemma <ref> We will only prove the first statement, the proof of the second statement being largely the same. To that end, it suffices to show that E[W^k_ℓ_2Δ,t(j)W^k_ℓ_1Δ,t(j); W^k_ℓ_2Δ,t(j)>1]+E[W^k_ℓ_2Δ,t(j)W^k_ℓ_1Δ,t(j); W^k_ℓ_1Δ,t(j)>1] =O(e^λΔℓ_1e^2λΔℓ_2Δ^3), with ℓ_1<ℓ_2. Again, we will only show that the first term satisfies the bound, the proof for the second term being largely the same. We first note that since W^k_ℓ_1Δ,t(j)≤ X_ℓ_1,Δ, E[W^k_ℓ_2Δ,t(j)W^k_ℓ_1Δ,t(j)1_{W^k_ℓ_2Δ,t(j)>1}] =E[E[W^k_ℓ_2Δ,t(j)W^k_ℓ_1Δ,t(j)1_{W^k_ℓ_2Δ,t(j)>1}|ℱ_Δ (ℓ_1+1)]] ≤ E[E[X_ℓ_1,ΔW^k_ℓ_2Δ,t(j)1_{W^k_ℓ_2Δ,t(j)>1}|ℱ_Δ (ℓ_1+1)]] = E[E[X_ℓ_1,Δ|ℱ_Δ (ℓ_1+1)]E[W^k_ℓ_2Δ,t(j)1_{W^k_ℓ_2Δ,t(j)>1}|ℱ_Δ (ℓ_1+1)]]. The final equality follows because the number of mutations created in the interval [Δℓ_1, Δℓ_1 +Δ) is independent of the number of mutations created in [Δℓ_2, Δℓ_2+Δ) and their fate, given the population size up until time Δ (ℓ_1+1). 
Therefore, using (<ref>), W^k_ℓ_2Δ,t(j)≤ X_ℓ_2,Δ and (<ref>), E[W^k_ℓ_2Δ,t(j)W^k_ℓ_1Δ,t(j)1_{W^k_ℓ_2Δ,t(j)>1}] ≤νΔ E[Z_0(Δℓ_1)E[E[W^k_ℓ_2Δ,t(j)1_{W^k_ℓ_2Δ,t(j)>1}|ℱ_Δ(ℓ_2+1)]|ℱ_Δ (ℓ_1+1)]] ≤νΔ E[Z_0(Δℓ_1)E[E[X_ℓ_2,Δ1_{X_ℓ_2,Δ>1}|ℱ_Δ(ℓ_2+1)]|ℱ_Δ (ℓ_1+1)]] ≤νΔ E[Z_0(Δℓ_1)E[E[X_ℓ_2,Δ(X_ℓ_2,Δ-1)|ℱ_Δ(ℓ_2+1)]|ℱ_Δ (ℓ_1+1)]] = ν^3 Δ^3 E[Z_0(Δℓ_1)E[Z_0(Δℓ_2)^2|ℱ_Δ (ℓ_1+1)]]. We then use that for s ≤ t, E[Z_0(t)^2|ℱ_s]=e^2λ(t-s)Z_0(s)^2+(Z_0(t-s))Z_0(s), E[Z_0(Δℓ_2)^2|ℱ_Δ (ℓ_1+1)] =e^2λΔ(ℓ_2-ℓ_1-1)Z_0(Δ (ℓ_1+1))^2+(Z_0(Δ(ℓ_2-ℓ_1-1)))Z_0(Δ (ℓ_1+1)) to conclude that E[W^k_ℓ_2Δ,t(j)W^k_ℓ_1Δ,t(j)1_{W^k_ℓ_2Δ,t(j)>1}] ≤ν^3Δ^3e^2λΔ(ℓ_2-ℓ_1-1)E[Z_0(Δℓ_1)Z_0(Δ (ℓ_1+1))^2] +ν^3Δ^3(Z_0(Δ(ℓ_2-ℓ_1-1)))E[Z_0(Δℓ_1)Z_0(Δ (ℓ_1+1))] = ν^3Δ^3e^2λΔ(ℓ_2-ℓ_1)E[Z_0(Δℓ_1)^3] + ν^3Δ^3e^2λΔ(ℓ_2-ℓ_1-1)(Z_0(Δ))E[Z_0(Δℓ_1)^2] + ν^3Δ^3(Z_0(Δ(ℓ_2-ℓ_1-1)))e^λΔE[Z_0(Δℓ_1)^2]. The desired result now follows from the assumption that the offspring distribution has a finite third moment and thus E[Z_0(t)^3]=O(e^3λ t) by Lemma 5 of <cit.>. §.§ Proof of Lemma <ref> Let ℓ be a positive integer and let s>0 such that ℓΔ+Δ< s. On the event {X_ℓ,Δ=1}, define D^j_ℓΔ(s) to be the number of disjoint intervals in [0,s] that the mutation at time ℓΔ is present in j individuals, and let B_ℓΔ(s) be the number of individuals alive at time s descended from the mutation at time ℓΔ. Note that P(W^k_ℓΔ,t(j)=1) = P(X_ℓ,Δ=1,D^j_ℓΔ(t)≥ k)(1+O(Δ)). On {X_ℓ_1,Δ=1,X_ℓ_2,Δ=1} with ℓ_1<ℓ_2, let A denote the event that the mutation at time ℓ_2Δ occurs in the clone started by the mutation at time ℓ_1Δ. We now consider the first term inside the parenthesis in (<ref>), and break it up based on the value of B_ℓ_1Δ(ℓ_2Δ) and whether A occurs or not. Once again, we refrain from writing 1+O(Δ) multiplicative factors. P(W^k_ℓ_2Δ,t(j)=1|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1) = ∑_i=1^mP(W^k_ℓ_2Δ,t(j)=1,B_ℓ_1Δ(ℓ_2Δ)=i|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1) = ∑_i=1^mP(W^k_ℓ_2Δ,t(j)=1,B_ℓ_1Δ(ℓ_2Δ)=i,A|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1) + ∑_i=1^mP(W^k_ℓ_2Δ,t(j)=1,B_ℓ_1Δ(ℓ_2Δ)=i,A^c|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1). Note that P(W^k_ℓ_2Δ,t(j)=1,B_ℓ_1Δ(ℓ_2Δ)=i,A|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1) = P(W^k_ℓ_2Δ,t(j)=1,A|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1,B_ℓ_1Δ(ℓ_2Δ)=i) · P(B_ℓ_1Δ(ℓ_2Δ)=i|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1) = P(W^k_ℓ_2Δ,t(j)=1,A,D^j_ℓ_1Δ(t)≥ k|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ=1,B_ℓ_1Δ(ℓ_2Δ)=i) ·P(B_ℓ_1Δ(ℓ_2Δ)=i|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1)/P(D^j_ℓ_1Δ(t)≥ k|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ=1,B_ℓ_1Δ(ℓ_2Δ)=i) ≤ P(W^k_ℓ_2Δ,t(j)=1,A|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ=1,B_ℓ_1Δ(ℓ_2Δ)=i) ·P(B_ℓ_1Δ(ℓ_2Δ)=i|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1)/P(D^j_ℓ_1Δ(t)≥ k|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ=1,B_ℓ_1Δ(ℓ_2Δ)=i), and P(W^k_ℓ_2Δ,t(j)=1,A|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ=1,B_ℓ_1Δ(ℓ_2Δ)=i) = i νΔ p^k_j,+(t-Δℓ_2). Also note that P(W^k_ℓ_2Δ,t(j)=1,B_ℓ_1Δ(ℓ_2Δ)=i,A^c|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1) = P(W^k_ℓ_2Δ,t(j)=1,A^c|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1,B_ℓ_1Δ(ℓ_2Δ)=i) · P(B_ℓ_1Δ(ℓ_2Δ)=i|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1) = (m-i)νΔ p^k_j,+(t-Δℓ_2) · P(B_ℓ_1Δ(ℓ_2Δ)=i|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1). It follows that P(W^k_ℓ_2Δ,t(j)=1|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1) ≤ν p^k_j,+(t-Δℓ_2) ·(∑_i=1^miΔP(B_ℓ_1Δ(ℓ_2Δ)=i|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1)/P(D^j_ℓ_1Δ(t)≥ k|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ = 1,B_ℓ_1Δ(ℓ_2Δ)=i). + . 
∑_i=1^m(m-i)Δ P(B_ℓ_1Δ(ℓ_2Δ)=i|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1)) ≤νΔ p^k_j,+(t-Δℓ_2) ·(m+ ∑_i=1^m i P(B_ℓ_1Δ(ℓ_2Δ)=i|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1)/P(D^j_ℓ_1Δ(t)≥ k|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ = 1,B_ℓ_1Δ(ℓ_2Δ)=i)). Going back to (<ref>), we can then derive the upper bound I_1(ℓ_1,ℓ_2)-I_2(ℓ_1,ℓ_2) ≤ν^2Δ^2 p^k_j,+(t-Δℓ_1)p^k_j,+(t-Δℓ_2) ∑_n=1^∞ nP(Z_0(Δℓ_1)=n) ·∑_m=1^∞ P(Z_0(Δℓ_2)=m|W^k_ℓ_1Δ,t(j)=1,Z_0(Δℓ_1)=n) ·∑_i=1^m i P(B_ℓ_1Δ(ℓ_2Δ)=i|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1)/P(D^j_ℓ_1Δ(t)≥ k|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ = 1,B_ℓ_1Δ(ℓ_2Δ)=i). Note that P(Z_0(Δℓ_2)=m|W^k_ℓ_1Δ,t(j)=1,Z_0(Δℓ_1)=n) · P(B_ℓ_1Δ(ℓ_2Δ)=i|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1) = P(B_ℓ_1Δ(ℓ_2Δ)=i,Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1)/ P(W^k_ℓ_1Δ,t(j)=1,Z_0(Δℓ_1)=n) and P(B_ℓ_1Δ(ℓ_2Δ)=i,Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1)/P(D^j_ℓ_1Δ(t)≥ k|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ = 1,B_ℓ_1Δ(ℓ_2Δ)=i) = P(B_ℓ_1Δ(ℓ_2Δ)=i,Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ=1,D^j_ℓ_1Δ(t)≥ k)/P(D^j_ℓ_1Δ(t)≥ k|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ = 1,B_ℓ_1Δ(ℓ_2Δ)=i) = P(Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ = 1,B_ℓ_1Δ(Δℓ_2)=i). It follows that P(Z_0(Δℓ_2)=m|W^k_ℓ_1Δ,t(j)=1,Z_0(Δℓ_1)=n) ·P(B_ℓ_1Δ(ℓ_2Δ)=i|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,W^k_ℓ_1Δ,t(j)=1)/P(D^j_ℓ_1Δ(t)≥ k|Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ = 1,B_ℓ_1Δ(ℓ_2Δ)=i) = P(Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ = 1,B_ℓ_1Δ(Δℓ_2)=i)/P(W^k_ℓ_1Δ,t(j)=1,Z_0(Δℓ_1)=n). Since P(Z_0(Δℓ_1)=n)/P(W^k_ℓ_1Δ,t(j)=1,Z_0(Δℓ_1)=n) = 1/P(W^k_ℓ_1Δ,t(j)=1|Z_0(Δℓ_1)=n) = 1/nνΔ p^k_j,+(t-Δℓ_1), we can write I_1(ℓ_1,ℓ_2)-I_2(ℓ_1,ℓ_2) ≤Δν p^k_j,+(t-Δℓ_2) ·∑_n=1^∞∑_m=1^∞∑_i=1^m i P(Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ = 1,B_ℓ_1Δ(Δℓ_2)=i). Now, P(Z_0(Δℓ_2)=m,Z_0(Δℓ_1)=n,X_ℓ_1,Δ = 1,B_ℓ_1Δ(Δℓ_2)=i) = P(Z_0(Δℓ_2)=m, B_ℓ_1Δ(Δℓ_2)=i| Z_0(Δℓ_1)=n,X_ℓ_1,Δ = 1) · P(X_ℓ_1,Δ=1|Z_0(Δℓ_1)=n) P(Z_0(Δℓ_1)=n) = nνΔ P(Z_0(Δℓ_1)=n) p_i(Δ (ℓ_2-ℓ_1)) p_n-1,m-i(Δ(ℓ_2-ℓ_1)), where we recall that p_n,m(t) = P(Z_0(t)=m|Z_0(0)=n) and p_m(t) = p_1,m(t). It follows that I_1(ℓ_1,ℓ_2)-I_2(ℓ_1,ℓ_2) ≤ν^2Δ^2 p^k_j,+(t-Δℓ_2) ∑_n=1^∞ nP(Z_0(Δℓ_1)=n) ∑_m=1^∞∑_i=1^m i p_n-1,m-i(Δ(ℓ_2-ℓ_1)) p_i(Δ (ℓ_2-ℓ_1)) = ν^2Δ^2 p^k_j,+(t-Δℓ_2) ∑_n=1^∞ nP(Z_0(Δℓ_1)=n)∑_i=1^∞ ip_i(Δ (ℓ_2-ℓ_1)) ∑_m=i^∞ p_n-1,m-i(Δ(ℓ_2-ℓ_1)) = ν^2Δ^2 p^k_j,+(t-Δℓ_2) ∑_n=1^∞ nP(Z_0(Δℓ_1)=n)∑_i=1^∞ ip_i(Δ (ℓ_2-ℓ_1)) = ν^2Δ^2 p^k_j,+(t-Δℓ_2)e^λΔℓ_2≤ν^2Δ^2 θ^k e^λΔℓ_2. This is the desired result. §.§ Proof of Proposition <ref> To begin with, define the extinction time of the branching process with Z_0(0)=1 as τ_0=inf{t>0:Z_0(t)=0}, and note that the extinction probability p=P(τ_0<∞) satisfies p∈ [0,1) by the assumption m>1. We want to prove that for any >0, lim_N→∞P(|τ_N-t_N|>|Ω_∞)=0, where τ_N and t_N are defined by (<ref>) and (<ref>), respectively. We begin by establishing a simple lower bound on τ_N for large N. For ρ∈ (0,1) define s_N(ρ):=ρ/λlog(N). Then P(τ_N<s_N(ρ)) = O(N^2(ρ-1)). Since m>1, we know that (Z_0(t))_t ≥ 0 is a submartingale. Therefore, P(τ_N<s_N(ρ))=P(sup_t≤ s_N(ρ)Z_0(t) ≥ N)≤1/N^2E[Z_0(s_N(ρ))^2]=O(N^2(ρ-1)). We next establish a simple result about the rate of convergence of e^-λ t Z_0(t) → Y. For z>0, lim_a→∞P(sup_t≥ a |Z_0(t)e^-λ t-Y|≥ zY,Ω_∞)=0. Fix a>0 and δ>0. On the event Ω_∞, Y is a random variable on (0,∞) with a strictly positive continuous density function, see (<ref>). Thus, there exists η>0 such that P(Y<η,Ω_∞)≤δ. We can therefore write P(sup_t≥ a |Z_0(t)e^-λ t-Y|≥ zY,Ω_∞) ≤δ + P(sup_t≥ a |Z_0(t)e^-λ t-Y|≥ zη,Ω_∞). 
For arbitrary b>a, we see from the triangle inequality that sup_a≤ t≤ b |Z_0(t)e^-λ t-Y |≤ |Z_0(a)e^-λ a-Y |+sup_a≤ t≤ b |Z_0(t)e^-λ t-Z_0(a)e^-λ a |. Thus, for z>0, P(sup_a≤ t≤ b |Z_0(t)e^-λ t-Y | ≥ zη,Ω_∞) ≤ P( |Z_0(a)e^-λ a-Y |≥ zη/2)+P(sup_a≤ t≤ b |Z_0(t)e^-λ t-Z_0(a)e^-λ a |≥ zη/2). Since by (<ref>), E[(Z_0(a)e^-λ a-Y)^2]=O(e^-λ a), E[(Z_0(a)e^-λ a-Z_0(b)e^-λ b)^2]=O(e^-λ a), Markov's and Doob's inequalities can be applied to (<ref>) to see that P(sup_a≤ t≤ b |Z_0(t)e^-λ t-Y | ≥ zη,Ω_∞)=O(e^-λ a/η^2 z^2). Since P(sup_t≥ a |Z_0(t)e^-λ t-Y |≥ zη,Ω_∞)=lim_b→∞P(sup_a≤ t≤ b |Z_0(t)e^-λ t-Y |≥ zη,Ω_∞), it follows that lim sup_a→∞P(sup_t≥ a |Z_0(t)e^-λ t-Y|≥ zY,Ω_∞)≤δ, and because δ is arbitrary the desired result follows. We are now ready to analyze the difference τ_N-t_N on Ω_∞. We first consider the case τ_N<t_N-ε. Define the difference function ω_0(t)=Z_0(t)-Y e^λ t. On Ω_∞, by the definition of t_N in (<ref>), Z_0(τ_N) =Ye^λτ_N+ω_0(τ_N) = Ne^λ(τ_N-t_N)+ω_0(τ_N), which implies for τ_N<t_N-ε, ω_0(τ_N)=N(1-e^λ(τ_N-t_N))+(Z_0(τ_N)-N) ≥ N(1-e^λ(τ_N-t_N))≥ N(1-e^-λε). Take 0<ρ<1. Applying Lemma <ref>, P(τ_N<t_N-ε,Ω_∞) ≤ P(ω_0(τ_N)≥ N(1-e^-λε),τ_N<t_N-ε,Ω_∞) ≤ P(τ_N ≤ s_N(ρ)) + P(ω_0(τ_N)≥ N(1-e^-λε),s_N(ρ)<τ_N<t_N-ε,Ω_∞) = O(N^2(ρ-1))+ P(ω_0(τ_N)≥ N(1-e^-λε),s_N(ρ)<τ_N<t_N-ε,Ω_∞). Thus we consider P(ω_0(τ_N)≥ N(1-e^-λε),s_N(ρ)<τ_N<t_N-ε,Ω_∞) ≤ P(sup_s_N(ρ)<t<t_N-ε(Z_0(t)-Ye^λ t)≥ N(1-e^-λε),Ω_∞) ≤ P(sup_s_N(ρ)<t<t_N-ε(Z_0(t)e^-λ t-Y)e^λ(t_N-ε)≥ N(1-e^-λε),Ω_∞) ≤ P(sup_s_N(ρ)<t(Z_0(t)e^-λ t-Y)≥ Y(e^λε-1),Ω_∞), where in the last step, we use the definition of t_N. We can now apply Lemma <ref> to get lim_N→∞ P(τ_N<t_N-ε,Ω_∞)=0. We next consider τ_N>t_N+ε. Note that on the event {τ_N>t_N+ε}∩Ω_∞, -ω_0(t_N+ε)=Ye^λ(t_N+ε)-Z_0(t_N+ε)=Ne^λε-Z_0(t_N+ε)≥ N(e^λε-1). Therefore, P(τ_N>t_N+ε,Ω_∞) ≤ P(Ye^λ(t_N+ε)-Z_0(t_N+ε)≥ N(e^λε-1),Ω_∞) = P(Y-Z_0(t_N+ε)e^-λ(t_N+ε)≥ Y(1-e^-λε),Ω_∞). Since P(t_N ≤1/2λlog(N),Ω_∞) = P(Y ≥√(N),Ω_∞) → 0 as N →∞, we can write P(Y-Z_0(t_N+ε)e^-λ(t_N+ε)≥ Y(1-e^-λε),Ω_∞) ≤ P(Y≥√(N))+P(sup_t>1/2λlog(N)(Y-e^-λ tZ_0(t)) ≥ Y(1-e^-λε),Ω_∞). We can then apply Lemma <ref> to get lim_N→∞ P(τ_N>t_N+ε,Ω_∞)=0, which concludes the proof. §.§ Proof of Proposition <ref> We use a similar argument to the proof of Theorem <ref>. First, we break the total number of mutations M(t) into M(t) = M_+(t) - M_-(t), where M_+(t) represents the total number of mutations generated up until time t, and M_-(t) represents the number of mutations which belong to M_+(t) but die out before time t. Obviously, these two processes are increasing in time. The limit theorems for M(t) will follow from limit theorems for M_+(t) and M_-(t). Because of the almost identical arguments, we will focus on the analysis of M_+(t). As in the proof of Theorem <ref>, we define the approximations M̂_+(t) :=ν∫_0^t Ye^λ sds and M̅_+(t) := ν∫_0^t Z_0(s)ds, as well as the Riemann sum approximation M̅_+,Δ(t) := νΔ∑_ℓ=0^⌊ t/Δ⌋ Z_0(ℓΔ) . Note that the only difference between (<ref>) and (<ref>) is the probability p_j,+^k(t-s) which does not appear in (<ref>). Therefore, we can simply follow the proofs of Lemmas <ref> and <ref> by replacing S^k_j,+(t), Ŝ^k_j,+(t), S̅^k_j,+(t), S̅^k_j,+,Δ(t) and θ with M_+(t), M̂_+(t), M̅_+(t), M̅_+,Δ(t) and 1, respectively, and we will get E|M_+(t)- M̂_+(t) | = O( te^λ t/2), which implies ∫_0^∞ e^-λ t E|M_+(t)-M̂_+(t)|dt < ∞. Note that lim_t→∞e^-λ tM̂_+(t) =ν Y /λ exists and M_+(t) is an increasing process.
By replacing the corresponding terms in the proof of Proposition <ref>, we can get lim_t→∞ e^-λ t M_+(t) =ν Y ∫_0^∞ e^-λ sds = ν Y /λ, almost surely. Similarly, lim_t→∞ e^-λ t M_-(t) = ν Y ∫_0^∞ e^-λ sp_0(s)ds, almost surely. The fixed-time result (<ref>) follows immediately from (<ref>) and (<ref>). Then, by following the proof in Section <ref>, we can get the fixed-size result (<ref>) for the total number of mutations, lim_N →∞ N^-1 M(τ_N) = ν∫_0^∞ e^-λ s (1-p_0(s)) ds, in probability. §.§ Proof of Corollary <ref> * For the birth-death process, we can write p_0(t) = p(e^λ t-1)/e^λ t-p, p_j(t) = q^2 e^λ t/(e^λ t-p)^2·(e^λ t-1/e^λ t-p)^j-1, j ≥ 1, see expression (B.1) in <cit.>. Therefore, for j ≥ 1, ∫_0^∞ e^-λ s p_j(s) ds = 1/λ∫_0^∞q^2 e^-λ s/(1-p e^-λ s)^2·(1-e^-λ s/1-p e^-λs)^j-1·λ e^-λ s ds. Using the substitution x := e^-λ s, dx = -λ e^-λ sds, we obtain ∫_0^∞ e^-λ s p_j(s) ds = q^2/λ∫_0^1x/(1-p x)^2·(1-x/1-p x)^j-1 dx. We again change variables, this time y := (1-x)/(1-p x), in which case [ x = (1-y)/(1-p y),; dx= -(q/(1-p y)^2)dy,; 1-p x = q/(1-p y). ] In addition, y = 1 for x=0 and y=0 for x=1, which implies ∫_0^∞ e^-λ s p_j(s) ds = q/λ∫_0^1 (1-p y)^-1 (1-y) y^j-1 dy. To get the sum representation in (<ref>), it suffices to note that ∫_0^1 (1-py)^-1 (1-y) y^j-1 dy = ∑_k=0^∞ p^k ( ∫_0^1 (1-y) y^j+k-1 dy) = ∑_k=0^∞p^k/(j+k)(j+k+1). To get the pure-birth process result, it suffices to note that p=0, q=1 and ∫_0^1 (1-y) y^j-1dy = 1/j(j+1). * Follows from the same calculations as in (1). * By (<ref>), for the birth-death process, 1-p_0(t) = (1-p)e^λ t/e^λ t-p = qe^λ t/e^λ t-p. Therefore, ∫_0^∞ e^-λ s (1-p_0(s)) ds = 1/λ∫_0^∞q/1-pe^-λ s·λ e^-λ s ds. Using the substitution x := e^-λ s, dx = -λ e^-λ sds, we obtain ∫_0^∞ e^-λ s (1-p_0(s)) ds = 1λ∫_0^1q/1-p x dx = 1λ, p=0, - q log(q)λ p, 0<p<1. * Follows from the same calculations as in (3). §.§ Derivation of expression (<ref>) By writing M_j(t) = M(t) - ∑_k=0^j-1 S_k(t), it follows from Corollary <ref> that conditional on Ω_∞, lim_t →∞ e^-λ t M_j(t) = ν q Y/λ∫_0^1 (1-py)^-1 (1-y) ∑_k=j^∞ y^k-1 dy = ν q Y/λ∫_0^1 (1-py)^-1 y^j-1 dy. Similarly, lim_N →∞ N^-1 M_j(τ_N) = ν q/λ∫_0^1 (1-py)^-1 y^j-1 dy. It follows that lim_t →∞S_j(t)/M_j(t) = lim_N →∞S_j(τ_N)/M_j(τ_N) = ∫_0^1 (1-p y)^-1 (1-y) y^j-1 dy/∫_0^1 (1-py)^-1 y^j-1 dy = 1-∫_0^1 (1-p y)^-1 y^j dy/∫_0^1 (1-py)^-1 y^j-1 dy =: φ_j(p). §.§ Proof that φ_j(p) is strictly decreasing Here, we show that for each j ≥ 1, φ_j(p) given by the last expression in Section <ref> is strictly decreasing in p. Set a := ( ∫_0^1 (1-p y)^-2 y^j+1 dy)(∫_0^1 (1-py)^-1 y^j-1 dy), b := (∫_0^1 (1-py)^-2 y^j dy) (∫_0^1 (1-p y)^-1 y^j dy). It suffices to show that a>b for each p ∈ (0,1). First, note that we can write a = ∫_0^1∫_0^1 (1-p y)^-2 y^j+1 (1-px)^-1 x^j-1 dy dx and b = ∫_0^1∫_0^1 (1-py)^-2 y^j (1-px)^-1 x^j dy dx, which implies a-b = ∫_0^1 ∫_0^1 (1-py)^-2(1-px)^-1y^jx^j-1(y-x)dydx = ∫_0^1 ∫_0^x (1-py)^-2(1-px)^-1y^jx^j-1(y-x)dydx + ∫_0^1 ∫_x^1 (1-py)^-2(1-px)^-1y^jx^j-1(y-x)dydx. The latter integral can be rewritten as follows: ∫_0^1 ∫_x^1 (1-py)^-2(1-px)^-1y^jx^j-1(y-x)dydx = ∫_0^1 ∫_0^y (1-py)^-2(1-px)^-1y^jx^j-1(y-x)dxdy = -∫_0^1 ∫_0^x (1-px)^-2(1-py)^-1x^jy^j-1(y-x)dydx which implies a-b = ∫_0^1 ∫_0^x (1-py)^-1(1-px)^-1y^j-1x^j-1(y-x)((1-py)^-1y-(1-px)^-1x)dydx. Since y/1-py - x/1-px = y-x/(1-py)(1-px), we can finally conclude that a-b = ∫_0^1 ∫_0^x (1-py)^-2(1-px)^-2y^j-1x^j-1(y-x)^2dydx > 0 for each p ∈ (0,1). 
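The monotonicity just established is also easy to corroborate numerically; the following sketch (our own, assuming scipy is available, not part of the paper) evaluates φ_j(p) on a grid of p values via the integral representation above.

```python
# Quick numerical corroboration of the monotonicity proof: evaluate
#   phi_j(p) = 1 - (int_0^1 y^j/(1-p*y) dy) / (int_0^1 y^{j-1}/(1-p*y) dy)
# on a grid of p values and check that it decreases in p.
from scipy.integrate import quad

def phi_j(j, p):
    num, _ = quad(lambda y: y**j / (1.0 - p * y), 0.0, 1.0)
    den, _ = quad(lambda y: y**(j - 1) / (1.0 - p * y), 0.0, 1.0)
    return 1.0 - num / den

for j in (1, 2, 5):
    values = [phi_j(j, p / 100.0) for p in range(1, 100)]
    assert all(a > b for a, b in zip(values, values[1:]))
    print(j, round(values[0], 4), round(values[-1], 4))   # near p=0 the value is about 1/(j+1)
```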
§.§ Derivation of expression (<ref>) To derive expression (<ref>) in the main text, we note that (1-py)^-1 = ∑_k=0^∞ (py)^k for 0 < p < 1 and 0 ≤ y ≤ 1, which implies ∫_0^1 (1-p y)^-1 (1-y) dy = ∑_k=0^∞ p^k ∫_0^1 y^k(1-y) dy = ∑_k=0^∞p^k/k+1 - ∑_k=0^∞p^k/k+2. Since ∑_k=1^∞x^k/k = -log(1-x), we obtain ∫_0^1 (1-p y)^-1 (1-y) dy = - log(q)/p - 1/p^2(-log(q) - p) = q/p^2log(q) + 1/p. Therefore, applying expression (<ref>), we can write for 0<p<1, φ_1(p) = -p/log(q)∫_0^1 (1-p y)^-1 (1-y) dy = -p+qlog(q)/plog(q). § ACKNOWLEDGMENTS EBG was supported in part by NSF grant CMMI-1552764, NIH grant R01 CA241137, funds from the Norwegian Centennial Chair grant and the Doctoral Dissertation Fellowship from the University of Minnesota. K. Leder was supported in part with funds from NSF award CMMI 2228034 and Research Council of Norway Grant 309273. ieeetr
http://arxiv.org/abs/2307.01530v1
20230704073353
Convolutional Transformer for Autonomous Recognition and Grading of Tomatoes Under Various Lighting, Occlusion, and Ripeness Conditions
[ "Asim Khan", "Taimur Hassan", "Muhammad Shafay", "Israa Fahmy", "Naoufel Werghi", "Lakmal Seneviratne", "Irfan Hussain" ]
cs.CV
[ "cs.CV", "cs.AI", "eess.IV" ]
§ INTRODUCTION Plants play a pivotal role in meeting global food demands. Among the most widely consumed vegetables are tomatoes, with annual production surpassing 180 million tons for the past seven years <cit.>. Commercially, tomatoes are typically harvested during the mature ripening stage. This practice is primarily due to their firmness, extended shelf-life, and the potential to turn red after being removed from the plant <cit.>. The decision to harvest at this stage is largely influenced by consumer preferences for fresh tomatoes, particularly their color and texture <cit.>, as well as the need to minimize potential damage during transportation and other supply chain-related activities. In academic research, the role of technology in optimizing agricultural practices is highly emphasized. A particular area of interest for scholars lies in the detection and classification of crops, where deep learning and image processing techniques are utilized. Furthermore, automation in agriculture can enhance the working conditions of farmers and agricultural workers, who often face musculoskeletal disorders. The introduction of robots for crop monitoring and harvesting has proven highly beneficial, leading to significant improvements in production profits. These benefits are realized through streamlining the harvesting process, enhancing crop quality and yield, and reducing labor costs. These advantages have spurred extensive research over the past few decades, particularly on the improvements and potential applications of robotic technology in agriculture. Whether referred to as "precision agriculture" or "low-impact farming", this approach forms an integral part of a broader shift within the agricultural industry. Additionally, advancements in computer vision can significantly enhance the agricultural sector by increasing efficiency and accuracy in various tasks, such as crop assessment and harvesting. Machine learning (ML) methodologies play a significant role in automating processes such as the categorization of plant diseases, fruit maturity grading, and automated harvesting methods <cit.>. ML tools aid in monitoring plant health and predicting potential abnormalities at early stages <cit.>. Over the years, various ML models have been developed, including Artificial Neural Networks and Support Vector Machines (SVM) <cit.>. With the advent of Deep Learning (DL), several new models such as VGG <cit.>, R-FCN <cit.>, Faster R-CNN <cit.>, and SSD <cit.> have been introduced, providing fundamental frameworks to perform object detection and recognition tasks. Some of these methodologies find application in agricultural automation systems, aiding in identifying and classifying crops and their diseases. Notably, the advent of DL has led to promising results and methods in the agricultural domain. Advancements in deep learning have made it possible to employ Convolutional Neural Networks (CNNs) in tasks such as fruit classification and yield estimation. For instance, Faster R-CNN <cit.> has been utilized for apple detection <cit.>, and YOLO has been applied to detect mangoes <cit.>. Sun et al.
<cit.> proposed an enhanced version of the Faster R-CNN model, which demonstrated improved performance in detecting and identifying various parts of tomatoes, achieving a mean average precision (mAP) score of 90.7% for the recognition of tomato flowers, unripened tomatoes, and ripe tomatoes. The optimized model exhibited a noteworthy reduction of approximately 79% in memory requirements, suggesting the use of memory optimization techniques, such as parameter reduction methods or model compression techniques. In another study, Liu et al. <cit.> proposed a novel tomato detection model based on YOLOv3 <cit.>. Their model, which utilized a new bounding mechanism instead of conventional rectangular bounding boxes, enhanced the F1 score by 65%. Zhifeng et al. <cit.> improved the YOLOv3-tiny model for ripe tomato identification, which achieved a 12% improvement over its conventional counterpart in terms of the F1 score. While detection models can identify and localize fruit regions within candidate scans, they often struggle to accurately capture the contours and shapes of the fruits. Segmentation methods can address this limitation by providing detailed information about fruit shapes and sizes through pixel-wise mask output. For instance, as demonstrated by Yu et al. <cit.>, the Mask R-CNN model was employed to successfully identify ripe strawberries, particularly those difficult to distinguish due to overlapping. Similarly, Kang et al. <cit.> employed the Mobile-DasNet model combined with a segmentation network to identify fruits, achieving accuracies of 90% and 82% for the respective tasks. § RELATED WORK In this section, we highlight recent advances in precision agriculture proposed to assist farmers in effectively increasing their crop production, with a particular emphasis on tomatoes <cit.>. To effectively organize the existing literature, we have categorized the methods into two groups: one group focuses on employing conventional techniques to enhance existing agricultural workflows, while the other group leverages modern computer vision schemes to enhance agricultural growth in terms of productivity, disease detection, and monitoring in natural farm environments <cit.> §.§ Traditional Methods in Precision Agriculture: Tomatoes are widely grown crops that have been the focus of many agricultural studies. Traditional approaches to improve tomato harvesting encompass various methods and principles for better management of these fruits against pests and diseases. These methods ultimately enhance the overall agricultural productivity. Moreover, the evolution of these methods over the years has refined the foundation of traditional agricultural practices. Some of the standard methods proposed to improve agricultural workflows include: Crop rotation is a strategic agricultural practice that involvinges the sequential cultivation of different crops across multiple seasons. Its purpose is to mitigate the negative impact of pests and diseases that specifically target certain crops, while simultaneously improving soil fertility and overall crop yield <cit.>. Intercropping is a farming technique that involves cultivating two or more crops together in the same field concurrently <cit.>. This method optimizes land utilization, promotes biodiversity, reduces the incidence of pests and diseases, and enhances soil fertility through nutrient complementarity. Conventional irrigation methods encompass various systems such as flood, furrow, and sprinkler irrigation. 
These systems ensure a regulated water supply to crops, facilitating their optimal growth and development <cit.> Furthermore, traditional agricultural practices have heavily relied on applying organic fertilizers, including crop residues, compost, and manure, to enhance soil fertility and provide essential nutrients to plants. These natural fertilizers contribute to long-term soil health and foster sustainable agricultural practices <cit.>. Mechanical tillage involves the use of ploughs, harrows, and other machinery to prepare the soil for planting <cit.>. It serves multiple purposes such as weed control, improvement of seedbed conditions, and incorporation of nutrients into the soil. However, it is essential to note that mechanical tillage can also result in soil erosion and degradation. Conventional pest and disease management methods predominantly rely on the application of chemical pesticides and fungicides to control insects, weeds, and plant diseases. These methods aim to safeguard crops from damage and promote optimal growth. However, concerns have been raised regarding their potential adverse impacts on the environment and human health <cit.>. Conventional breeding methods have been practiced for centuries to enhance crop varieties through selective breeding for desirable traits, including yield, disease resistance, and tolerance to environmental stresses. While these methods are still employed in certain regions, they are time-consuming and labor-intensive <cit.>. Consequently, conventional breeding methods often yield lower production and may affect the quality of harvested crops when compared to modern precision agriculture approaches. It is crucial to acknowledge the strengths and limitations of these conventional agricultural practices to explore opportunities for improvement and advancement in the field. §.§ Modern Computer Vision Methods for Precision Agriculture: Deep learning methods have recently attracted a lot of interest and have been increasingly utilized for precise identification of tomato diseases and growth monitoring. In a similar vein, CNNs have also been utilized for tomato fertilization and disease detection <cit.>. These methods, built upon neural networks, are used to analyze large-scale datasets and derive insightful patterns for the precise detection and monitoring of tomatoes. Sherafati et al. <cit.> proposed a framework for assessing the ripeness of tomatoes from RGB images. Sladojevic et al. <cit.> utilized transfer learning to detect and classify tomato diseases. They achieved accurate disease classification by fine-tuning a pre-trained CNN network using a tomato disease dataset. Zheng et al. <cit.> presented a YOLOv4 <cit.> detector to determine tomato ripeness. In contrast, Xu et al. <cit.> utilized Mask R-CNN <cit.> to differentiate between tomato stems and fruit. Rong et al. <cit.> presented a framework based on YOLACT++ <cit.> for tomato identification. However, this model was unable to determine the ripeness of the tomatoes due to the limited capability of the YOLACT++ framework in capturing and analyzing color and textural features indicative of tomato ripeness. The YOLACT++ model primarily focuses on instance segmentation and object detection tasks, without incorporating specific features or mechanisms to assess the ripeness of the tomatoes. As a result, the model's performance in accurately determining the ripeness level of the tomatoes was not satisfactory. 
Incorporating of semantic or instance segmentation models in agriculture has the potential to revolutionize the way crops are assessed and harvested. While segmentation tasks are intricate to perform, they offer the ability to identify objects and extract their semantic information at the pixel level. Such capabilities have become increasingly important for robots used in crop harvesting, where the first step is to detect, classify, and segment crops using computer vision methods <cit.>. For example, Liu et al. <cit.> employed UNet <cit.> to extract maize tassel. The authors achieved a high accuracy of 98.10% and demonstrated the potential of using semantic segmentation for plant phenotyping. Moreover, various studies have shown that the use of transformer models, such as ViT <cit.>, has mproved recognition of crops <cit.>. Likewise, transformers-based detection models have shown promising results towards leaf disease detection and assessing the appearance quality of crops such as strawberries <cit.>. Chen et al. <cit.> used a Swin transformer <cit.> for detecting and counting wine grape bunch clusters in a non-destructive and efficient manner.Remarkably, their proposed approach achieved high recognition accuracy even in partial occlusions and overlapping fruit clusters. Utilizing advanced computer vision techniques in agriculture can significantly enhance the effectiveness and precision of crop assessment and harvesting, ultimately boosting productivity and sustainability within the industry <cit.>. § MOTIVATION & CONTRIBUTIONS Targeted fruit harvesting refers to the selective picking of ripe fruits, a task that is complex due to the unpredictable nature of crops and outdoor conditions. A vital example of this complexity is seen with tomatoes. They are a staple food crop widely grown worldwide but present a unique segmentation challenge due to their occlusion with leaves and stems, making it difficult to determine their ripeness. To tackle this issue, a recent research study has been undertaken, presenting a novel framework for the real-time segmentation of tomatoes and determination of their maturity levels under diverse lighting and occlusion conditions. The primary objective of the authors is to automate the process of tomato harvesting, potentially resulting in enhanced efficiency and reduced agricultural expenses. In addition to improving the harvesting process, accurately assessing tomato ripeness at the pixel level could also have other benefits. For example, it may allow for more precise sorting and grading of tomatoes, resulting in higher-quality final products. This could be particularly important for producers who export their tomatoes to different markets, as quality standards vary widely among countries. In summary, this research has the potential to bring about a paradigm shift in the harvesting and grading of tomatoes, which could have profound implications for the agricultural sector. By enhancing productivity and implementing stringent quality control measures, farmers may have the opportunity to boost their profitability while satisfying the increasing market demand for premium, environmentally friendly agricultural products. The main contributions of this study are outlined below: * The proposed approach provides a modular feature extraction and decoding method that separates the segmentation architecture, commonly referred to as the "meta-architecture," as illustrated in Figure <ref>. 
* The proposed model possesses the capability to accurately detect and predict the presence of tomatoes in images captured under various lighting conditions, as well as in real-time images captured from indoor greenhouse farms. * The proposed model is constrained via the L_t loss function, which enables it to extract tomato regions from candidate scans that depict a wide variety of textural, contextual, and semantic differences. Moreover, the L_t loss function also ensures that the proposed model, at the inference stage, can objectively recognize the different maturity stages of the tomatoes, irrespective of the scan attributes, for their effective cultivation. * The proposed trained model is highly versatile and can be integrated into a mobile robot system designed for greenhouse farming. This integration would enable the robot to accurately detect and identify the maturity level of tomatoes in real time, which could significantly improve the efficiency and productivity of the farming process. This model could help farmers automate the tomato harvesting process, potentially saving time and resources and reducing the costs associated with manual harvesting. The integration of this model into a mobile robot system could revolutionize how tomatoes are grown and harvested in greenhouse farms, paving the way for more sustainable and cost-effective farming practices. The remainder of the paper is organized as follows: Section IV delivers an in-depth discussion of the proposed method. Section V offers insights into the datasets and experimental procedures utilized. Section VI covers the evaluation results, while Section VII delves into a detailed discussion of the proposed framework. Finally, Section VIII concludes the paper. § METHODOLOGY This paper presents a novel segmentation approach to extract and grade tomato maturity levels using RGB scans acquired under various lighting and occlusion conditions. By learning the textures of the tomato plant, the proposed framework isolates the critical attributes of the tomato fruit, such as its color, shape, and size. The block diagram of the proposed framework is shown in Figure <ref>, where we can observe that it is composed of encoder, transformer, and decoder blocks. Initially, the input scan is passed to the encoder and transformer blocks. At the encoder block, the latent feature representations are computed from the input scan using the residual and shape preservation blocks. Similarly, at the transformer end, the input scan is divided into n image patches, for which n positional embeddings are computed. These positional embeddings and the linear projections of the image patches are combined and passed to the t-layered transformer block to generate the projectional features via a contextual multi-head self-attention mechanism, in order to differentiate between different tomato grades. Finally, the decoder block removes extraneous elements through rescaling and max un-pooling operations, resulting in accurate segmentation and grading of tomato maturity levels. The subsequent sections provide a comprehensive overview of each block within the proposed framework: Transformer Block: The proposed model incorporates a transformer block composed of t encoders. Empirically, t is set to 3, giving rise to encoders T-1, T-2, and T-3, which are cascaded together to generate p_t. 
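The detailed operations performed inside these encoders are described in the remainder of this subsection. For concreteness, the following PyTorch-style sketch illustrates one possible realisation of the patch partitioning, positional embedding, and contextual multi-head self-attention pipeline; the image resolution, embedding width, number of heads, and the use of nn.MultiheadAttention are illustrative assumptions on our part and not the authors' exact implementation.

# Illustrative sketch (assumed hyperparameters): patch embedding plus learned
# positional embeddings, followed by t = 3 pre-norm transformer encoders.
import torch
import torch.nn as nn

class TransformerBlockSketch(nn.Module):
    def __init__(self, img_size=256, patch=16, channels=3, dim=256, heads=8, t=3):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        self.patch = patch
        # l_t(x^p): linear projection of each flattened patch to l = dim
        self.proj = nn.Linear(patch * patch * channels, dim)
        # f_p(x^e): learned positional embeddings, one per patch
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        self.layers = nn.ModuleList([
            nn.ModuleDict({
                "norm1": nn.LayerNorm(dim),
                "attn": nn.MultiheadAttention(dim, heads, batch_first=True),
                "norm2": nn.LayerNorm(dim),
                "ffn": nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                     nn.Linear(4 * dim, dim)),
            }) for _ in range(t)
        ])

    def forward(self, x):                                   # x: (B, C, H, W)
        b, c, h, w = x.shape
        p = self.patch
        # partition into non-overlapping p x p patches and flatten them
        patches = (x.unfold(2, p, p).unfold(3, p, p)        # (B, C, H/p, W/p, p, p)
                     .permute(0, 2, 3, 1, 4, 5)
                     .reshape(b, -1, c * p * p))            # (B, n_patches, C*p*p)
        q = self.proj(patches) + self.pos                   # q_i = l_t(x^p_i) + f_p(x^e_i)
        for blk in self.layers:
            qn = blk["norm1"](q)
            attn, _ = blk["attn"](qn, qn, qn)               # contextual multi-head self-attention
            q = q + attn                                    # residual connection
            q = q + blk["ffn"](blk["norm2"](q))             # feed-forward block with residual
        return q                                            # p_t: (B, n_patches, dim)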
Initially, the input image x is partitioned into non-overlapping, square-shaped patches denoted by x^p ∈ ℝ^P×P×C_h, where P indicates the resolution of x^p, determined by P = √(RC/n_p). Here, n_p represents the total number of patches. The positional embeddings x^e_i corresponding to patch x^p_i are then generated, i.e., x^e ∈ ℝ^P×P×C_h. Subsequently, the flattened projections, i.e., f_p(x^e_i), are computed. In a similar manner, the linear projection for patch x^p_i, denoted as l_t(x^p_i), is obtained. Both f_p(x^e_i) and l_t(x^p_i) are resized to l dimensions, and the sequenced embeddings for patch x^p_i are computed by adding l_t(x^p_i) to f_p(x^e_i), i.e., q_i = l_t(x^p_i) + f_p(x^e_i). By repeating this process for all n_p patches, the combined projections q^o are generated, expressed as follows: q^o = [l_t(x^p_0); l_t(x^p_1); …; l_t(x^p_n_p-1)] + [f_p(x^e_0); f_p(x^e_1); …; f_p(x^e_n_p-1)], or equivalently q^o = [q_0; q_1; …; q_n_p-1]. This process allows the model to capture spatial information from the image and create a representation that the transformer block can further process. The next step involves passing the combined projections q^o to T-1, where each head j normalises q^o to produce q̂^o_j. Then, q̂^o_j is decomposed into query (Q_j), key (K_j), and value (V_j) pairs using learnable weights, with Q_j = q̂^o_j w_q, K_j = q̂^o_j w_k, and V_j = q̂^o_j w_v. The contextual self-attention at head j (i.e., A_j) is then computed by combining Q_j and K_j through a scaled dot product, and the resulting scores are merged with V_j. This computation is expressed below: A_j(q̂^o_j; Q_j, K_j, V_j) = σ(Q_j K^T_j / √(l)) V_j, where the soft-max function σ is applied element-wise to the output of the scaled dot product in each head. Furthermore, the contextual self-attention maps from all the heads are concatenated to produce the contextual multi-head self-attention distribution φCMSA(q̂^o), which is given by: φCMSA(q̂^o) = [A_0(q̂^o_0; Q_0, K_0, V_0); A_1(q̂^o_1; Q_1, K_1, V_1); …; A_h-1(q̂^o_h-1; Q_h-1, K_h-1, V_h-1)]. This process enables the model to capture relationships and dependencies within the input patches. In addition, the contextual multi-head self-attention distribution φCMSA(q̂^o) is combined with q^o, and the resulting embeddings are normalised and fed into the normalised feed-forward block, which generates the T-1 latent projections (p_T1): p_T1 = ϕ_f(φCMSA(q̂^o) + q^o) + (φCMSA(q̂^o) + q^o). Applying the learnable feed-forward function ϕ_f(·) in this way yields more powerful and informative representations of the input data, which subsequent components in the model can further process. The resulting projections p_T1 are then passed to T-2, which produces p_T2 in the same manner. p_T2 is then passed to the T-3 encoder, which generates the p_T3 projections. Here, p_t = p_T3. These projections are fused with f_e to produce f_d. Finally, f_d is passed to the decoder block to extract the instances of tomato objects. Encoder Block: The encoder block E is responsible for creating the latent feature distribution f_e(x) from the input tomato images x ∈ ℝ^R×C×C_h, where R represents the rows, C the columns, and C_h the channels of x. Unlike traditional pre-trained networks, the encoder of E comprises five levels (E-1 to E-5), each with three to four shape preservation and residual blocks. 
These blocks enable the encoder to generate accurate contextual and semantic representations of the desired items during scan decomposition, producing distinct feature maps. The encoder consists of 11 shape preservation blocks (SPBs) and 5 residual blocks (RBs); each SPB contains four convolutions, four batch normalisations (BNs), and two ReLUs, while each RB contains three convolutions, three BNs, two ReLUs, and one max pooling layer. The encoder's learned latent features (f_e), after being fine-tuned, are effective in distinguishing the maturity level of one tomato from another. However, they may also produce false positives when differentiating between occluded regions of tomato objects, as their features are highly correlated. To mitigate this issue, we convolve f_e with the transformer projections p_t to enhance the distinction of inter-class distributions. The resulting fused feature representations f_d = f_e * p_t amplify the similarities between f_e and p_t while suppressing heterogeneous representations, leading to a significant reduction in the number of false positives. These fused features are then passed to the decoder end to reconstruct the input image with segmented tomatoes as output. Decoder Block: The decoder block is composed of several components that work together to perform the segmentation of tomato objects. It consists of 11 maximum unpooling layers, 5 rescaling layers, and a softmax layer. The unpooling layers play a crucial role in recovering the spatial information lost during the encoding process and help restore the original size and shape of the segmented objects. Each rescaling layer is equipped with a convolutional layer, batch normalization, and ReLU activation. To address the degradation problem that can occur during the segmentation of tomato objects, skip connections are also established between the encoder and decoder blocks. These connections enable the flow of information from earlier layers in the network to later layers. By doing so, the network can utilize low-level features from the encoder to refine and enhance the segmentation results in the decoder. Following the segmentation process, a softmax layer is applied. This layer assigns each pixel in the segmented image to one of the tomato object categories based on its estimated maturity level. The softmax function computes the probability distribution over the categories, ensuring that each pixel is assigned to the most appropriate category. In conclusion, the proposed framework leverages the strengths of the encoder, transformer, and decoder blocks to achieve precise segmentation and grading of tomato maturity levels. The model efficiently collects spatial information, captures relationships among input patches, and enhances the differentiation between different tomato grades by utilizing learned latent features, contextual multi-head self-attention processes, and feature representation fusion. The decoder block refines the segmentation results and generates precise classifications with its unpooling layers, rescaling layers, and skip connections. Proposed L_t Loss Function: During the training phase, the model is constrained by the proposed loss function, referred to as L_t, which identifies and extracts tomato objects from input images. The L_t loss function comprises two components: L_s1 and L_s2. 
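The exact forms of L_s1 and L_s2 are given below. As a purely illustrative aid, the following PyTorch-style sketch shows one way such a two-term objective could be composed, assuming a dice-style overlap term for L_s1 and a temperature-softened cross-entropy term for L_s2; the tensor shapes and reduction choices are our assumptions, and the sketch is not the authors' implementation.

# Hedged sketch of an objective of the form L_t = beta1*L_s1 + beta2*L_s2.
import torch
import torch.nn.functional as F

def l_t_loss(logits, target_onehot, beta1=0.9, beta2=0.1, tau=1.5, eps=1e-6):
    """logits: (B, C, H, W) raw predictions; target_onehot: (B, C, H, W) in {0, 1}."""
    # temperature-softened class probabilities p(L, tau)
    p = F.softmax(logits / tau, dim=1)

    # L_s1: dice-style overlap term, averaged over the batch
    dims = (1, 2, 3)
    inter = (target_onehot * p).sum(dims)
    denom = (target_onehot ** 2).sum(dims) + (p ** 2).sum(dims)
    l_s1 = (1.0 - 2.0 * inter / (denom + eps)).mean()

    # L_s2: softened cross-entropy term (reduction by mean is an assumption)
    l_s2 = -(target_onehot * torch.log(p + eps)).mean()

    return beta1 * l_s1 + beta2 * l_s2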
By integrating these sub-objectives into the loss function, the model can be trained to cope with a wider range of potential segmentation errors. This approach proves particularly useful when dealing with an imbalanced distribution of background and foreground pixels in the input scan, as the tomato regions are often significantly smaller than the background region. In such cases, L_s1 effectively minimises errors at the pixel level, enabling the model to perform segmentation tasks despite the imbalanced distribution of pixels. However, attaining convergence through L_s1 alone is challenging, because its gradient can overshoot when the predicted logits and ground truths have small values. To mitigate this issue, L_s2 is introduced into the L_t loss function, allowing the model to converge even when dealing with small values of predicted logits and ground truths. Moreover, the balance between L_s1 and L_s2 within L_t is controlled by the hyperparameters β_1 and β_2. Mathematically, the objective functions can be expressed as follows: L_t = β_1 L_s1 + β_2 L_s2, where L_s1 = 1/b_s∑_i=0^b_s-1(1 - 2 ∑_j=0^c_se-1 T^se_i,j p(ℒ^se_i,j, τ)/∑_j=0^c_se-1( (T^se_i,j)^2 + p(ℒ^se_i,j, τ)^2 ) ), and L_s2 = -1/b_s∑_i=0^b_s-1∑_j=0^c_se-1 T^se_i,j log(p(ℒ^se_i,j, τ)). The notation is as follows: T^se_i,j denotes the ground truth label for the i^th sample belonging to the j^th tomato class (fully ripe, half ripe, or green). p(ℒ^se_i,j, τ) indicates the predicted probability obtained from the output logit ℒ^se_i,j for the i^th sample and j^th tomato maturity category. This probability distribution is generated using the softmax function, and τ is a temperature constant used to soften the probabilities, ensuring robust learning of the tomato classes. b_s signifies the batch size, and c_se stands for the total number of classes, which corresponds to the different tomato maturity levels considered. § EXPERIMENTAL ANALYSIS The proposed framework was put to the test using a dataset from a nearby greenhouse farm in Ajban, Abu Dhabi, UAE. The dataset comprises time-linked frames that can be employed to identify tomatoes at different maturity levels. Since segmentation networks necessitate pixel-level annotations, we manually crafted ground truths using the Image Labeler App of MATLAB 2022b. Specifically, we annotated approximately 660 images of the three tomato maturity levels, and Table <ref> indicates the number of occurrences of each class in this dataset. To ensure model robustness, 75% of the annotated images were used for training, whereas the remaining 25% was allocated for validation and testing. During the training phase, the number of epochs and the batch size were set to 200 and 16, respectively. After each epoch, the trained model was evaluated against the validation dataset. To assess the effectiveness of the proposed method, numerous experiments were carried out. One of these experiments involved using a test set to assess the model's ability to make accurate predictions under various lighting conditions, occlusion levels, and viewing angles. Additionally, a detection and segmentation experiment was executed in the Ajban greenhouse in Abu Dhabi, United Arab Emirates, to evaluate the model's performance in a real-world harvesting setting, as demonstrated in Figure <ref>. The quality of segmentation was evaluated by calculating the Dice coefficient and the mean intersection over union (mIoU). 
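As a brief illustration of how these two metrics can be computed from predicted and ground-truth label maps, the following Python sketch averages per-class scores; the handling of classes absent from an image is an assumption on our part rather than a description of the authors' evaluation code.

# Minimal sketch of the reported evaluation metrics (Dice coefficient and mean IoU).
import numpy as np

def dice_and_miou(pred, gt, num_classes):
    """pred, gt: integer label maps of identical shape."""
    dices, ious = [], []
    for c in range(num_classes):
        p, g = (pred == c), (gt == c)
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        if p.sum() + g.sum() == 0:        # class absent in both prediction and ground truth: skip
            continue
        dices.append(2.0 * inter / (p.sum() + g.sum()))
        ious.append(inter / union if union > 0 else 0.0)
    return float(np.mean(dices)), float(np.mean(ious))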
These metrics were employed to assess the accuracy and overlap between the predicted segmentation masks and the ground truth annotations. Both the Dice coefficient and mean IoU are valuable metrics in gauging the performance and quality of segmentation algorithms, offering complementary insights into the correctness and overlap of the segmentation results. §.§ DATASET This study employs three datasets, namely KUTomaData, Laboro Tomato, and Rob2Pheno. Detailed explanations of each dataset are provided below. * KUTomaData: We collected data from greenhouses in Al Ajban, Abu Dhabi, United Arab Emirates and have named it KUTomaData. This dataset consists of approximately 660 images. The participants used mobile phone cameras to capture imagery from these greenhouses. The dataset encompasses three distinct types of tomatoes: green, half-ripe, and fully ripe. The authors included images with varying hues, textures, and occlusion backdrops to ensure that the dataset accurately mirrored real-world conditions. The backgrounds of the images feature varying densities and hues of tomatoes and leaves, which contribute to the dataset's complexity. Challenging factors such as complex environments, different lighting conditions, occlusion, and variations in tomato maturity levels and densities were deliberately incorporated to ensure the dataset accurately represents real-world situations. Furthermore, there were intra-class variances pertaining to the tomatoes' colour, texture, and shape. The images presented in Figure <ref> provide a glimpse of the complexity of the dataset, with intricate backdrops for each tomato category and diverse illuminations and stages in most images. This comprehensive and challenging dataset is suitable for training and testing the model's performance under realistic conditions. We also used two other datasets, namely Laboro Tomato: Instance Segmentation <cit.> and Rob2Pheno Annotated Tomato <cit.> for our proposed model evaluation. * Laboro Tomato: Instance Segmentation <cit.>: The Laboro Tomato dataset is a collection of images that showcases the growth stages of tomatoes during their ripening process. This dataset is designed specifically for tasks related to object detection and instance segmentation. The dataset includes two subsets of tomatoes categorized based on their size. The images in the dataset were captured using two separate cameras, each with its unique resolution and image quality, at a local farm. Each tomato in the dataset is further divided into categories based on both its size (normal size or cherry tomato) and the stage of ripening. The ripening stages are classified into three categories: Fully Ripened: This category represents tomatoes that have reached their optimal ripeness and are ready to be harvested. They exhibit a uniform red colouration, with at least 90% of the tomato's surface filled with red colour. Half Ripened: Tomatoes in this category are in a transitional ripening stage. They appear greenish and require more time to ripen fully. Typically, these tomatoes are red on 30% to 89% of their surface. Un-ripened: This category encompasses tomatoes that are in the early stages of ripening. They are predominantly green or white, with occasional small patches of red. These tomatoes have less than 30% of their surface filled with red colour. The number of training and testing images for this dataset is 743 and 262, respectively. 
This dataset serves as a valuable resource for tackling real-life challenges by leveraging a combination of different technologies. * Rob2Pheno Annotated Tomato <cit.>: The work conducted by Afonso, Manya, et al. (2020) <cit.> employed the aforementioned dataset for their research on tomato fruit detection and counting in greenhouses using deep learning. This dataset collects RGB-D images of tomato plants captured in a production greenhouse. The images were obtained using Real-sense cameras, capable of capturing both colour information and depth data. This additional depth information provides a three-dimensional perspective of the scene. The dataset also includes object instance-level ground truth annotations of the fruit. These annotations mark the precise location and boundaries of tomato fruits within the images. These annotations are useful for training and evaluating object detection models such as MaskRCNN or YOLACT, which are popular deep-learning algorithms used for detecting and segmenting objects in images. This dataset contains 710 and 284 images for training and testing purposes, respectively, after applying data augmentation methods. §.§ DATA AUGMENTATION In order to achieve high accuracy in predicting ground truth labels, Deep Convolutional Neural Network (DCNN) models typically require a substantial number of training images. However, there are instances where certain classes may have limited images, posing a challenge in effectively training the model. To tackle this issue, data augmentation techniques are employed to augment the available images and expand the training dataset. In our study, we employed data augmentation techniques, as described in <cit.>, to generate additional variations from the existing images for classes with limited samples, particularly for maturity-level classes. These augmentation techniques include blurriness, rotation, horizontal and vertical flipping, horizontal and vertical shearing, and adding noise. Figure <ref> illustrates an example of image augmentation. By incorporating this technique, we increased the number of images in our dataset, thereby enhancing the model's robustness during the training phase of the CNN. §.§ TRAINING The suggested framework has been trained on a system comprising a Core i9-10940 processor running at 3.30 GHz, with 128GB of RAM, and a single NVIDIA Quadro RTX 6000 GPU. The GPU is equipped with the CUDA toolkit version 11.0 and cuDNN version 7.5. The development of the proposed model was carried out using Python 3.7.9 and TensorFlow 2.1.0. During the training process, the model was trained for 200 epochs, with each epoch consisting of 512 iterations. The ADADELTA optimizer was employed, utilizing default values for the learning rate (1.00) and decay rate (0.95). § RESULT ANALYSIS Within this section, we delve into the experimental results obtained from applying the proposed framework to various datasets. Furthermore, we present an ablation study that focuses on identifying the optimal hyperparameters and backbone networks for achieving the most favourable results. This study aims to analyze the impact of different configurations and choices on the performance of the proposed framework. Overall, this section provides a comprehensive analysis of the proposed framework and its performance across different datasets. §.§ Ablation Studies To enhance the model's effectiveness, a series of ablation experiments were carried out. 
The first set of ablation experiments focused on identifying the optimal β parameters that produce the best recognition performance of the proposed framework. The second set of experiments aimed to identify the optimal network backbone. Several backbone architectures were evaluated and compared to discern the architecture that yielded maximum accuracy and segmentation quality. The objective of the third series of experiments was to determine the optimal value for the parameter τ. By varying τ and evaluating the model's performance, we established the threshold that maximized detection accuracy while minimizing false positives and negatives. The fourth ablation experiment aimed to identify the optimal loss function for the proposed model by comparing it to other state-of-the-art loss functions, including soft nearest neighbor loss, focal Tversky loss, dice-entropy loss, and conventional cross-entropy loss. The fifth series of ablation experiments compared the segmentation performance of the proposed model against state-of-the-art networks. §.§.§ Optimal Hyperparameters The first set of ablation experiments aimed to determine the optimal hyper-parameters β_1,2 in the L_t loss function, which would result in the best segmentation performance across different datasets. To explore this, we varied the value of β_1 from 0.1 to 0.9 in increments of 0.2. For each β_1 value, we calculated β_2 as β_2=1-β_1. Subsequently, the proposed model was trained using each combination of β_1 and β_2. During the inference stage, the model's segmentation performance for each combination was evaluated across the datasets, utilizing mAP scores as the evaluation metric (as shown in Table <ref>). The results revealed that the proposed framework performs better when assigning a higher weight to β_1, particularly with a value of 0.9 in this specific instance. For example, with β_1=0.9 and β_2=0.1, the proposed model achieved mAP scores of 0.5814, 0.6542, and 0.6639 across the three datasets: KUTomaData, Laboro Tomato, and Rob2Pheno Annotated Tomato, respectively. Based on these findings, the combination of β_1=0.9 and β_2=0.1 was selected for subsequent experiments to train the proposed model. This choice of hyperparameters was deemed optimal based on the earlier evaluations and resulted in favorable model performance. §.§.§ Optimal Encoder Backbone The second set of ablation experiments aimed to determine the optimal network backbone for segmenting and detecting tomato objects. To achieve this, we integrated various pre-trained models, including HRNet <cit.>, Lite-HRNet <cit.>, EfficientNet-B4 <cit.>, DenseNet-201 <cit.>, and ResNet-101 <cit.>, into the proposed model. We then compared their performance against the proposed backbone specifically designed for tomato object detection and segmentation. The results obtained from the conducted experiments are displayed in Table <ref>. As Table <ref> shows, the proposed encoder outperformed the state-of-the-art models, surpassing them by 3.22%, 2.51%, 3.67%, and 0.56% in terms of μIoU, μDC, mAP, and AUC scores, respectively, on the KUTomaData dataset. Moreover, when considering the Laboro dataset, the proposed framework exhibited performance improvements of 2.27%, 1.60%, 2.61%, and 2.25% in terms of μIoU, μDC, mAP, and AUC scores, respectively. 
Similarly, on the Rob2Pheno dataset, the proposed model achieved gains of 3.68%, 2.50%, 3.71%, and 1.25% in μIoU, μDC, mAP, and AUC scores, respectively. These notable performance improvements can be attributed to utilizing a novel butterfly structure in the proposed encoder backbone, incorporating distinctive SPB, IB, and HDB blocks. The model acquires the ability to extract distinctive latent characteristics from the input images by adding this integration, resulting in improved performance in tomato object segmentation and classification tasks. This advancement outperforms the capabilities of current cutting-edge models, such as HRNet <cit.>, Lite-HRNet <cit.>, EfficientNet-B4 <cit.>, DenseNet-201 <cit.>, and ResNet-101 <cit.>. It is important to note that while the proposed scheme is computationally expensive compared to Lite-HRNet <cit.>, its superior detection performance justified its selection for generating distinct feature representations in the subsequent experiments. This decision was driven by the primary objective of achieving the highest possible detection performance. §.§.§ Determining the Optimal Temperature Constant In the proposed L_t loss function, the temperature constant (τ) serves as a hyperparameter that plays a role in softening the target probabilities. Using a higher value of τ, the model becomes more receptive to recognising tomato object segmentation and detection regardless of the input imagery characteristics. This softening effect enhances the detection and segmentation performance by enabling the model to comprehend the target probabilities more broadly. In the fourth set of ablation experiments, we aimed to determine the optimal value for τ to extract tomato objects accurately. To achieve this, we varied the value of τ from 1 to 2.5 in increments of 0.5 within the L_t loss function during the training of the proposed model across each dataset. After completing the training process, in the inference stage, we assessed the performance of the proposed framework in tomato object segmentation and detection on each dataset. The outcomes of these evaluations are showcased in the provided table <ref>. From Table <ref>, it can be observed that increasing the value of τ from 1 to 1.5 led to a significant performance boost across all four datasets. For instance, on the KUTomaData dataset, the proposed framework achieved performance improvements of 4.12% in terms of μIoU, 3.21% in terms of μDC, 1.65% in terms of mAP, and 1.25% in terms of AUC scores. Similarly, on the Laboro dataset, it achieved performance improvements of 2.87% in μIoU, 2.03% in μDC, 3.58% in mAP, and 1.88% in AUC scores. Furthermore, experiments on the Rob2Pheno Annotated dataset showed performance improvements of 1.88% in μIoU, 1.26% in μDC, 2.12% in mAP, and 1.85% in AUC scores. It is important to note that increasing τ does not always result in performance improvements. When we increased the value of τ from 1.5 to 2 and from 2 to 2.5, the proposed framework's effectiveness deteriorated. This decline in performance can be attributed to the fact that when τ exceeds a certain threshold, it loses its ability to accurately differentiate between logits representing different categories, such as green, half-ripen and fully-ripen and the background, within the input imagery. Considering the optimal detection results achieved with τ=1.5 for the proposed framework on each dataset, we chose to train the model with τ=1.5 for the remaining experiments. 
This selection ensures consistent and effective performance throughout the subsequent experimentation. §.§.§ Optimal Loss Function The fifth set of ablation experiments focused on analysing the performance of the proposed model when trained using the L_t loss function compared to other state-of-the-art loss functions. These include the soft nearest neighbor loss function (L_sn) <cit.>, the focal Tversky loss function (L_ft) <cit.>, the dice-entropy loss function (L_de) <cit.>, and the conventional cross-entropy loss function (L_ce). The results of these experiments are summarised in Table <ref>. From Table <ref>, it is evident that the proposed model, trained using the L_t loss function, outperformed its counterparts trained with state-of-the-art loss functions across all datasets. For instance, on the KUTomaData dataset, the L_t loss function resulted in a performance improvement of 2.16% in terms of μIoU, 1.66% in terms of μDC, 2.39% in terms of mAP, and 2.25% in terms of AUC scores. Similarly, on the Laboro dataset, the L_t loss function led to a performance improvement of 3.25% in terms of μIoU, 2.30% in terms of μDC, 5.58% in terms of mAP, and 4.81% in terms of AUC scores. Furthermore, on the Rob2Pheno Annotated dataset, it yielded a performance improvement of 1.23% in terms of μIoU, 0.82% in terms of μDC, 1.16% in terms of mAP, and 1.45% in terms of AUC scores. These performance improvements can be attributed to the fact that the proposed L_t loss function leverages both contextual and semantic differences within the input scans, allowing the model to effectively recognise tomato objects regardless of the input image characteristics. Consequently, for the remaining experiments, we employed the L_t loss function to train the proposed model for tomato object extraction across all four datasets. §.§.§ Choice of Segmentation Network: This section presents the outcomes of utilising the proposed model with a ResNet-50 backbone. Although the model was specifically designed to detect tomatoes in an occluded environment using a customised backbone, it is versatile enough to be implemented with other CNN backbones such as ResNet <cit.>, SegFormer <cit.>, PSPNet <cit.>, SegNet <cit.>, and UNet <cit.>. Figure <ref> shows tomatoes in a cluttered and occluded environment where the difficulty lies in detecting the unripened tomatoes within same-coloured leaves. This presents a scenario where mobile robots can capture the image and identify the tomatoes. We thoroughly assess the proposed framework on the collected dataset. Furthermore, we also report its comparative evaluation with state-of-the-art segmentation models. Figure <ref> shows the cluttered situation in an indoor greenhouse where multiple tomato vines can be seen. Moreover, the qualitative evaluation of the proposed architecture and its comparison with the state-of-the-art segmentation models (such as SegFormer <cit.>, PSPNet <cit.>, SegNet <cit.> and U-Net <cit.>) on the dataset is presented in Figure <ref>. Table <ref> reports the quantitative performance of the proposed framework compared to the state-of-the-art networks. It can be seen that the proposed model outperforms the other models in terms of the evaluation metrics. 
The proposed incremental instance segmentation scheme was evaluated by incorporating various popular transformer, scene parsing, encoder-decoder, and fully convolutional-based models, such as SegFormer <cit.>, SegNet <cit.>, U-Net <cit.>, and PSPNet <cit.>, as backbones alongside the proposed convolutional transformer. Table <ref> compares the performance of five models (the proposed model, SegFormer, SegNet, U-Net, and PSPNet) on the task of tomato segmentation. The compared metrics are the F1 Score, the Dice Coefficient, the mean Intersection over Union (IoU), and the class-wise IoU. The Dice Coefficient is a statistical measure of the overlap between two sets of data, in this case the predicted and actual tomato segmentation masks. The mean IoU measures how well the model can accurately segment the tomato regions in the images. The class-wise IoU reports the IoU score for each of the three tomato ripeness classes: Unripe, Half-Ripe, and Fully-Ripe. The results show that the proposed model outperforms the other models in all metrics, achieving a Dice coefficient of 0.7326 and a mean IoU of 0.6641. The proposed model also achieves higher class-wise IoU scores for all three tomato ripeness classes, indicating that it is better at accurately segmenting each class. It outperformed the SegFormer, SegNet, U-Net, and PSPNet models across all metrics. The SegNet and U-Net models exhibited significantly poorer performance, achieving the lowest scores in all metrics. Their Dice Coefficients were 0.5728 and 0.5475, and their mean IoU values were merely 0.4104 and 0.3769, respectively. The data suggest that the proposed model is highly effective in accurately segmenting tomato regions in images, outperforming other commonly used segmentation models for this particular task. In addition to quantitative evaluations, a qualitative comparison was performed between the proposed convolutional transformer segmentation framework and other existing segmentation models. The results, illustrated in Figure <ref>, demonstrate that while all the examined segmentation models successfully localize tomato data through masks, substantial variation exists in the quality of the generated masks across different methods. Notably, our proposed framework exhibits exceptional accuracy in producing precise tomato masks. Moreover, when considering the extraction of tomatoes at various maturity levels, the capabilities of the proposed convolutional transformer model become evident. Our framework stands out due to its distinctive ability to generate shape-preserving embeddings and to effectively leverage self-attention projections. This unique attribute enables the framework to achieve effective segmentation, even in the presence of occluded tomato data, surpassing the performance of state-of-the-art methods in this domain. 
Comparing the proposed model to the other state-of-the-art models, the SegFormer model achieved a Dice Coefficient of 0.6602 and a mean IoU of 0.5745. However, its class-wise IoU scores for all three ripeness classes are lower than the proposed model's. The SegNet model obtained a Dice Coefficient of 0.5728 and a mean IoU of 0.4104. Its class-wise IoU scores for the "Unripened" and "Half-Ripened" classes are higher than those of other models, but it performs poorly for the "Fully Ripened" class. The UNet model achieved a Dice Coefficient of 0.5475 and a mean IoU of 0.3769. Similar to SegNet, it demonstrates better performance for the "Unripened" and "Half-Ripened" classes but struggles with the "Fully Ripened" class. Finally, the PSPNet model obtained a Dice Coefficient of 0.5504 and a mean IoU of 0.3797. Its class-wise IoU scores for the "Unripened" and "Half-Ripened" classes are relatively higher, but it performs poorly for the "Fully Ripened" class. Overall, the proposed model outperforms the other models regarding the Dice Coefficient, mean IoU, and class-wise IoU for the different tomato ripeness classes. The results highlight the effectiveness and superiority of the proposed model in accurately segmenting tomatoes of varying ripeness levels. §.§ Qualitative Evaluations: Figure <ref> presents a rigorous qualitative assessment of the proposed framework alongside state-of-the-art methods, primarily focusing on the accuracy of tomato segmentation. The objective is to comprehensively evaluate the performance of the proposed framework against existing approaches when dealing with real-world scenarios. In Column (A) of Figure <ref>, the ground truth annotations are visually overlaid on the corresponding actual images. A distinctive color scheme is employed to signify the different maturity grades: cyan for fully-ripened tomatoes, pink for half-ripened tomatoes, and yellow for unripe tomatoes. This column serves as a reliable reference for assessing the expected quality of segmentation. Column (B) showcases the results of the proposed convolutional transformer model. The segmentation outcomes achieved by the framework demonstrate its efficacy in accurately classifying and segmenting tomatoes of the three maturity grades, even in scenarios with challenging factors such as occlusion and variable lighting conditions. Columns (C) to (H) provide a comparative analysis of other state-of-the-art methods, namely SETR <cit.>, Segformer <cit.>, DeepFruits <cit.>, COS <cit.>, CWD <cit.>, and DLIS <cit.>. Each column represents a distinct method, illustrating the segmentation results attained by the respective approach. This thorough evaluation facilitates a meticulous examination and meaningful comparison of the techniques, leading to the identification of the most effective segmentation model for tomatoes. It is also evident from Figure <ref> that the proposed framework consistently outperforms the state-of-the-art methods in accurately extracting tomatoes of different maturity grades. The segmentation results obtained by the proposed method exhibit superior accuracy, robustness, and the ability to precisely classify and delineate fully-ripened, half-ripened, and unripe tomatoes, even in challenging conditions. Conversely, the qualitative analysis of the alternative methods reveals varying performance levels, with specific approaches struggling to accurately delineate the distinct maturity grades. 
§.§ Limitations In this section, we discuss the limitations associated with the proposed framework and our dataset, along with potential solutions to mitigate them. §.§.§ Limitations of the proposed framework: The first limitation of the framework is its inability to generate small masks for extremely occluded, cluttered, or rarely observed small-sized tomatoes. To address this limitation, a practical approach is to incorporate morphological opening operations as a post-processing step to enhance the quality of small masks. This technique could improve the framework's performance in segmenting such challenging instances. The second limitation of the proposed framework lies in its generation of false masks for highly complex and occluded tomato objects. Although the produced masks are of decent quality and outperform state-of-the-art methods (as demonstrated in Figure <ref>), this limitation could be mitigated by employing more sophisticated segmentation loss functions, such as dice or IoU loss, as objective functions. By utilizing these functions, the model can be constrained to preserve the exact shape of segmented objects, thus reducing the generation of false masks. Finally, the third limitation of the proposed framework is its potential to generate pixel-level false positives. This limitation can be overcome by incorporating morphological blob opening operations as a post-processing step, which can effectively eliminate small false positives and improve the overall accuracy of the framework. While the proposed framework has certain limitations, they can be addressed by integrating appropriate post-processing steps and using more advanced segmentation loss functions during training. By considering these solutions, the framework can enhance its ability to accurately segment occluded, cluttered, and rarely observed objects, establishing itself as a more robust solution for tomato object detection. §.§.§ Limitations of the proposed dataset: Firstly, the tomato dataset may exhibit limited diversity regarding tomato varieties, growth stages, and lighting conditions. This narrow scope of variation poses a potential drawback, as it may result in overfitting the model to the specific characteristics of the dataset. Consequently, the model's ability to generalize to different scenarios could be compromised. Secondly, the tomato dataset may contain minor annotation errors, such as inaccurate masking of tomatoes or mislabeling of instances. These errors can have a detrimental effect on the model's performance, making it challenging to achieve high accuracy. To mitigate this limitation, it is essential to thoroughly evaluate and validate all labelled data before utilizing it for training the proposed model. Lastly, the proposed dataset may primarily cover a specific domain, such as a greenhouse, and may not be suitable for applications in other open-field testing scenarios. This limited domain coverage can restrict the applicability of models trained solely on this dataset. To address this limitation, it is advisable to incorporate open-field data during the training process to ensure the models are more adaptable to diverse environments. By acknowledging and addressing these limitations, we can enhance the quality and applicability of the dataset, ultimately facilitating the development of more robust and versatile models for tomato object detection and segmentation. 
§ DISCUSSION The proposed framework presents a novel approach for tomato maturity level segmentation and classification using RGB scans acquired under various lighting and occlusion conditions. The experimental analysis demonstrates the framework's effectiveness in segmenting and grading tomatoes based on color, shape, and size. The proposed framework addresses the challenges associated with harvesting ripe tomatoes using mobile robots in real-world scenarios. These challenges include occlusion caused by leaves and branches and the color similarity between tomatoes and the surrounding foliage during the fruit development stage. The existing literature lacks a sufficient explanation of these tomato recognition challenges, necessitating the development of new approaches. To overcome these challenges, a novel framework is introduced in this paper, leveraging a convolutional transformer architecture for autonomous tomato recognition and grading. The framework is designed to handle tomatoes with varying occlusion levels, lighting conditions, and ripeness stages. It offers a promising solution for efficient tomato harvesting in complex and diverse natural environments. A key contribution of this work is the introduction of the KUTomaData dataset, specifically curated for training deep learning models for tomato segmentation and classification. KUTomaData comprises images collected from greenhouses across the UAE. The dataset encompasses diverse lighting conditions, viewing perspectives, and camera sensors, making it unique compared to existing datasets. The availability of KUTomaData fills a gap in the deep learning community by providing a dedicated resource for tomato-related research. The proposed framework's performance was evaluated against two additional public datasets: Laboro Tomato and Rob2Pheno Annotated Tomato. These datasets were used to benchmark the framework's ability to extract cluttered and occluded tomato instances from RGB scans, comparing its performance against state-of-the-art models. The evaluation results demonstrated exceptional performance, with the proposed framework outperforming the state-of-the-art models, including SETR <cit.>, Segformer <cit.>, DeepFruits <cit.>, COS <cit.>, CWD <cit.>, and DLIS <cit.>, by a significant margin. A series of ablation experiments were conducted to enhance the model's effectiveness. The initial experiments focused on optimizing hyperparameters to improve performance. Subsequently, different network backbones were compared in the second set of experiments to identify the architecture that achieved accurate and high-quality segmentation. The fourth set of experiments determined the optimal value for the parameter τ, balancing detection accuracy and minimizing false positives and negatives. The fifth set of investigations comprehensively evaluated the proposed model's performance, considering accuracy, segmentation quality, computational efficiency, and robustness in challenging scenarios. The initial ablation experiments aimed to find the optimal hyperparameters β_1,2 in the L_t loss function for achieving the best segmentation performance across different datasets. Varying β_1 from 0.1 to 0.9 and calculating β_2=1-β_1, the model was trained and evaluated using different combinations of these values. The results demonstrated that assigning a higher weight to β_1, particularly 0.9, led to superior performance. 
For example, with β_1=0.9 and β_2=0.1, the model achieved high mAP scores on the KUTomaData, Laboro Tomato, and Rob2Pheno Annotated Tomato datasets. Based on these findings, the combination of β_1=0.9 and β_2=0.1 was chosen as the optimal hyperparameter choice for subsequent model training, resulting in favorable performance. Various pre-trained models were integrated into the proposed framework for tomato object segmentation and detection in the ablation experiments and backbone analysis. The performance of these models was compared against the proposed backbone, designed explicitly for this task. The results, summarized in Table <ref>, clearly demonstrate the superiority of the proposed encoder backbone. Compared to state-of-the-art models such as HRNet, Lite-HRNet, EfficientNet-B4, DenseNet-201, and ResNet-101, the proposed backbone achieved notable improvements across different evaluation metrics. On the KUTomaData dataset, it outperformed existing models by 3.22%, 2.51%, 3.67%, and 0.56% in terms of μIoU, μDC, mAP, and AUC scores, respectively. Similar performance gains were observed on the Laboro and Rob2Pheno datasets, with improvements ranging from 1.60% to 3.68% in various evaluation metrics. These significant improvements can be attributed to integrating a novel butterfly structure in the encoder backbone, incorporating distinctive SPB, IB, and HDB blocks. This integration enables the model to extract unique latent characteristics from input images, leading to improved performance in tomato object segmentation and classification tasks. Despite the higher computational cost compared to Lite-HRNet, the selection of the proposed scheme was justified by its superior detection performance. The primary objective of achieving the highest possible detection performance drove this decision. Integrating the butterfly structure and the distinctive blocks enables the model to capture essential features and to delineate tomato objects accurately. The proposed L_t loss function incorporates a temperature constant (τ) as a hyperparameter to soften the target probabilities, improving tomato object segmentation and detection. Adjusting τ makes the model more receptive to recognizing tomato objects, independent of the input imagery characteristics. This softening effect allows the model to comprehend the target probabilities better, resulting in enhanced performance. Varying τ from 1 to 2.5 during training, the experiments revealed that increasing τ from 1 to 1.5 led to significant performance improvements across the datasets. For instance, on the KUTomaData dataset, improvements of 4.12% in μIoU, 3.21% in μDC, 1.65% in mAP, and 1.25% in AUC scores were achieved. However, performance declined when τ exceeded 1.5, indicating a reduced ability to differentiate between object categories. Based on the optimal results with τ=1.5, subsequent experiments used this value to strike a balance between model receptiveness and accurate classification and segmentation of tomato objects. In the fifth set of experiments, which examined the choice of loss function, the performance of the proposed model trained with the L_t loss function was compared against other state-of-the-art loss functions, including L_sn, L_ft, L_de, and L_ce. The results, summarized in Table <ref>, clearly demonstrated the superiority of the proposed model trained with the L_t loss function across all datasets. 
On the KUTomaData dataset, the L_t loss function achieved improvements of 2.16% in μIoU, 1.66% in μDC, 2.39% in mAP, and 2.25% in AUC scores compared to the other loss functions. Similarly, on the Laboro dataset, the L_t loss function outperformed the alternatives, resulting in enhancements of 3.25% in μIoU, 2.30% in μDC, 5.58% in mAP, and 4.81% in AUC scores. Furthermore, on the Rob2Pheno Annotated dataset, the L_t loss function delivered improvements of 1.23% in μIoU, 0.82% in μDC, 1.16% in mAP, and 1.45% in AUC scores. Overall, the proposed framework demonstrates promising results in segmenting and grading tomatoes based on their maturity levels. The experimental analysis validates the effectiveness of the proposed method and highlights its superiority over existing approaches. The framework's robustness to various challenging scenarios and its computational efficiency make it a valuable tool for assessing tomato quality in greenhouse farming. § CONCLUSION This study introduces a novel convolutional transformer-based segmentation framework and a new dataset of tomato images obtained from greenhouse farms in Al Ajban, Abu Dhabi, UAE. The KUTomaData dataset encompasses images captured under different environmental conditions, including varying light conditions, weather patterns, and stages of plant growth. These factors introduce complexity and challenges for segmentation models in accurately identifying and distinguishing different components of tomato plants. The availability of such a dataset is crucial for developing more precise segmentation models in the robotic harvesting industry, aiming to enhance field efficiency and productivity. We qualitatively assessed and compared our proposed architecture with SETR <cit.>, SegFormer <cit.>, DeepFruits <cit.>, COS <cit.>, CWD <cit.> and DLIS <cit.>. The results demonstrate the superiority of the proposed model across all metrics. It outperformed these methods in terms of μIoU, μDC, mAP, and AUC across the KUTomaData, Laboro Tomato, and Rob2Pheno datasets. The results are presented in Table <ref>. Moreover, the proposed model exhibits higher class-wise IoU scores for all three tomato ripeness classes, indicating its effectiveness in accurately segmenting each class. This work contributes substantially to the computer vision and machine learning community by providing a new dataset that facilitates developing and testing segmentation models specifically designed for agricultural purposes. Furthermore, it emphasizes the importance of ongoing research and progress in precision agriculture. In conclusion, the proposed framework and the accompanying KUTomaData dataset contribute to tomato recognition and maturity level classification. The framework addresses the challenges associated with tomato harvesting in real-world scenarios, while the dataset provides a dedicated resource for training and benchmarking deep learning models. The strong performance demonstrated by the proposed framework across multiple datasets validates its effectiveness and superiority over existing approaches. Future research can focus on further enhancing the framework's capabilities and exploring its applicability in other agricultural domains. § ACKNOWLEDGEMENTS This research is supported by ASPIRE, the technology program management pillar of Abu Dhabi's Advanced Technology Research Council (ATRC), under the ASPIRE project "Aspire Research Institute for Food Security in the Drylands" within Theme 1.4. 
http://arxiv.org/abs/2307.03092v1
20230706160753
On the solvability of boundary value problems for linear differential-algebraic equations with constant coefficients
[ "Anar Assanova", "Carsten Trunk", "Roza Uteshova" ]
math.CA
[ "math.CA", "math.FA", "Primary 34A09, 34B05, Secondary 34B99, 15A22" ]
On the solvability of boundary value problems for linear differential-algebraic equations with constant coefficients
Institute of Mathematics and Mathematical Modeling, Pushkin Str. 125, 050010, Almaty, Kazakhstan, [email protected]. This paper is supported by European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement ID: 873071, project SOMPATY (Spectral Optimization: From Mathematics to Physics and Advanced Technology). Institut für Mathematik, Technische Universität Ilmenau, Postfach 100565, D-98684, Ilmenau, Germany, [email protected]. Institute of Mathematics and Mathematical Modeling, Pushkin Str. 125, 050010, Almaty, Kazakhstan, [email protected]. MSC 2020: Primary 34A09, 34B05; Secondary 34B99, 15A22.
We study a two-point boundary value problem for a linear differential-algebraic equation with constant coefficients by using the method of parameterization. The parameter is set as the value of the continuously differentiable component of the solution at the left endpoint of the interval. Applying the Weierstrass canonical form to the matrix pair associated with the differential-algebraic equation, we obtain a criterion for the unique solvability of the problem.
§ INTRODUCTION We consider the linear differential-algebraic equation with constant coefficients of the form Eẋ(t)=Ax(t)+f(t), t∈ (0,T), subject to the boundary condition Bx(0)+Cx(T)=d. Here E, A, B, C∈ℝ^n× n, d∈ℝ^n, T>0. We suppose that the matrix pair (E,A) is regular, i.e., det(λ E - A)≠ 0 for some λ∈ℂ. By a solution of the boundary value problem (<ref>), (<ref>) we mean a function x∈ C^1([0,T],ℝ^n) satisfying equation (<ref>) and the boundary condition (<ref>). Differential-algebraic equations have become widespread over the last decades, being a tool for modeling and simulation of dynamical systems with constraints in numerous applications Kunkel:2006,brenan1995numerical,ascher1998computer,lamour2013differential, samoilenko2000linear,boichuk2004generalized,riaza2008differential. The theory of boundary value problems for differential-algebraic equations started to develop by applying modified versions of the shooting and collocation methods designed for boundary value problems for ordinary differential equations marz1984difference,clark1989numerical,lamour1991well,lamour1997shooting,BAI1991269,bai1992modified,ascher1992projected,stover2001collocation,kunkel2002symmetric. In <cit.>, P. Amodio and F. Mazzia studied problem (<ref>), (<ref>) by the method of boundary values. R. März applied the method of projectors and methods of perturbation theory <cit.> to problem (<ref>), (<ref>). The monograph by R. Lamour, R. März, and C. Tischendorf <cit.>, devoted to the projector-based study of differential-algebraic equations, provides a detailed review of this field. A series of papers by C. Trunk et al. <cit.> investigate possibilities of generalization and extension of the Kronecker canonical form to differential-algebraic equations with rectangular matrices. A number of methods and approaches have been developed for constructing their solutions. However, the methods developed for solving problem (<ref>), (<ref>) may not always be applicable for a wide class of boundary conditions.
As a consequence, this requires the development of new methods or modification of known methods of the theory of differential equations, which would be applicable to differential-algebraic equations and would be of constructive nature. In this paper, we use the method of parameterization proposed by Dzhumabaev <cit.>, which has proven to be an efficient constructive method allowing both to derive criteria for the unique solvability and obtain approximate solutions of various classes of boundary value problems <cit.>. This method was originally proposed for solving the linear boundary value problem (<ref>), (<ref>) provided detE≠ 0. In this case, a criterion for the unique solvability was obtained in terms of coefficients and an algorithm for approximate solution was developed. Our goal is to apply the method of parameterization to the boundary value problem (<ref>), (<ref>) in the case when the matrix E is not necessarily non-singular. We derive a criterion for the existence of a unique solution under certain assumptions on the matrices of the boundary condition. § MAIN RESULTS We introduce the parameter μ∈ℝ^n defined as Eμ:=Ex(0). By substituting u(t):=x(t)-μ, equation (<ref>) is transformed into the initial value problem with parameter Eu̇(t)=A(u(t)+μ)+f(t), t ∈ (0,T), Eu(0)=0. and the boundary condition (<ref>) becomes B(μ+u(0))+C(μ+u(T))=d. The following statement shows the equivalence of the boundary value problem (<ref>), (<ref>) and the boundary value problem with parameter (<ref>)-(<ref>). If x^∗∈ C^1([0,T],ℝ^n) is a solution of problem (<ref>), (<ref>), then the pair (μ^∗,u^∗)∈ℝ^n× C^1([0,T],ℝ^n), where Eμ^∗=Ex^∗(0) and u^∗(t)=x^∗(t)-μ^∗, is a solution of problem (<ref>)-(<ref>). Conversely, if a pair (μ^∗∗,u^∗∗)∈ℝ^n× C^1([0,T],ℝ^n) is a solution of problem (<ref>)-(<ref>), then the function x^∗∗∈ C^1([0,T],ℝ^n), defined by x^∗∗(t)=μ^∗∗+u^∗∗(t), is a solution of problem (<ref>), (<ref>). Let P and Q be non-singular matrices which transform (<ref>) and (<ref>) to Weierstrass canonical form <cit.>, i.e., PEQ=[ I_n_1 0; 0 N ], PAQ=[ J 0; 0 I_n_2 ], Pf=[ f̃_1; f̃_2 ], where J is an n_1× n_1 matrix in Jordan canonical form and N is an n_2× n_2 nilpotent matrix also in Jordan canonical form; n_1+n_2=n. Following <cit.>, we call the index of nilpotency of N in (<ref>) the index of the matrix pair (E,A), denoted by ν=ind(E,A). According to the space decomposition given by (<ref>), we have the function ũ(t)=(ũ_1(t),ũ_2(t))^T :=Q^-1u(t) and the vector μ̃=(μ̃_1,μ̃_2)^T :=Q^-1μ with ũ_j∈ C^1([0,T],ℝ^n_j) and μ̃_j∈ℝ^n_j, j=1,2. We then can rewrite problem (<ref>), (<ref>) in the following form: ũ̇_1(t)=J(ũ_1(t)+μ̃_1)+ f̃_1(t), ũ_1(0)=0, Nũ̇_2(t)=ũ_2(t)+μ̃_2+ f̃_2(t), Nũ_2(0)=0. If we set B̃:=BQ and C̃:=CQ, the boundary condition (<ref>) transforms into B̃(μ+u(0))+C̃(μ̃+ũ(T))=d. By a solution of the boundary value problem (<ref>)-(<ref>) we mean a pair (μ̃,ũ)∈ℝ^n× C^1([0,T],ℝ^n) with μ̃=(μ̃_1,μ̃_2)^T and ũ(t)=(ũ_1(t),ũ_2(t))^T satisfying the initial value problems (<ref>), (<ref>) and (<ref>), (<ref>), and the boundary condition (<ref>). Before we start to study problem (<ref>)-(<ref>), let us note that we are able to write down explicitly the solutions of the initial value problems (<ref>),(<ref>) and (<ref>),(<ref>). For any μ̃_1∈ℝ^n_1, the initial value problem for the linear differential equation (<ref>),(<ref>) has the unique solution ũ_1(t)=J∫_0^t e^(t-s)Jdsμ̃_1+∫_0^t e^(t-s)Jf̃_1(s) ds. 
The general solution of the ordinary differential equation (<ref>) has the form <cit.> ũ_1(t)= e^t Jũ_1(0)+J∫_0^t e^(t-s)Jdsμ̃_1+∫_0^t e^(t-s)Jf̃_1(s) ds. Applying the initial condition (<ref>), we obtain (<ref>). By Lemma 2.8 <cit.>, equation (<ref>) for fixed μ̃_2 has the unique solution ũ_2(t)=-∑_i=0^ν-1N^i[μ̃_2+f̃_2(t)]^(i)=-μ̃_2-∑_i=0^ν-1N^if̃_2^(i)(t), without specifying any initial values. Hence, taking into account (<ref>), we obtain that the second component of the parameter μ̃ is uniquely determined by μ̃_2=-∑_i=0^ν-1N^if̃_2^(i)(0). Let us now turn to the boundary value problem with parameter (<ref>)-(<ref>). The second component (μ̃_2,ũ_2(t)) of its solution (μ̃,ũ(t)) is already known from (<ref>) and (<ref>). So we are interested in finding only the first component (μ̃_1,ũ_1(t)) with μ̃_1 and ũ_1(t) satisfying (<ref>). Hence, the appropriate number of imposed boundary conditions must coincide with the number n_1 of differential equations in (<ref>). So, it is reasonable to assume that the matrices B̃, C̃∈ℝ^n × n and the right-hand side d∈ℝ^n of the boundary condition (<ref>) are of the form B̃=[ B̃_1 B̃_2; 0 0 ], C̃=[ C̃_1 C̃_2; 0 0 ], d=[ d_1; 0 ], where B̃_1,C̃_1∈ℝ^n_1× n_1, B̃_2,C̃_2∈ℝ^n_1× n_2, d_1∈ℝ^n_1. Now, inserting (<ref>) into (<ref>) and taking into account Lemma <ref> and (<ref>), we obtain the following algebraic equation in μ̃_1: D̃μ̃_1=d̃, where D̃=B̃_1+C̃_1+C̃_1 J∫_0^T e^(T-s)Jds and d̃=d_1-C̃_1∫_0^T e^(T-s)Jf̃_1(s) ds+B̃_2∑_i=0^ν-1N^if̃_2^(i)(0)+C̃_2∑_i=0^ν-1N^if̃_2^(i)(T). The equation (<ref>) has a unique solution if the matrix D̃ is non-singular. Let us assume, for instance, that J=diag(λ_1,…,λ_n_1). In this case , D̃=B̃_1+C̃_1diag(1+λ_1 ∫_0^T e^(T-s)λ_1ds,…,1+λ_n_1∫_0^T e^(T-s)λ_n_1ds ), and D̃ is non-singular if det(B̃_1+C̃_1 e^TJ)≠ 0. Suppose that the matrix D̃ is non-singular. Then equation (<ref>) has the unique solution μ̃_1=D̃^-1d̃. Substituting μ̃_̃1̃ into (<ref>), we obtain ũ_1(t) and hence the first components of the unique solution (μ̃,ũ(t)) of the boundary value problem with parameter (<ref>)-(<ref>): (μ̃_1,ũ_1(t))= (D̃^-1d̃,  ∫_0^t e^(t-s)JdsJD̃^-1d̃+∫_0^t e^(t-s)Jf̃_1(s) ds). As previously stated, the second components of (μ̃,ũ(t)) are determined by (<ref>) and (<ref>): (μ̃_2,ũ_2(t))= (-∑_i=0^ν-1N^if̃_2^(i)(0),   -∑_i=0^ν-1N^i[f̃_2^(i)(t)-f̃_2^(i)(0)]). Thus, taking into account the equivalence of the original problem (<ref>), (<ref>) and problems with parameter (<ref>)-(<ref>) and (<ref>)-(<ref>), we can summarize our results. Let (E,A) be a regular pair of square matrices and let P and Q be nonsingular matrices which transform (<ref>) to Weierstrass canonical form (<ref>). Furthermore, let ν=ind(E,A) and f∈ C^ν([0,T],ℝ^n). Then the boundary value problem with parameter (<ref>)-(<ref>), where B=B̃Q^-1, C=C̃Q^-1, and d=(d_1,0)^T, has a unique solution if and only if: * the matrix D̃=B̃_1+C̃_1+C̃_1 J∫_0^T e^(T-s)Jds is non-singular; * μ̃_2=-∑_i=0^ν-1N^if̃_2^(i)(0). The unique solution (μ̃, ũ(t)=[ (μ̃_1,ũ_1(t)); (μ̃_2,ũ_2(t)) ] of the boundary value problem with parameter (<ref>)-(<ref>) is determined by (<ref>) and (<ref>). Under the assumptions of Theorem <ref> the boundary value problem (<ref>), (<ref>) with B=B̃Q^-1, C=C̃Q^-1, and d=(d_1,0)^T has a unique solution x(t)=Q(μ̃+ũ(t)). We can also apply our method to initial value problems for linear differential-algebraic equations with constant coefficients. 
Indeed, if in the boundary condition (<ref>) we replace B and C by the identity matrix I and zero matrix of order n, respectively, we get an initial condition x(0)=d, d∈ℝ^n. Then, the parameter μ̃ defined as μ̃=Q^-1μ, where Eμ=Ex(0), satisfies the equation Qμ̃=d. The matrix Q is non-singular by assumption, so the initial value problem (<ref>),(<ref>) has a unique solution whenever d_2=∑_i=0^ν-1N^if̃_2^(i)(0). We apply the method of parameterization to problem (<ref>), (<ref>) under assumption that the matrix pair (E,A) is regular. The matrix E, as well as the matrix A, can be either singular or non-singular. Note, however, that the proposed method is not applicable if E=O (when equation (<ref>) is purely algebraic). This is due to the choice of the parameter μ: Eμ=Ex(0). amsplain 99 amodio1997numerical P. Amodio and F. Mazza, Numerical solution of differential algebraic equations and computation of consistent initial/boundary conditions, J. Comput. Appl. Math. 87 (1997), no. 1, 135–146. asanova2013well A. T. Asanova and D. S. Dzhumabaev, Well-posedness of nonlocal boundary value problems with integral condition for the system of hyperbolic equations, J. Math. Anal. Appl. 402 (2013), no. 1, 167–178. ascher1992projected U. M. Ascher and L. R. Petzold, Projected collocation for higher-order higher-index differential-algebraic equations, J. Comput. Appl. Math. 43 (1992), no. 1-2, 243–259. ascher1998computer U. M. Ascher and L. R. Petzold, Computer methods for ordinary differential equations and differential-algebraic equations, SIAM, 1998. assanova2022solution A. Asanova and R. Uteshova, Solution of a nonlocal problem for hyperbolic equations with piecewise constant argument of generalized type, Chaos, Solitons, Fractals 165 (2022), 112816. BAI1991269 Y. Bai, A perturbed collocation method for boundary-value problems in differential-algebraic equations, Appl. Math. Comput. 45 (1991), no. 3, 269–291. bai1992modified Y. Bai, A modified Lobatto collocation for linear boundary value problems of differential-algebraic equations, Computing 49 (1992), no. 2, 139–150. berger2021linear T. Berger, H. De Snoo, C. Trunk, and H. Winkler, Linear relations and their singular chains, Methods Func. Anal. Topol. 27 (2021), no. 4, 287–301. berger2016linear T. Berger, C. Trunk, and H. Winkler, Linear relations and the Kronecker canonical form, Linear Algebra Appl. 488 (2016), 13–44. boichuk2004generalized A. A. Boichuk and A. M. Samoilenko, Generalized inverse operators and Fredholm boundary-value problems, De Gruyter, 2004. brenan1995numerical K. E. Brenan, S. L. Campbell, and L. R. Petzold, Numerical solution of initial-value problems in differential-algebraic equations, SIAM, 1995. clark1989numerical K. D. Clark and L. R. Petzold, Numerical solution of boundary value problems in differential-algebraic systems, SIAM J. Sci. Stat. Comput. 10 (1989), no. 5, 915–936. Dzhumabaev:1989 D. S. Dzhumabaev, Criteria for the unique solvability of a linear boundary-value problem for an ordinary differential equation, USSR Comput. Math. Math. Phys. 29 (1989), no. 1, 34–46. dzhumabaev2010method D. S. Dzhumabaev, A method for solving the linear boundary value problem for an integro-differential equation, Comput. Math. Math. Phys. 50 (2010), no. 7, 1150–1161. dzhumabaev2016one D. S. Dzhumabaev, On one approach to solve the linear boundary value problems for Fredholm integro-differential equations, J. Comput. Appl. Math. 294 (2016), 342–357. dzhumabaev2018computational D. S. 
Dzhumabaev, Computational methods of solving the boundary value problems for the loaded differential and Fredholm integro-differential equations, Math. Methods Appl. Sci. 41 (2018), no. 4, 1439–1462. gernandt2023characteristic K. D. Clark and L. R. Petzold, On characteristic invariants of matrix pencils and linear relations, to appear in SIAM J. Matrix Anal. Appl. (2023). hartman2002ordinary P. Hartman, Ordinary differential equations, SIAM, 2002. Kunkel:2006 P. Kunkel and V. Mehrmann, Differential-algebraic equations: Analysis and numerical solution, European Mathematical Society, 2006. kunkel2002symmetric P. Kunkel and R. Stöver, Symmetric collocation methods for linear differential-algebraic boundary value problems, Numer. Math. 91 (2002), 475–501. lamour1991well R. Lamour, A well-posed shooting method for transferable DAE's, Numer. Math. 59 (1991), 815–829. lamour1997shooting R. Lamour, A shooting method for fully implicit index-2 differential algebraic equations, SIAM J. Sci. Comput. 18 (1997), no. 1, 94–114. lamour2013differential R. Lamour, R. März, and C. Tischendorf, Differential-algebraic equations: a projector based analysis, Springer Science & Business Media, 2013. leben2021finite L. Leben, F. Martínez - Pería, F. Philipp, C. Trunk, and H. Winkler, Finite rank perturbations of linear relations and matrix pencils, Complex Anal. Oper. Theory 15 (2021), 1–37. marz1984difference R. März, On difference and shooting methods for boundary value problems in differential-algebraic equations, ZAMM, Z. Angew. Math. Mech. 64 (1984), no. 11, 463–473. marz1996canonical R. März, Canonical projectors for linear differential algebraic equations, Comput. Math. Appl. 31 (1996), no. 4-5, 121–135. marz2004solvability R. März, Solvability of linear differential algebraic equations with properly stated leading terms, Result. Math. 45 (2004), 88–105. marz2005characterizing R. März, Characterizing differential algebraic equations without the use of derivative arrays, Comput. Math. Appl. 50 (2005), no. 7, 1141–1156. riaza2008differential R. Riaza, Differential-algebraic systems: Analytical aspects and circuit applications, World Scientific Publishing, 2008. samoilenko2000linear A. M. Samoilenko, M. I. Shkil, and V. P. Yakovets, Linear systems of differential equations with degeneration, Vyshcha Shkola, 2000. stover2001collocation R. Stöver, Collocation methods for solving linear differential-algebraic boundary value problems, Numer. Math. 88 (2001), 771–795.
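The unique-solvability criterion in the theorem of the Main Results section can be checked numerically once the pencil has been brought to Weierstrass form. The sketch below is only an illustration and is not part of the paper: the block J, the boundary matrices B̃_1, C̃_1 and the horizon T are invented for the example. It assembles D̃ = B̃_1 + C̃_1 + C̃_1 J∫_0^T e^(T-s)J ds by numerical quadrature and confirms that it coincides with B̃_1 + C̃_1 e^TJ, in agreement with the non-singularity condition det(B̃_1 + C̃_1 e^TJ) ≠ 0 noted in the text.

import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

# Illustrative data only: a 2x2 differential block J of the Weierstrass form,
# boundary matrices B1, C1 acting at t = 0 and t = T, and the interval length T.
T = 1.0
J = np.array([[0.5, 1.0],
              [0.0, -0.3]])
B1 = np.eye(2)
C1 = np.array([[0.2, 0.0],
               [0.1, 0.4]])

# D_tilde = B1 + C1 + C1 J int_0^T exp((T-s)J) ds, with the integral evaluated numerically
integral, _ = quad_vec(lambda s: expm((T - s) * J), 0.0, T)
D_tilde = B1 + C1 + C1 @ J @ integral

# Because J int_0^T exp((T-s)J) ds = exp(TJ) - I, the same matrix is B1 + C1 exp(TJ),
# which recovers the condition det(B1 + C1 exp(TJ)) != 0 mentioned in the text.
D_closed = B1 + C1 @ expm(T * J)
print(np.allclose(D_tilde, D_closed))                          # True
print("unique solvability:", abs(np.linalg.det(D_tilde)) > 1e-12)

If D̃ is non-singular, μ̃_1 = D̃^-1 d̃ and the first component ũ_1(t) follow from the formulas above, while the nilpotent part fixes μ̃_2 and ũ_2(t) independently of the boundary data.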
http://arxiv.org/abs/2307.00793v1
20230703072153
The Building Data Genome Directory -- An open, comprehensive data sharing platform for building performance research
[ "Xiaoyu Jin", "Chun Fu", "Hussain Kazmi", "Atilla Balint", "Ada Canaydin", "Matias Quintana", "Filip Biljecki", "Fu Xiao", "Clayton Miller" ]
stat.AP
[ "stat.AP" ]
Preprint accepted at CISBAT 2023 - The Built Environment in Transition, Hybrid International Conference, EPFL, Lausanne, Switzerland, 13-15 September 2023 ^1 Department of Building Environment and Energy Engineering, The Hong Kong Polytechnic University, Hong Kong ^2 College of Design and Engineering, National University of Singapore (NUS), Singapore ^3 Department of Electrical Engineering, KU Leuven, Belgium ^4 Future Cities Laboratory Global, Singapore-ETH Centre, Singapore ^*[email protected] The building sector plays a crucial role in the worldwide decarbonization effort, accounting for significant portions of energy consumption and environmental effects. However, the scarcity of open data sources is a continuous challenge for built environment researchers and practitioners. Although several efforts have been made to consolidate existing open datasets, no database currently offers a comprehensive collection of building data types with all subcategories and time granularities (e.g., year, month, and sub-hour). This paper presents the Building Data Genome Directory, an open data-sharing platform serving as a one-stop shop for the data necessary for vital categories of building energy research. The data directory is an online portal (http://buildingdatadirectory.org/<buildingdatadirectory.org/>) that allows filtering and discovering valuable datasets. The directory covers meter, building-level, and aggregated community-level data at the spatial scale and year-to-minute level at the temporal scale. The datasets were consolidated from a comprehensive exploration of sources, including governments, research institutes, and online energy dashboards. The results of this effort include the aggregation of 60 datasets pertaining to building energy ontologies, building energy models, building energy and water data, electric vehicle data, weather data, building information data, text-mining-based research data, image data of buildings, fault detection diagnosis data and occupant data. A crowdsourcing mechanism in the platform allows users to submit datasets they suggest for inclusion by filling out an online form. This directory can fuel research and applications on building energy efficiency, which is an essential step toward addressing the world's energy and environmental challenges. § INTRODUCTION The rise of artificial intelligence as a tool for built environment applications has the potential to impact several industries significantly. However, data availability in the built environment domain remains a critical bottleneck due to privacy concerns and acquisition costs <cit.>. Open data sources are essential for understanding energy consumption patterns, identifying areas for improvement, and testing energy-saving strategies, especially in the absence of in situ measurements. Yet, access to open data sources in the built environment domain lags behind other communities <cit.>, posing limitations for researchers and practitioners in developing effective energy-saving solutions <cit.>. In addition to limited accessibility, available open datasets are often dispersed and require labor-intensive and time-consuming collation due to varying formats and sources <cit.>. Efforts have been made to aggregate open datasets and share them through platforms or directories such as the Building Performance Database (BPD) <cit.>, the Building Data Genome (BDG) projects <cit.>, and the Directory of Buildings Energy Consumption Datasets (DBECD) <cit.>. 
However, these projects have limitations in the diversity of data types, lack of user contributions, and missing data. This paper outlines the development of a comprehensive data-sharing platform for building performance research. This effort is achieved by creating a data directory that is publicly available and includes functions for filtering, visualization, and uploading new data sets. The Building Data Genome Directory is a lightweight web app that links to a wide range of open datasets, offering users easy access to comprehensive coverage of relevant information. In subsequent sections, the paper will introduce the data sources, data category definitions, reasons for inclusion, critical functions of the web app, and some application cases. § DATA SOURCES The directory focuses on collecting information about open building performance datasets that are widely dispersed and fragmented, which conventionally would require a rigorous data collection process. Metadata for the directory was gathered from various open data sources, including government disclosure programs, research projects, institutes, and publicly available dashboards. Details on each of these data source categories are discussed in the following subsections. The directory data sources are divided according to category and type of data based on the format (e.g., tabular, image) and process of the system that created the data (e.g., HVAC, occupants, sensors). Figure <ref> shows an overview of the data set categories, which will be outlined in the following subsections. §.§ Government disclosure data Data from government disclosure programs is a significant source for built environment data. One example is the Local Law 84 (LL84) of New York City (NYC) in the United States, which requires building owners to disclose their energy and water consumption data through benchmarking annually <cit.>. This directive has led to the publication of the Energy and Water Data Disclosure dataset for Local Law 84 by the NYC government. These city-level datasets can contain many samples, with some featuring tens of thousands of buildings, although they may have coarse-grained time intervals of a year or a month. To collect these datasets, a comprehensive review of relevant literature and examination of laws pertaining to data disclosure was conducted <cit.>. Open data portals provided by city governments <cit.>, such as the NYC open data portal (<https://opendata.cityofnewyork.us/>), were also browsed to gather available datasets, ensuring the comprehensiveness of the data directory. §.§ Open research data Research institutes and organizations have published various datasets for building performance research. Some datasets are available on websites, such as the Building Data Genome dataset on Kaggle <cit.> or the 3D city model of Singapore public housing buildings on GitHub <cit.>. Other datasets are published through journals, with Scientific Data being a significant venue. A recent review has also listed open-source datasets for building energy demand <cit.>. These datasets typically provide detailed information about individual buildings but may not have large numbers of samples (generally less than 5,000). A common differentiator of these types of data sets is that the time-series frequency may be higher, sometimes even at the minute level, offering a more granular view of a building's energy usage. 
Some datasets also provide detailed information about building characteristics, solar installations <cit.>, morphological indicators <cit.>, or sensor locations and building structure <cit.>. Accessing and leveraging these datasets allows researchers to gain comprehensive insights into individual buildings and their energy usage. To collect these datasets, relevant reviews and research papers were examined, including platforms that provide access to datasets referenced in articles, §.§ Data collected from open, online dashboards In response to the growing emphasis on net-zero and sustainability goals in the higher education sector, many educational institutes and universities, such as the University of California, Berkeley, Cornell University, and Princeton University, have public energy management dashboards that provide access to energy usage data for further study and analysis. For these datasets, a data acquisition pipeline can be built using scripts to automate the process of extraction from these dashboards, enabling batch downloads of performance data from thousands of buildings. The directory includes several datasets that were retrieved from these types of public web-based energy management dashboards. For many of these dashboards, the API of the data source can usually be found using built-in web browser developer tools. Once the data API is identified, an automated process can be configured with the required data parameters, such as building ID and specific time period, to enable batch downloading of performance data from a web-based dashboard. § OVERVIEW OF THE DIRECTORY INTERFACE The Building Data Genome Directory can be found online at: http://buildingdatadirectory.org/<buildingdatadirectory.org/>. The interface comprises of a main page, referred to as the Meta Directory, which provides an overview of all available datasets and several sub-pages presenting datasets by types. The Meta Directory page introduces the Building Data Genome Directory and outlines the scope of the collected datasets. As a web app, it has filtering, visualization, and uploading functions for the datasets. Datasets pertaining to buildings, such as Building Energy and Water and Building Information provide geospatial granularity levels that correspond to individual buildings or, at the very least, communities, instead of the aggregated data of an entire city. The Meta Directory includes a schematic diagram showcasing the various datasets available in the Building Data Genome Directory, as shown in Figure <ref>. Each black label in the diagram represents a specific data type and has a corresponding subpage, with its link conveniently located on the left column of the web page. The scope description for these types and the representative datasets are presented in Table <ref>. The Add New Dataset uploading function is at the bottom of the left-hand button. Users must fill in the Dataset Name, URL, and Dataset Type items to submit a possible contribution to the directory. The datasets submitted by the users will be stored and displayed at the bottom of the Meta Directory page, and they will be added to the directory after undergoing a review process. The category with the highest number of data sets is Building Energy and Water, which includes over 30 datasets at the moment. A metadata table that provides essential information about the datasets is displayed on this page, including disclosure status (e.g., data opening level, license availability, organization) and information on the building samples. 
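In practice, such a metadata table can also be queried programmatically. The short pandas sketch below illustrates the idea; the column names and example rows are assumptions made for illustration and do not reflect the directory's actual schema.

import pandas as pd

# Hypothetical metadata table; columns and rows are invented for this sketch.
meta = pd.DataFrame([
    {"dataset": "Office benchmark A", "location": "New York",  "interval": "yearly", "building_type": "office",      "n_buildings": 25000},
    {"dataset": "Campus meters B",    "location": "Singapore", "interval": "hourly", "building_type": "educational", "n_buildings": 300},
    {"dataset": "Housing model C",    "location": "Singapore", "interval": "static", "building_type": "residential", "n_buildings": 10000},
])

def filter_datasets(df, location=None, interval=None, building_type=None):
    """Select datasets by location, time interval and building type."""
    mask = pd.Series(True, index=df.index)
    if location is not None:
        mask &= df["location"].eq(location)
    if interval is not None:
        mask &= df["interval"].eq(interval)
    if building_type is not None:
        mask &= df["building_type"].eq(building_type)
    return df[mask]

print(filter_datasets(meta, location="Singapore", interval="hourly"))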
Figure <ref> shows the filtering and visualization functions. The filtering functions enable users to select datasets by location, time interval, and building type. The visualization functions include bar plots with adjustable axes to visualize numerical information, bubble plots to display sample and variable numbers with the size of circles denoting sample sizes and variable quantities, and heatmaps to visualize variable categories. § CONCLUSION AND FUTURE WORKS The Building Data Genome Directory is a potentially valuable resource for building energy research, providing comprehensive datasets and web app functions for filtering, visualization, and uploading. This directory can be a starting point for researchers and analysts who are beginning the search for applicable open datasets for their studies. Numerous research endeavors are anticipated to emerge as branches stemming from this directory. As highlighted by Jin et al. <cit.>, the availability of comprehensive datasets will significantly expedite research in building energy, encompassing areas such as building energy management, grid management, and socio-economic analysis. The team is developing a sub-branch within the Building Data Genome Directory focusing on time-series feature analysis utilizing energy consumption data. §.§ Future expansion and data quality considerations Future work can optimize the directory by improving functions such as allowing brief dataset descriptions during uploading and incorporating semantic searching capabilities. Enhancing search capabilities for different data types, such as geographic location, would also improve usability, as would considering unconventional data sources such as scraping relevant data on buildings from property websites <cit.> and volunteered geographic information such as OpenStreetMap <cit.> in locations that have data of reliable quality. Finally, to strengthen the crowdsourcing aspect of our platform, we plan to implement functionality to allow users to flag erroneous information and allow trusted users to edit the database. Building a community around the directory would foster user communication and optimize the web app. Collecting feedback and insights through discussions and forums would provide valuable inputs for enhancing features and usability. By actively engaging with users, the directory can continue to evolve and serve as a valuable resource for building energy researchers. § ACKNOWLEDGMENTS The authors gratefully acknowledge the support for this research from the National Key Research and Development Program of China (2021YFE0107400), the Research Grants Council of the Hong Kong SAR (C5018-20GF), and the Singapore Ministry of Education (MOE) Tier 1 Grants: A-0008301-01-00 and A-8000139-01-00.
http://arxiv.org/abs/2307.02794v1
20230706061052
A Testbed To Study Adversarial Cyber-Attack Strategies in Enterprise Networks
[ "Ayush Kumar", "David K. Yau" ]
cs.CR
[ "cs.CR" ]
[email protected] 0000-0002-5174-9906 Singapore University of Technology and Design [email protected] Singapore University of Technology and Design In this work, we propose a testbed environment to capture the attack strategies of an adversary carrying out a cyber-attack on an enterprise network. The testbed contains nodes with known security vulnerabilities which can be exploited by hackers. Participants can be invited to play the role of a hacker (e.g., black-hat, hacktivist) and attack the testbed. The testbed is designed such that there are multiple attack pathways available to hackers. We describe the working of the testbed components and discuss its implementation on a VMware ESXi server. Finally, we subject our testbed implementation to a few well-known cyber-attack strategies, collect data during the process and present our analysis of the data. CCS Concepts: Security and privacy. A Testbed To Study Adversarial Cyber-Attack Strategies in Enterprise Networks. David K. Yau. August 1, 2023. § INTRODUCTION Cyber-attacks are a major factor affecting the operations of enterprises, government organizations, critical infrastructures, SMEs, etc. Not only do they threaten to disrupt those operations, they are also a leading cause of financial losses to the affected organizations <cit.>. A number of cyber-attacks are targeted at enterprise networks belonging to SMEs (Small and Medium Enterprises) as well as large enterprises, e.g., to steal valuable business data. When an adversary targets such a network, their attack strategy leaves a footprint across the network and the devices connected to it. By attack strategy, we refer to actions such as how an adversary gains access to a target network, how it moves across the network, how it looks for vulnerable machines connected to the network and exploits them, etc. If data is collected from the target network while under attack by an adversary, it can be analyzed for patterns. Further, those patterns can be used to detect future cyber-attacks. Data collected from our testbed can also be used to validate frameworks modelling the motivations, cognitive antecedents and dynamic decision-making processes of hackers in the lead-up to as well as during cyber-attacks, which can be helpful in predicting them. As we cannot allow an adversary to attack a real enterprise network, it makes sense to build a testbed environment that emulates an enterprise network and let adversaries attack the testbed instead. Hence, in this work, we present a controlled testbed environment to emulate enterprise networks and capture adversarial attack strategies. The testbed consists of nodes configured to serve functions similar to the machines that can be found in a real-world enterprise network. Some of the nodes are configured with known security vulnerabilities. Adversaries can be allowed to attack the testbed network by exploiting the vulnerabilities, and the resulting system and network-level data can be collected using an in-built data logging infrastructure. This testbed is targeted at cyber security researchers who can easily and quickly bootstrap experiments. It also offers flexibility in terms of changing the testbed network topology, modifying existing nodes and their functionalities, or increasing the number of nodes when required.
§ RELATED WORK There are only a handful of studies on building enterprise network testbeds in existing literature. Few works<cit.> have proposed testbeds using a node virtualization approach to emulate real-world enterprise networks which can be used for analysing worm/malware propagation. These testbeds are supposed to be used to evaluate/validate detection and defense techniques against malware/worm propagation. However, they are more focused on a specific cyber-attack (malware, worms) and emulating the topology and number of nodes in real-world enterprise networks. In comparison, our focus is on capturing the architecture of real-world enterprise networks. Moreover, our proposed testbed is aimed at studying the attack strategies of cyber adversaries and using the patterns extracted from them to design cyber-attack detection methods in enterprise networks. In <cit.>, the authors have presented a testbed to evaluate their proposed fault-localization system, Sherlock, meant to be deployed in large enterprise networks. The testbed consists of two LANs connected by a router, with each LAN containing web servers, SQL backend server, DNS servers, authentication servers and clients. Their testbed is closest to ours in terms of architecture, though its purpose is entirely different. § TESTBED OVERVIEW In this section, we begin by giving an overview of the architecture and configuration of our proposed testbed followed by listing down the security vulnerabilities used to infect some of the testbed nodes. §.§ Architecture and Configuration Large enterprises and SMEs (Small and Medium Enterprises) have different network architectures. Therefore, we have proposed two sets of architectures for our testbed representing large enterprises and SMEs' networks as shown in Fig. <ref> and Fig. <ref>. The testbed machines and their functions are listed below: * Jump point/host: This is the machine through which a hacker gains entry into our testbed network. * Web server: This machine acts as a web server hosting a fictitious company's web page. * Application server: This machine is configured as a server hosting a web application with a MySQL database at its back-end. Through the web application, employees can login with their credentials to retrieve and update their employment-related details saved with the company. The database also stores the VPN (Virtual Private Network) credentials of the employees which are used to connect to hosts connected to a remote company site. * VPN client: This machine acts as a VPN client which is used to connect to company's VPN network. * VPN server: This machine runs a VPN server, waiting for connection requests from VPN clients. The TLS/SSL VPN supports packet encryption, certificate-based server validation, client authentication and multiple clients. * VPN host: This machine acts as a host accessible from the VPN network only. It stores classified files and documents on the company's products. * DNS server: This machine runs a local DNS server, which maps internal company domains to IP addresses as well as handles external DNS queries. * File server: This machine runs a local file server storing important files which can be accessed by the company's employees. * Logging server: This machine is the centralized point where system logs and packet traces from all the machines connected to the testbed network are collected. 
It has two modules: * A packet traffic capture utility (e.g., Wireshark, Tcpdump) runs on the logging server in promiscuous mode so that it listens to all the packet traffic on the testbed network being sent or received by the other machines. * The system log (syslog) files from the all the machines are piped to the logging server using rsyslog utility. * Decoy hosts: These machines are meant to confuse the hackers and delay their attack on target machines. §.§ Security Vulnerabilities We have included several real-world security vulnerabilities in the testbed nodes which can be exploited by hackers to accomplish their attacks on our testbed. The security vulnerabilities are listed below: * The web server is configured with a weak login password which hackers should be able to obtain using password brute forcing. * The MySQL database at the backend of the application server is vulnerable to SQL injection. This means that using specially crafted inputs to the web application, hackers can retrieve all or some of the employees' sensitive information stored in the database. * The VPN server uses only password-based authentication to authenticate clients which can result in hackers obtaining access to the VPN network using stolen employee VPN credentials. * The DNS server is vulnerable to cache poisoning in which a hacker can spoof responses from other DNS servers to redirect employees to malicious domains. * The file server is vulnerable to remote command execution, i.e., a hacker can obtain a reverse-shell to the file server. § HACKING THE TESTBED In this section, we present some of the possible pathways which can be taken by hackers in our testbed. §.§ Attack pathways In our testbed, hackers have been provided choices in terms of nodes with different functionalities and their associated security vulnerabilities. It is up to the hacker to figure how to reach the nodes, decide whether to attack them and the kind of attack to carry out. We now outline some of the possible attack pathways that can be followed in our testbed by three types of hackers: hacktivists<cit.>, petty thieves<cit.> and black-hats<cit.>. §.§.§ Hacktivist pathways Hacktivists are highly skilled and generally tend to disrupt/damage systems or leak confidential information. Therefore, they can target both SME and large enterprise networks, though in the latter case they may have to use malware to spread and infect machines, use privilege escalation techniques and move across the company network (LAN2 → LAN1). Hacktivists may attempt to deface the website hosted on web server by changing its contents, or leak private employee details online, or change the contents of the database at the back-end of the application server, or disable the web server itself, or deny DNS service, or disable the file server. The pathways corresponding to hacktivist are shown in Fig. <ref>. §.§.§ Petty Thief pathways Since petty thieves are mostly financially motivated and low to medium-skilled, they would target an SME network rather than a large enterprise network. They may attempt to obtain e-mail and phone records of the employees from the database at the back-end of application server. The pathways corresponding to petty thief are shown in Fig. <ref>. It is to be noted here that some of the actions mentioned in the figure can not be captured by our testbed itself and have to be enabled separately. §.§.§ Black-hat pathways Black-hats are highly skilled and are typically after or hired to steal high-worth information. 
Targeting large enterprise networks, similar to hacktivists, they may have to use malware to spread and infect machines, use privilege escalation techniques and move across the company network (LAN2 → LAN1). Once they have established access to both the LANs, they may attempt to steal confidential high-value product files (e.g., source code) stored on hosts connected to the company's remote-site VPN network, or send phishing emails with ransomware/malware attachments from legitimate employee e-mail accounts to the e-mail address of a high-value target such as the company CEO/CTO (to make them look convincing) and extract valuable information once they get access to the target's computer. The pathways corresponding to black-hat are shown in Fig. <ref>. § PRELIMINARY DATA COLLECTION AND ANALYSIS We implemented our proposed testbed on a VMware ESXi server. The same testbed can also be configured on a cyber security experimentation platform such as DETERlab. The testbed machines are Virtual Machines (VMs) configured on the ESXi server and running either Ubuntu 16.04, Ubuntu 20.04 or Windows 10 OS. Most VMs are configured with ~8GB of RAM, 4 virtual CPUs and few hundreds of GBs of hard disk. The web servers are built using Apache2, database using MySQL, DNS server using BIND9 and file server using Samba. All the VMs except the VPN host have their system times synchronized to a local NTP (Network Time Protocol) server. The VPN server broadcasts the time on the VPN network which is received by VPN host. All the VMs have an administrator account (representing the company's IT admin account) and few VMs have a local user account (representing the employee's account). A virtual NAT router provides access to the Internet for all the testbed VMs. One of the VMs is also configured with an OPNsense firewall to manage Internet access for the other VMs. By default, external Internet access is blocked for all the VMs except the jump host. One of the first actions by any hacker who targets a network, enterprise or not, is to find other hosts connected to the same subnet. We therefore run an open-source network scanning tool, nmap on the jump host using the subnet (10.0.2.0/24). It is reflected in the packet capture collected on logging server as shown in Fig. <ref>. The jump host makes TCP connection requests to all the connected hosts in the subnet, followed by a three-way handshake ([SYN], [SYN,ACK], [ACK]) for each established connection. It also makes DNS queries to the Google DNS server (IP address: 8.8.8.8) to obtain domain name mappings for the connected hosts if any. Thus, if in an enterprise network, there is an abnormal increase in number of TCP connection requests to hosts or number of DNS queries, it is likely due to someone trying to scan the network. If a hacker find the web server, he/she may attempt to brute-force its credentials to login to the server via SSH using tools such as metasploit/hydra/nmap-scripting engine. Login credential brute-forcing is a common tactic employed by hackers when they suspect a vulnerable web server. As seen from the packet capture at logging server displayed in Fig. <ref>, such as action would result in multiple TCP connection requests from the jump host to the target web server, followed by a three-way handshake ([SYN], [SYN,ACK], [ACK] packets) for each established connection. It also leads to client-server Diffie-Hellman key exchanges followed by forwarding of encrypted packets containing username and password from client to server. 
As most of the password brute-force attempts are unsuccessful and SSH allows a maximum of 6 authentication attempts per connection, it leads to several connection resets ([RST] packets). This means that if there is an abnormal increase in the number of TCP connection requests to a host (particularly a web server), DH key exchanges and TCP connection resets, it is likely due to someone brute-forcing SSH login credentials for the host. A hacker may also find the web application server with a database at its back-end by probing the testbed network. Since SQL injection is one of the most common attacks carried out by hackers against databases, we run an open-source penetration testing tool for automated SQL injection vulnerability detection and exploitation, sqlmap, on the jump host to find if the web application database is vulnerable to SQL injection and subsequently dump the complete database. A packet capture during the attack (Fig. <ref>) shows multiple TCP connection requests followed by HTTP GET requests sent from the jump host to the web application server. It can thus be inferred that an anomalous increase in the number of TCP connection requests and HTTP GET requests to a host (particularly a web application) is indicative of someone dumping the back-end database. § CONCLUSION We have proposed a testbed to analyse the attack strategies of cyber adversaries in enterprise networks and use them for attack detection. The testbed network consists of several nodes with different functionalities, with some of the nodes infected with security vulnerabilities. The testbed presents multiple pathways to a hacker invited to attack the testbed. We discuss an implementation of our proposed testbed, subject it to a few well-known cyber-attack strategies and analyse the data collected. Our preliminary analysis strengthens the initial argument presented earlier that data collected from the testbed can be used to capture patterns in the attack strategies deployed by adversaries in enterprise networks.
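The detection heuristics described in this section map directly onto simple per-source counters over the traffic captured at the logging server. The fragment below is an illustrative sketch only, not part of the testbed tooling: it uses Scapy, an assumed capture file name, and arbitrary thresholds to flag sources that open an unusually large number of TCP connections or issue many DNS queries.

from collections import Counter
from scapy.all import rdpcap, IP, TCP, DNS

SYN_THRESHOLD = 100   # assumed cut-offs; tune to the baseline of the monitored network
DNS_THRESHOLD = 50

syn_counts, dns_counts = Counter(), Counter()

for pkt in rdpcap("logging_server.pcap"):      # hypothetical capture file from the logging server
    if IP not in pkt:
        continue
    src = pkt[IP].src
    if TCP in pkt:
        flags = int(pkt[TCP].flags)
        if flags & 0x02 and not flags & 0x10:  # SYN set, ACK clear: a new connection attempt
            syn_counts[src] += 1
    if pkt.haslayer(DNS) and pkt[DNS].qr == 0: # a DNS query (not a response)
        dns_counts[src] += 1

for src in set(syn_counts) | set(dns_counts):
    if syn_counts[src] > SYN_THRESHOLD or dns_counts[src] > DNS_THRESHOLD:
        print(f"possible network scan from {src}: {syn_counts[src]} SYNs, {dns_counts[src]} DNS queries")

An analogous counter over [RST] packets and Diffie-Hellman key exchanges per destination would capture the SSH brute-force signature discussed above.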
http://arxiv.org/abs/2307.02925v1
20230706112346
Negative radiation pressure in Bose-Einstein condensates
[ "Dominik Ciurla", "Péter Forgács", "Árpád Lukács", "Tomasz Romańczukiewicz" ]
nlin.PS
[ "nlin.PS", "hep-th" ]
[email protected] Institute of Theoretical Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland Wigner RCP RMI, H1525 Budapest, POB 49, Hungary Institut Denis-Poisson CNRS/UMR 7013, Université de Tours, Parc de Grandmont, 37200 Tours, France Durham University, Department of Mathematical Sciences, Stockton Road, Durham, DH1 3LE, United Kingdom Wigner RCP RMI, H1525 Budapest, POB 49, Hungary Institute of Theoretical Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland In two-component non-linear Schrödinger equations, the force exerted by incident monochromatic plane waves on an embedded dark soliton and on dark-bright-type solitons is investigated, both perturbatively and by numerical simulations. When the incoming wave is non-vanishing only in the component orthogonal to that of the embedded dark soliton, its acceleration is in the opposite direction to that of the incoming wave. This somewhat surprising phenomenon can be attributed to the well-known “negative effective mass” of the dark soliton. When a dark-bright soliton, whose effective mass is also negative, is hit by an incoming wave non-vanishing in the component corresponding to the dark soliton, the direction of its acceleration coincides with that of the incoming wave. This implies that the net force acting on it is in the opposite direction to that of the incoming wave. This rather counter-intuitive effect is yet another manifestation of negative radiation pressure exerted by the incident wave, observed in other systems. When a dark-bright soliton interacts with an incoming wave in the component of the bright soliton, it accelerates in the opposite direction, hence the force is “pushing” it now. We expect that these remarkable effects, in particular the negative radiation pressure, can be experimentally verified in Bose-Einstein condensates. Negative radiation pressure in Bose–Einstein condensates. Tomasz Romańczukiewicz. August 1, 2023. § INTRODUCTION In this paper we consider the interaction of solitons and their radiation field in a two-component non-linear Schrödinger equation (NLSE) in one dimension. The NLSE is a widely used model, for example in non-linear optics <cit.>, and in particular it serves to describe Bose-Einstein condensates (BECs) of neutral atoms. The motivation of our enterprise is to point out some simple, but somewhat surprising physical phenomena, which are hopefully experimentally observable in BECs. BECs were realised experimentally for the first time in 1995 <cit.>, and have been produced in numerous experiments ever since. Many BECs can be described in a mean-field approximation, leading to the NLSE, for which classical field-theoretical methods are appropriate. Moreover, in a number of situations, the dynamics of solitons in a BEC can be well approximated by restricting the dynamics to one spatial dimension. Often, the trap used in experiments can be approximated by a harmonic potential, and choosing its frequencies in the two chosen dimensions to be much larger than in the remaining third dimension, an effective, quasi-1D cigar-shaped condensate is achieved <cit.>. BECs with two distinguishable components are described in a mean-field approximation by coupled nonlinear Schrödinger equations (CNLSE) <cit.>.
Experimentally, such two-component BECs can be achieved either by mixing two different atomic species, e.g., ^41K and ^87Rb <cit.> (m_1/m_2 ≈ 0.47) or by using two different spin states of the same species <cit.>. Although different kinds of solitons can be found in this system, we focused our considerations on the so-called dark-bright <cit.> and dark <cit.> solitons. In the present work we show that in the two-component CNLSE, the interaction of dark and dark-bright solitons with incoming small-amplitude plane waves can be reasonably well described by standard scattering theory. We derive the force acting on the solitons in terms of scattering data. We note that the value of the coupling, g_12, between the two components of the CNLSE plays an important rôle, since for g_12=1 (in suitable units) the system is integrable <cit.>, and in this special case the net force exerted by incoming waves on solitons is zero. In fact, for the integrable case, exact solutions corresponding to nonlinear superposition of cnoidal waves and solitons have been constructed <cit.>. We find that quite generally, for g_12 ≠ 1 in two-component CNLS systems, an incoming plane wave can exert a pulling force on certain solitons – referred to as “tractor beam” effect, or “negative radiation pressure” (NRP). As has already been demonstrated for a number of cases in one and two dimensions, in the presence of two scattering channels with different dispersion relations, an incoming plane wave can exert a pulling force on the scatterer <cit.>. In the linearized approximation the force acting on the soliton can be easily found from momentum conservation. For the case of main interest to us, when an incoming plane wave of amplitude a and wave number k_1 is nonzero only in one component, say 1 (dark), the induced force can be written as: F = a^2 ( k_1^2 (1 + R_1 - T_1 ) + (k_2^+ )^2 (R_2^+- T_2^+ ) + (k_2^- )^2 (R_2^- - T_2^- ) ), where R_i resp. T_i denote the reflection resp. transmission coefficients for an incoming wave into channel i and k_2^± are the two possible wave numbers in the bright component. We note that the dynamics due to a force acting on dark resp. dark-bright solitons is somewhat counter-intuitive, since the direction of the force and that of the resulting acceleration point in opposite directions because of their effective negative mass. The paper is organized as follows. We briefly review some of the main properties of the two-component CNLSE (Sec. <ref>) and exhibit the expressions for the energy and field momentum (Sec. <ref>). Next, the linearized equations of motion around a soliton are presented (Sec. <ref>). Then, we introduce the general notion of the Newtonian approximation using the effective mass and force (Sec. <ref>). After that, we proceed to apply these ideas to the specific cases: dark and dark-bright solitons with a small wave in each component separately (Secs. <ref> and <ref>). Most importantly, we derive the acceleration of the solitons using an effective model and compare it with numerical simulations of the full CNLSE.
§ THE MODEL §.§ Coupled nonlinear Schrödinger (Gross–Pitaevskii) equation In the mean-field regime, a one-dimensional two-component Bose–Einstein condensate can be described by two coupled nonlinear Schrödinger equations (CNLSE), also called Gross–Pitaevskii equations, of the form <cit.> i ħ∂_t ψ_1 = - 1/2 m_1∂_xxψ_1 + ( g_11 |ψ_1|^2 + g_12 |ψ_2|^2)ψ_1 + V_1(x)ψ_1, i ħ∂_t ψ_2 = - 1/2 m_2∂_xxψ_2 + ( g_22 |ψ_2|^2 + g_12 |ψ_1|^2)ψ_2 + V_2(x)ψ_2, where ψ_i (i=1,2) denote the (complex) wave functions of the two components of the condensate, m_i are their masses and V_i are the trapping potentials experienced by the i-th component. The couplings can be written as g_ij = 2πħ^2a_ij/m_ij, where a_ij denote the s-wave scattering lengths between the two components (or within one component in the case of a_ii), and m_ij = m_i m_j/(m_i + m_j) is the reduced mass. Positive values of a_ij (and therefore also of g_ij) correspond to repulsive interaction between the components i and j, whereas a negative value corresponds to the interaction being attractive. In order to simplify the problem, we reduce the number of parameters used. First, we consider a condensate made of two different spin states of the same atomic species; therefore, m_1 = m_2, and we can rescale m_i = 1. Second, we set g_ii = 1 and keep g_12 as a free parameter. The latter is justified, because the ratio of scattering lengths in experiments is often close to one, e.g., in the mixture of the |2,1⟩ and |1,-1⟩ states of ^87Rb without additional tuning it is a_11/a_12/a_22 = 1.03/1/0.97 <cit.>, or for |1,-1⟩ and |2,-2⟩ states: a_11/a_12/a_22 = 1.01/1/1 <cit.>. Scattering lengths can be manipulated (both their magnitudes and their signs) using Feshbach resonances <cit.>. In particular, a_12 can be tuned independently <cit.>, and therefore different values of the g_12 coefficient are achievable in experiments. Finally, we assume that V_i = 0, which should be a valid first approximation, provided that the trap is sufficiently large. After these simplifications and putting ħ = 1, the set of equations (<ref>) takes the form i ∂_t ψ_1 = - 1/2∂_xxψ_1 + ( |ψ_1|^2 + g_12 |ψ_2|^2)ψ_1 , i ∂_t ψ_2 = - 1/2∂_xxψ_2 + ( |ψ_2|^2 + g_12 |ψ_1|^2)ψ_2 . The system of equations (<ref>) has various types of solitonic solutions, see the review <cit.>. The simplest solutions correspond to the embedding of a scalar soliton into one of the components. We shall consider embedded dark solitons, for which the probability density has a dip, and dark-bright (DB) solitons, having a dip and a peak of the probability density in one, resp. in the other component of (ψ_1 ,ψ_2). When g_12 = 1, Eqs. (<ref>) correspond to the Manakov system <cit.> which is known to be integrable, cf. also Ref. <cit.>. For the integrable case, the DB solitons are known analytically, while for values of g_12 ≠ 1 we have solved Eqs. (<ref>) numerically. For such values of g_12 ≠ 1, we have shown analytically and also confirmed by numerical simulations that incoming sound waves (referred to as “radiation”) do exert a force on the solitons. We have found that an incoming wave pushes a dark-bright soliton in its direction of propagation. At first sight, this may correspond to what one would expect. However, taking into account that the effective mass of the DB soliton is negative, the DB soliton accelerates in the direction opposite to that of the force. We refer to such a situation as negative radiation pressure (NRP).
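To make the dimensionless equations above easy to experiment with, a minimal sketch of their right-hand sides is given below. This is only an illustration, assuming a uniform grid with spacing dx and a second-order central-difference Laplacian with the boundary points left untouched; the function name cnlse_rhs and this particular discretization are ours and are not part of the numerical setup used later in the paper (which relies on a split-step scheme).

```python
import numpy as np

def cnlse_rhs(psi1, psi2, g12, dx):
    """d(psi_i)/dt for the dimensionless CNLSE:
    i d_t psi_1 = -1/2 d_xx psi_1 + (|psi_1|^2 + g12 |psi_2|^2) psi_1  (and 1 <-> 2)."""
    def lap(f):
        out = np.zeros_like(f)
        out[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
        return out
    rhs1 = -1j * (-0.5 * lap(psi1) + (np.abs(psi1)**2 + g12 * np.abs(psi2)**2) * psi1)
    rhs2 = -1j * (-0.5 * lap(psi2) + (np.abs(psi2)**2 + g12 * np.abs(psi1)**2) * psi2)
    return rhs1, rhs2
```

As a quick sanity check, for the embedded dark soliton ψ_1 = √(μ) tanh(√(μ) x), ψ_2 = 0 the returned rhs1 should be close to -iμψ_1 away from the grid boundaries, consistent with the stationary solutions discussed below.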
We remark that in the literature on NRP <cit.>, in the systems considered up to now only positive masses occurred; therefore the NRP exerted by incident plane waves has manifested itself by a pulling effect. §.§ Integrals of motion The Lagrangian density corresponding to the CNLSE (<ref>) is <cit.>: ℒ = 1/2∑_i=1,2[ i ( ψ_i^* ∂_t ψ_i - ψ_i ∂_t ψ_i^* ) - |∂_x ψ_i|^2 - |ψ_i|^4 - g_12 |ψ_1|^2 |ψ_2|^2 ] . Using the symmetries of the Lagrangian (<ref>) and the Noether theorem (details in the Appendix <ref>), the total energy and momentum are derived as E = 1/2∑_i=1,2∫_-∞^∞( |∂_x ψ_i|^2 + |ψ_i|^4 + g_12 |ψ_1|^2 |ψ_2|^2 ) dx , P = i/2∑_i=1,2∫_-∞^∞(ψ_i ∂_x ψ_i^* - ψ_i^* ∂_x ψ_i) dx , and it is shown that they obey the equations ∂_t E = 1/2∑_i=1,2. ( ∂_x ψ_i^* ∂_t ψ_i + ∂_x ψ_i ∂_t ψ_i^* ) |_-∞^∞ , ∂_t P = 1/2∑_i=1,2. (- i ( ψ_i^* ∂_t ψ_i - ψ_i ∂_t ψ_i^* ) - |∂_x ψ_i|^2 + |ψ_i|^4 + g_12 |ψ_1|^2 |ψ_2|^2 ) |_-∞^∞ . It is also worth noting that the solutions of Eq. (<ref>) obey the following continuity equations (even if we include the trapping potential): ∂_t |ψ_i|^2 + ∂_x J_i = 0 , where J_i = i/2(ψ_i ∂_x ψ_i^* - ψ_i^* ∂_x ψ_i) . Note that P = ∑_i=1,2∫_-∞^∞ J_i dx. Integrating Eq. (<ref>) over the whole space and using the Newton-Leibniz theorem, we obtain ∂_t ∫_-∞^∞ |ψ_i|^2 dx = - . J_i |_-∞^∞ . We can choose the normalization as ∫ |ψ_i(x,t)|^2 dx = N_i, where N_i is the number of atoms in the i-th component. These particle numbers are conserved. §.§ Linearization around a soliton We shall consider stationary solitons of Eqs. (<ref>) of the form ψ_i(x,t) = e^- i μ_i tΦ_i(x), where Φ_i(x) are real functions satisfying the following equations: μ_1 Φ_1 = - 1/2∂_xxΦ_1 + ( |Φ_1|^2 + g_12 |Φ_2|^2)Φ_1 , μ_2 Φ_2 = - 1/2∂_xxΦ_2 + ( |Φ_2|^2 + g_12 |Φ_1|^2)Φ_2 . If the wave function is normalized to the number of atoms in each component, then μ_i are determined from these normalization conditions, and they are interpreted as chemical potentials <cit.>. Let us consider a small perturbation of a soliton solution of Eqs. (<ref>), of the form: ψ_i(x,t) = e^- i μ_i t( Φ_i(x) + a ξ_i(x,t) ), where the perturbation parameter a ≪ 1. Moreover, let us make an ansatz ξ_i(x,t) = e^i ω̃ tξ_i^+(x) + e^-i ω̃ tξ_i^-(x). After inserting these Ansätze into Eq. (<ref>) and keeping only terms linear in a, we obtain that ξ_i^- and ξ_i^+ satisfy: (-1/2∂_xx + 𝐌) Ξ = 𝐝𝐢𝐚𝐠(μ_1 + ω̃, μ_1 - ω̃, μ_2 + ω̃, μ_2 - ω̃) Ξ, where 𝐝𝐢𝐚𝐠 denotes a diagonal matrix, Ξ stands for the vector Ξ = (ξ_1^-, ξ_1^+^*, ξ_2^-, ξ_2^+^*)^T and 𝐌 = ( 2 Φ_1^2 + g_12Φ_2^2 Φ_1^2 g_12Φ_1 Φ_2 g_12Φ_1 Φ_2 Φ_1^2 2 Φ_1^2 + g_12Φ_2^2 g_12Φ_1 Φ_2 g_12Φ_1 Φ_2 g_12Φ_1 Φ_2 g_12Φ_1 Φ_2 g_12Φ_1^2 + 2 Φ_2^2 Φ_2^2 g_12Φ_1 Φ_2 g_12Φ_1 Φ_2 Φ_2^2 g_12Φ_1^2 + 2 Φ_2^2 ). We shall consider a setup consisting of a soliton with a wave incoming from - ∞ in one of the two components moving to the right. It corresponds to equation (<ref>) with a interpreted as the amplitude (and the appropriate boundary conditions discussed later). Using the linearization described above, the wave can be written as a e^- i μ_i tξ_i(x, t) = a e^- i (μ_i - ω̃) tξ_i^+(x) + a e^- i (μ_i + ω̃) tξ_i^-(x) and we can denote its frequencies as ω_i^± = μ_i ∓ω̃, which correspond to those on the r.h.s. of Eq. (<ref>). Sometimes we shall loosely refer to ω̃ as the frequency, but the true frequencies of the wave are given by ω_i^±. We assume that asymptotically these waves have the form of monochromatic plane waves.
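For concreteness, the pointwise entries of the 4×4 potential matrix 𝐌 defined above can be written down as a short function; this is only a transcription of the matrix for given real background values Φ_1, Φ_2 (the function name is ours), not the solver used later for the scattering problems.

```python
import numpy as np

def linearization_matrix(phi1, phi2, g12):
    """The 4x4 matrix M entering (-1/2 d_xx + M) Xi = diag(mu1+w, mu1-w, mu2+w, mu2-w) Xi,
    evaluated at one point of the real background profiles (phi1, phi2)."""
    p1, p2 = phi1**2, phi2**2
    c = g12 * phi1 * phi2
    return np.array([
        [2*p1 + g12*p2, p1,            c,             c            ],
        [p1,            2*p1 + g12*p2, c,             c            ],
        [c,             c,             g12*p1 + 2*p2, p2           ],
        [c,             c,             p2,            g12*p1 + 2*p2],
    ])
```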
We shall define transmission and reflection coefficients, separately for each of the examples, as coefficients of the asymptotic plane wave modes in a solution. § NEWTONIAN MOTION AND THE EFFECTIVE MASS OF SOLITONS We expect both the dark and the dark-bright solitons of the NLSE to behave as Newtonian particles in the first approximation, albeit with unusual dynamics, due to their negative effective masses. In this present context, we refer to Ref. <cit.> on the motion of dark solitons, and for a recent review on the dynamics of solitons in the vector NLSE see Ref. <cit.>. More precisely, we shall assume that in the presence of perturbations, the solitons do not change their shape, and that we can treat the centre of the soliton x_0 solely as a function of time, reducing the problem to one-dimensional classical dynamics. In this description, x_0(t) is expected to obey the equation M ẍ_0(t) = F , where M, resp. F is the effective mass resp. force. These quantities are obtained from the integrals of motion discussed in Section <ref>. The description of the dynamics of the dark soliton is complicated by the fact, that the wavefunction describes the soliton on the top of a constant background <cit.>. The energy, E, and the momentum P of the dark soliton have to be defined carefully, they have to be “renormalized” in order to subtract the contribution from the background <cit.>. The renormalized quantities P_s and E_s will be given separately for the dark and dark-bright soliton. See Secs. <ref> and <ref>. A useful definition of the effective mass, M, from the “renormalized” total energy, E_s, resp. momentum, P_s, for a soliton moving with velocity v, is given as: M = . d^2 E_s/ dv^2|_v=0 = . d P_s/ dv|_v=0 . If the soliton is indeed moving according to Newton's law, M computed from E_s should match that derived from P_s. The effective force exerted by the sound waves on the soliton can be obtained as the time derivative of the total momentum, ∂_t P. Since the “renormalization corrections” are time-independent, ∂_t P=∂_t P_s. We shall be interested in the force averaged over a period of the incoming wave, thus we are led to define the effective force as: F = ⟨∂_t P ⟩_T , where P is the total momentum including the radiation and ⟨·⟩_T is the average over a period. We note that to evaluate Eq. (<ref>) it is sufficient to know the asymptotic form of the radiation in order to compute the effective force. In the computation of F, we can only keep the contributions of order a^2 since we have the solution up to linear order in the amplitude. In this linearized approximation (which turns to be quite efficient), one can easily obtain the results for any incoming wave-form. § DARK SOLITON §.§ The dark soliton solution A particular solution of Eq. (<ref>) with arbitrary g_12 is a (scalar) dark soliton centered at x_0 <cit.>, embedded into the vector NLSE: ψ_1 = e^-i μ t√(μ)tanh (√(μ) (x-x_0)) , ψ_2 = 0 , with chemical potential, μ_1 = μ>0. Such a soliton can be understood as a dip in the probability density obtained from the collective wave function of atoms in the condensate. In order to examine the effective force exerted by an incoming plane wave on a dark soliton, we need to analyze the linearized equations. The relations between ω̃ (see section <ref>) and the wave numbers, k_i, is obtained from the x→±∞ asymptotic form of equation (<ref>). Knowing that Φ_1(x) →±√(μ) and Φ_1”(x) → 0 as x →±∞, the asymptotic form of the matrix from Eq. (<ref>) is 𝐌_x →±∞ = μ( 2 1 0 0 1 2 0 0 0 0 g_12 0 0 0 0 g_12) . 
The linearized equations (<ref>) with this 𝐌 can easily be diagonalized and solved. Since the non-propagating waves do not carry momentum, we shall omit them in our considerations, and then we obtain: ξ_1^-(x) = A e^i k_1 x + B e^-i k_1 x , ξ_1^+(x) = (-k_1^2/2-ω̃/μ -1 ) (A^* e^-i k_1 x + B^* e^i k_1 x) , where A and B are arbitrary constants, and k_1 = √(2)√(√(μ^2 + ω̃^2) - μ) . Note that for ω̃≠ 0 the wavenumber, k_1, is real thus Eqs. (<ref>) describe propagating waves. As mentioned earlier, we shall consider waves coming from the left and moving to the right. Assuming an incoming wave with e^i k_1 x for ω_1^+ and (to be consistent with the above solution) e^-i k_1 x for ω_1^-, the condition for moving to the right is that ω_1^± = μ∓ω̃ is positive/negative respectively (cf. Eqs. (<ref>) and (<ref>)). This implies the conditions ω̃ < μ for ω_1^+ and ω̃ < - μ for ω_1^-. Therefore, for ω̃ < - μ both waves propagate to the right. In the second component, the equations are already diagonal. Note that in this case μ_2 is arbitrary and choosing it can be interpreted as fixing a reference point for ω̃. For simplicity, let us put μ_2 = 0, then we can interpret ∓ω̃ = ω_2^± simply as the frequency of the incoming wave in the second sector. Then the solutions are ξ_2^-(x) = C e^i k_2^- x + D e^-i k_2^- x , ξ_2^+(x) = E e^i k_2^+ x + F e^-i k_2^+ x , where k_2^± = √(2)√(∓ω̃ - μ g_12) , and C, D, E, F are arbitrary constants. These waves propagate when k_2^± is real (and nonzero), that is, when ∓ω̃ > μ g_12. This means that for g_12 < 0 there exists a range of ω̃ in which both waves can propagate with the same ω̃ and for g_12≥ 0 with fixed ω̃ only one (or neither) of the waves can propagate. Using the same logic as for the first component (assuming an incoming wave with e^i k_2^± x), we get the conditions for waves moving to the right in the second component. Namely, ω̃ < 0 for ω_2^+ and ω̃ > 0 for ω_2^-. This means that only one of them moves to the right for a given ω̃. In order to find the effective mass of the soliton, we repeat the derivation of the renormalized momentum done in <cit.> and obtain the effective mass from it following <cit.>. First, we consider the total momentum, P_s, of a scalar dark soliton moving with a constant velocity v: ψ_1 = e^-i μ t(i v + √(μ - v^2)tanh (√(μ - v^2) (x-x_0-vt)) ) , ψ_2 = 0 . Note, that the moving soliton becomes shallower and shallower as the velocity increases, finally vanishing when v^2 = μ, which defines its maximal velocity. However, the above wavefunction describes a dark soliton on top of a background, and we are interested in the total momentum of the soliton. Let us note that the solution with constant probability density, corresponding to the asymptotics of (<ref>) (i.e. |ψ_1|^2 = μ and |ψ_2|^2 = 0) is of the form ψ_1 = √(μ) e^-i (μ + q^2/2) t e^i q x , ψ_2 = 0 , with some real q. Since we are interested in a non-moving background, we choose q=0, yielding the background part of the dark soliton. However, we have to take into account the phase change induced by the presence of the soliton (see <cit.>). Therefore, we assume that the background (in the first component) has the form: ψ_b = √(μ) e^-i μ t e^i k(x) x , where k is some real function of x, reflecting the phase change induced by a soliton. It will turn out that the explicit form of k(x) is not needed. Although the probability density of the background is constant (equal to μ), the total momentum contribution also depends on the phase. Inserting Eq. (<ref>) to Eq. 
(<ref>) one obtains that the contribution of the background can be written as: μΔϕ≡μ∫_-∞^∞( k(x) + x k'(x) ) dx = μ x k(x)_-∞^∞ . Comparing with Eq. (<ref>), Δϕ is readily identified with the induced phase change of the background. Therefore, the total momentum of the `pure soliton' can be expressed as (cf. <cit.>) P_s = i/2∫_-∞^∞(ψ_1 ∂_x ψ_1^* - ψ_1^* ∂_x ψ_1 ) dx - μΔϕ . The phase change, Δϕ, can be easily computed from the asymptotics of Eq. (<ref>): Δϕ = - 2 arctan( √(μ - v^2)/v) . Using this, we obtain from Eq. (<ref>) the momentum corresponding to the soliton: P_s = - 2 v √(μ - v^2) + 2 μarctan( √(μ - v^2)/v) , which allows us to compute its effective mass as (cf. <cit.>) M = . d P_s/d v|_v → 0 = -4 √(μ) . Intuitively, the negative sign is not a surprise, because a dark soliton is, as mentioned before, a dip in the collective probability density of atoms. The same mass is obtained using the renormalized energy <cit.> (see section <ref>) E_s = 1/2∫_-∞^∞( |∂_x ψ_1|^2 + (|ψ_1|^2 - μ)^2 ) dx . §.§ Wave in the second component Let us now consider the interaction of an embedded dark soliton in the 1st component and an incoming wave in the second one. In this case one can obtain the analytic solutions of the linearized equations for the waveform in the second component. Since in the case of an embedded dark soliton, (<ref>), the linearized equations (<ref>) for ξ_1 and ξ_2 are decoupled from each other, we may simply put ξ_1 = 0. Then Eqs. (<ref>) are reduced to: - 1/2∂_xxξ_2^-(x) + g_12μtanh^2(√(μ) x) ξ_2^-(x) = ω̃ξ_2^-(x) , - 1/2∂_xxξ_2^+(x) + g_12μtanh^2(√(μ) x) ξ_2^+(x) = - ω̃ξ_2^+(x) . Regular solutions of Eq. (<ref>) can be expressed in terms of associated Legendre functions of the first kind: ξ_2^±(x) = A P_λ^ik_2^±/√(μ)(tanh(√(μ) x) ) , where λ = 1/2( √(1 + 8 g_12) - 1 ) and A is a normalization factor. Since our boundary conditions correspond to a wave coming from x=-∞, we impose the following asymptotic behaviour on ξ_2: ξ_2^± (x) e^i k_2^± x + r_2^± e^-i k_2^± x , ξ_2^± (x) t_2^± e^i k_2^± x . This asymptotics can be ensured by choosing the normalization constant, A, in Eq. (<ref>) appropriately. The reflection resp. transmission coefficients are defined as R_2^± = |r_2^±|^2 resp. T_2^± = |t_2^±|^2. The reflection resp. transmission coefficients can be written as: R_2^± = 2 sin^2(πλ)/cosh(2 π k_2^±/√(μ)) - cos (2 πλ ) , T_2^± = 2 sinh^2(π k_2^±/√(μ))/cosh(2 π k_2^±/√(μ))-cos (2 πλ ) . One can check that the following relation is satisfied: R_2^± + T_2^± = 1 . Note that the reflection coefficient is zero not only for g_12 = 1, but for any value of g_12 such that λ is an integer. We now investigate the dynamics of a dark soliton embedded to the first component under the influence of an incident plane wave coming from x=-∞ embedded to the second component. We shall stick to the linearized approximation and we assume the amplitude of the wave, a, to be sufficiently small. Let us consider the setup in which only one of ξ_2^± waves propagates (therefore, we omit the non-propagating one, since it does not carry momentum), which in terms of full wavefunctions ψ_i has the following asymptotics: ψ_1(x,t) - √(μ) e^-i μ t , ψ_1(x,t) √(μ) e^-i μ t , ψ_2(x,t) a e^- i ω_2^± t (e^i k_2^± x + r_2^± e^-i k_2^± x) , ψ_2(x,t) a e^- i ω_2^± t t_2^± e^i k_2^± x . In order to approximate the acceleration of the soliton, we shall assume Newtonian motion, with the force stemming from the radiation pressure, averaged over a period of the incoming wave. The force is derived from Eq. (<ref>). 
Firstly, we substitute the above asymptotic form into this equation. Then we average it over time for the period T = 2π/ω̃ (which in this case does not change anything) and omit the terms of order higher than a^2 (since a is small). Finally, we substitute ω̃ with the appropriate dispersion relation with k_2^±, derived in the previous subsection, obtaining the force: F = ⟨∂_t P ⟩_T = a^2 (k_2^±)^2 (1 + R_2^± - T_2^±) , where P is the total momentum and ⟨·⟩_T means the average over the period. Using the relation (<ref>) the force can be simplified to F = 2 a^2 (k_2^±)^2 R_2^±, therefore in this case reflectionlessness implies no force (of the assumed order a^2). In the case when ξ_2^± both propagate (but only one of them is the initial wave!) the analogous derivation leads to the force F = a^2 ( (k_2^±)^2 + (k_2^+)^2 (R̃_2^+ - T̃_2^+ ) + (k_2^- )^2 (R̃_2^- - T̃_2^-) ) , where the incoming wave has a wavenumber k_2^±. However, after imposing the boundary conditions, R̃_2^± and T̃_2^± are equal to R_2^±, T_2^± from Eq. (<ref>) only for the incoming wave, and for the other one they are equal to zero. Therefore, the above expression ultimately reduces to Eq. (<ref>). If both are incoming waves, only one of them moves to the right for a given ω̃ (see the discussion in <ref>), and the effective force is then simply a sum of the individual forces, so this case is not interesting at this point. Finally, using the reflection coefficient given by Eq. (<ref>), the mass derived in the previous section and the above F, we can derive the explicit form of the acceleration exerted on the scalar dark soliton in the first component by a wave with frequency ω̃ in the second component. For the incoming wave with a wavenumber k_2^± it is ẍ_0 = - a^2/√(μ)(k_2^±)^2 sin^2(πλ)/cosh(2 π k_2^±/√(μ)) - cos (2 πλ) , where x_0 is interpreted as the position of the soliton. Note that although the force (<ref>) is always nonnegative, the acceleration is always nonpositive due to the negative effective mass (<ref>), therefore we observe either positive radiation pressure (i.e., a positive force) or no pressure at all. To verify the above results, we performed numerical simulations. The initial condition was a dark soliton in the first sector and a wave propagating from the left end of the interval with a given frequency and an appropriate wavenumber, where its amplitude was kept sufficiently small. However, the initial wave was multiplied by a superposition of hyperbolic tangents to `cut' it smoothly, in order to have the initial wave beginning slightly after the left boundary and ending slightly before the centre of the soliton. This deviation from a plane wave shape introduces a short `kick' exerted on the soliton, and this results in a constant velocity, on top of which we observe the acceleration compared with Eq. (<ref>). More precisely, the wave in the second component had the form ψ_2(x,t=0) = a e^i k_2^+ x Φ_cut(x) , with parameters such that a wave with k_2^+ propagates to the right, i.e., ω̃ < 0 and ω̃ < - μ g_12. Φ_cut is the `cutting' function mentioned before: Φ_cut(x) = 1/2( tanh(x - x_min - 10) - tanh(x + 10) ) , where x_min is the left boundary in space. The centre of the soliton was computed as the minimum of the probability density in the first component. Then, the acceleration was computed by fitting a quadratic function to the position of the centre and compared with Eq. (<ref>): see figures <ref>, <ref>, <ref> and <ref>.
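A small sketch of how the predicted acceleration (<ref>) can be evaluated for this setup is given below. It combines the reflection coefficient R_2^± from Eq. (<ref>), the relation R_2^± + T_2^± = 1, the force F = 2a^2(k_2^±)^2 R_2^± and the effective mass M = -4√(μ); the function name and argument conventions are ours, and the sketch assumes the propagating k_2^+ branch used in the simulations (ω̃ < 0 and ω̃ < -μ g_12).

```python
import numpy as np

def dark_soliton_acceleration(a, omega_tilde, mu, g12):
    """Predicted acceleration of the embedded dark soliton hit by a small wave
    of amplitude a and frequency omega_tilde (k_2^+ branch) in the second component."""
    lam = 0.5 * (np.sqrt(1.0 + 8.0 * g12) - 1.0)
    k2p = np.sqrt(2.0) * np.sqrt(-omega_tilde - mu * g12)    # real for omega_tilde < -mu*g12
    denom = np.cosh(2.0 * np.pi * k2p / np.sqrt(mu)) - np.cos(2.0 * np.pi * lam)
    R = 2.0 * np.sin(np.pi * lam)**2 / denom                 # reflection coefficient
    F = 2.0 * a**2 * k2p**2 * R                              # radiation-pressure force
    M = -4.0 * np.sqrt(mu)                                   # negative effective mass
    return F / M                                             # nonpositive acceleration
```

This is precisely the quantity compared below with the acceleration extracted from the full PDE simulations.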
It turned out that our effective linearized model explains the observed accelerations quite well for a wide range of parameters. §.§ Dark soliton and a wave in the same component Since the linearised eqs. (<ref>) for ξ_1 and ξ_2, in the case of the dark soliton, are independent, we start with a similar ansatz as before, namely ξ_2 = 0. Now we examine the case where the wave with small amplitude a comes from -∞ in the first component, which (taking into account the solution (<ref>)) has the following asymptotics: ψ_1(x,t) a N ( -k_1^2/2-ω̃/μ-1 ) e^- i ω_1^+ t(e^i k_1 x + r_1 e^-i k_1 x) + a N e^- i ω_1^- t(e^-i k_1 x + r_1^* e^i k_1 x) - √(μ) e^-i μ t , ψ_1(x,t) a N ( -k_1^2/2-ω̃/μ-1 ) e^- i ω_1^+ t t_1 e^i k_1 x + a N e^- i ω_1^- t t_1^* e^-i k_1 x + √(μ) e^-i μ t , ψ_2(x,t) 0 , where N = √(2)μ/√((k_1^2+2 μ) (k_1 (√(k_1^2+4 μ)+k_1)+2 μ)) . The reflection and transmission coefficients are defined as R_1 = |r_1 |^2 and T_1 = |t_1 |^2 respectively. The coefficient N was chosen in such a way as to have the simplest form of the force. From the above asymptotics, the effective force acting on a soliton can be derived analogously to the previous case, obtaining F = a^2 k_1^2 (1 + R_1 - T_1 ) . Even though ξ_2 = 0, the linearized equations (<ref>) are more complicated here than in the previous section, therefore we resort to solving them numerically to obtain the reflection and transmission coefficients. The infinities were approximated by a sufficiently large L, so that x ∈ [-L, L] (in general the grid is different from the one used in the full PDE simulations of the CNLSE). We changed the basis from (ξ_1^-, ξ_1^+^*) to the solutions for which the asymptotic form of the equations (<ref>) is diagonal. Let us denote the solutions in the new basis as (ξ̃_1^-, (ξ̃_1^+)^*). Then, the boundary conditions were imposed in the following way: . ( ξ̃_1^-' - i k_1 ξ̃_1^- ) |_x = -L = - 2 i k_1 e^i k_1 L , to have e^-i k_1 x at x = - L and . ( ξ̃_1^-' + i k_1 ξ̃_1^- ) |_x = L = 0 , to ensure that there is no e^i k_1 x at x = L. This corresponds to the incoming wave with e^i k_1 x for ω_1^+ moving from left to right. (ξ̃_1^+)^* describes the nonpropagating waves and we imposed analogous boundary conditions on it: . ( ((ξ̃_1^+)^*)^' - i k_im (ξ̃_1^+)^* ) |_x = -L = . ( ((ξ̃_1^+)^*)^' + i k_im (ξ̃_1^+)^* ) |_x = L = 0 , with imaginary k_im = ±√(2) i √(√(μ^2 + ω̃^2) + μ) instead of k_1. Then, R_1 and T_1 were computed from the numerical solutions, using the equations R_1 = | ξ̃_1^-(-L) - e^-i k_1 L|^2 , T_1 = | ξ̃_1^-(L) |^2 . The acceleration, computed from the force (<ref>) with numerically obtained R_1, T_1 and the mass derived in the previous section, turned out to be close to zero. The above result can be compared with the full PDE simulations of the CNLSE. Before we do that, let us discuss a particular difficulty present here. In the case of a scalar dark soliton with a wave in the second component, determining the center of the soliton from numerical data is relatively easy, since the wave and the soliton are in completely different components. Then, the center is simply given by a minimum of the probability density in the soliton component. In general, this is not the case, because in many scenarios the waves propagate across different components. Therefore, developing a strategy for extracting the position of a soliton from numerical data will be important not only for the waves initially in the first component, but for most other setups as well.
When a wave is present in the same component as the soliton, one can observe oscillations of the minimum (in the case of a bright soliton: maximum) of the probability density. These oscillations are due to the effect of the incident wave on the soliton, cf. Figs. <ref> and <ref>. This interesting effect is the analogue of the effect described by Quist <cit.> for vortices. In the present case it complicates the extraction of the position of the soliton from numerical data. To improve the determination of the soliton positions, we used filtering of high frequencies from |ψ(x, t)|^2 at each instant, t, before computing the minimum. Since the amplitude of these oscillations becomes smaller compared to the observed trajectories during the time evolution, it pays off to make the time evolution as long as feasible. The initial condition in the numerical simulations was ψ_2(x,t=0) = 0 and ψ_1(x,t=0) = Φ(x) + a N ( ( -k_1^2/2-ω̃/μ-1 ) e^i k_1 x + e^-i k_1 x) Φ_cut(x) , where Φ is the dark soliton for t=0, Φ_cut is the same `cutting' function as in (<ref>) and N is the normalization given by (<ref>). We used ω̃ < - μ in order to have both e^± i k_1 x terms starting a wave moving to the right. The acceleration computed from Eq. (<ref>) with numerically obtained reflection and transmission coefficients, divided by the appropriate effective mass, has values close to zero. It was compared with the acceleration from the full PDE simulations (figures <ref> and <ref>), which is also small. This seems to indicate that the force is indeed approximately zero. § DARK-BRIGHT SOLITONS §.§ DB solutions The CNLSE equation (<ref>) with g_12 = 1 possesses a particular solution <cit.> ψ_1 = e^-i μ t√(μ)tanh (κ (x-x_0)) , ψ_2 = e^-i (μ - κ^2/2)t√(μ - κ^2)sech(κ (x-x_0)) , which is an example of a dark-bright soliton with μ_1 = μ , μ_2 = μ - κ^2/2 . Obviously, 0 < κ^2 < μ. Assuming ψ_i(x,t) = e^- i μ_i tΦ_i(x) we can find DB solitons for other values of the parameter g_12. Their profiles Φ_i are presented in fig. <ref>. They can be intuitively understood as a dip in the probability density of atoms of one kind (species or spin state) and a relatively small peak in the probability density of atoms of the other kind. Let us now consider scattering on such solitons. It can be easily deduced from Eq. (<ref>) that for x →±∞ the DB soliton profiles satisfy Φ_1(x) →±√(μ), Φ_2(x) → 0 and Φ_i”(x) → 0 regardless of the value of g_12. Therefore, the matrix 𝐌 in Eq. (<ref>) becomes asymptotically 𝐌_x →±∞ = μ( 2 1 0 0 1 2 0 0 0 0 g_12 0 0 0 0 g_12) . Note that this asymptotic form is exactly the same as in the case of the scalar dark soliton; therefore, the solutions are the same as in section <ref>, in particular the wavenumber k_1 = √(2)√(√(μ^2 + ω̃^2) - μ) , and again, the waves in the first component propagate for any ω̃≠ 0. The only difference is that μ_2 is no longer arbitrary, therefore k_2^± = √(2)√(∓ω̃ - μ g_12 + μ - κ^2/2) . Thus, waves in the second component with wavenumber k_2^± propagate when ∓ω̃ > μ g_12 - μ + κ^2/2. This means that for κ^2 < 2μ (1 - g_12) there exists a range of ω̃ in which both waves can propagate with the same ω̃; however, we were unable to find stable solitons in this range. For κ^2 ≥ 2μ (1 - g_12) with fixed ω̃ only one (or neither) of the waves can propagate. Note that this excludes the possibility of propagating both types of waves in the second component for g_12≥ 1. Conditions for moving to the right in the first component are the same as for the scalar dark case.
In the second component, they are ω̃ < μ - κ^2/2 for ω_2^+ and ω̃ > - μ + κ^2/2 for ω_2^-. Therefore, for ω̃∈ (- μ + κ^2/2, μ - κ^2/2) both waves propagate to the right (it is always true because μ - κ^2/2 is positive). The moving dark-bright soliton in the Manakov (g_12 = 1) case with velocity v is <cit.>: ψ_1 = e^-i μ t√(μ) ( cosαtanh (κ̃ (x - x_0 - v t)) + i sinα) , ψ_2 = e^-i (μ - κ̃^2 (1 - tan^2 α)/2)t e^i v x√((μ - κ^2) κ̃/κ) sech(κ̃ (x - x_0 - v t)) , where κ̃ = κ^2 - μ + √(2κ^2 μcos (2α) + κ^4 + μ^2)/2κ , v = κ̃tanα . We compute the renormalized total energy (analogously to the renormalized energy of a dark soliton): E_s = 1/2∫_-∞^∞( |∂_x ψ_1|^2 + |∂_x ψ_2|^2 + (|ψ_1|^2 - μ)^2 + |ψ_2|^4 + 2 |ψ_1|^2 |ψ_2|^2 ) dx , and from that we get the effective mass M = . d^2 E_s/ dv^2|_v=0 = . ( dv/ dα)^-2( d^2 E_s/ dα^2 - d^2 v/ dα^2 dv/ dα d E_s/ dα) |_α=0 = - 2 (κ^2 + μ)/κ . For κ→√(μ) we reproduce the result for the scalar dark soliton M = -4√(μ). Using the renormalized total momentum: P_s = i/2∫_-∞^∞(ψ_1 ∂_x ψ_1^* - ψ_1^* ∂_x ψ_1 + ψ_2 ∂_x ψ_2^* - ψ_2^* ∂_x ψ_2 ) dx - μΔϕ , where Δϕ = 2 α - π is the phase change between -∞ and +∞ in the dark (first) component, we obtain the same effective mass M = . d P_s/ dv|_v=0 = . ( dv/ dα)^-1 d P_s/ dα|_α=0 = - 2 (κ^2 + μ)/κ . This indicates that the motion of the dark-bright solitons in the Manakov case is indeed Newtonian and that we used the correct renormalization. In the non-Manakov (g_12≠ 1) case we `push' the soliton using the short and localized external impulse: V_1(x) = V_0 t(T - t) θ (t) θ (T-t) tanh(x)/cosh(x) , V_2(x) = 0 , obtain the velocity and moving soliton profile after a sufficiently long time, from which we calculate E_s and compute M = . d^2 E_s/dv^2|_v=0 and M = . d P_s/dv|_v=0 by fitting a quadratic function to E_s(v) and a linear function to P_s(v), respectively (figure <ref>). Henceforth, we shall use the mass obtained from the momentum, because it is more accurate. §.§ Wave in the dark component Let us consider a DB soliton with a wave in the first component with a wavenumber k_1 and a small amplitude a coming from the left (provided adequate conditions, discussed in the previous section, are met). The asymptotics are then: ψ_1(x,t) a N ( -k_1^2/2-ω̃/μ-1 ) e^- i ω_1^+ t(e^i k_1 x + r_1 e^-i k_1 x) + a N e^- i ω_1^- t(e^-i k_1 x + r_1^* e^i k_1 x) - √(μ) e^-i μ t, ψ_1(x,t) a N ( -k_1^2/2-ω̃/μ-1 ) e^- i ω_1^+ t t_1 e^i k_1 x + a N e^- i ω_i^- t t_1^* e^-i k_1 x + √(μ) e^-i μ t, ψ_2(x,t) a e^- i ω_2^+ t r_2^+ e^-i k_2^+ x + a e^- i ω_2^- t r_2^- e^-i k_2^- x, ψ_2(x,t) a e^- i ω_2^+t t_2^+ e^i k_2^+ x + a e^- i ω_2^- t t_2^- e^i k_2^- x, where N is the same as in Eq. (<ref>). Using an analogous approach as before, we derive that the force exerted on the soliton by such a wave is F = a^2 ( k_1^2 (1 + R_1 - T_1 ) + (k_2^+ )^2 (R_2^+- T_2^+ ) + (k_2^- )^2 (R_2^- - T_2^- ) ), where R_1 = |r_1 |^2, T_1 = |t_1 |^2, R_2^± = |r_2^±|^2 and T_2^± = |t_2^±|^2. The setup (<ref>) corresponds to the eigenwave (<ref>) with A = 0, let us call it the first eigenwave. The other eigenwave (B=0), i.e. (<ref>) with k_1 → - k_1 gives the same expression for the effective force, but with different values of reflection and transmission coefficients. Let us focus on the first eigenwave (note that the conditions for propagation to the right are derived above for the first eigenwave). 
Using a similar approach as for the scalar soliton, we can compute values of these coefficients and (using the effective mass computed above) compare the resulting acceleration with full PDE simulations (with initial field configurations constructed analogously as for the scalar dark soliton). It turns out that we always get the negative radiation pressure, described well by our linear model for relatively small amplitudes and frequencies (figures <ref>, <ref> and <ref>). The nonlinear behaviour for larger amplitudes is expected, since linear approach relies on the fact that the amplitude is small. However, the discrepancy for larger frequencies is surprising and requires further study. §.§ Wave in the bright component If we consider a DB soliton with a wave in the second component with a wavenumber k_2^± (and a small amplitude a) coming from the left, the asymptotics are ψ_1(x,t) a N ( -k_1^2/2-ω̃/μ-1 ) e^- i ω_1^+ t r_1 e^-i k_1 x + a N e^- i ω_1^- t r_1^* e^i k_1 x - √(μ) e^-i μ t , ψ_1(x,t) a N ( -k_1^2/2-ω̃/μ-1 ) e^- i ω_1^+ t t_1 e^i k_1 x + a N e^- i ω_i^- t t_1^* e^-i k_1 x + √(μ) e^-i μ t , ψ_2(x,t) a e^- i ω_2^± t e^i k_2^± x + a e^- i ω_2^+ t r_2^+ e^-i k_2^+ x + a e^- i ω_2^- t r_2^- e^-i k_2^- x , ψ_2(x,t) a e^- i ω_2^+t t_2^+ e^i k_2^+ x + a e^- i ω_2^- t t_2^- e^i k_2^- x , with N the same as in Eq. (<ref>). Similarly as before, we can derive the force exerted on the soliton: F = a^2 ( k_1^2 (R_1 - T_1 ) + (k_2^+ )^2 (R_2^+- T_2^+ ) + (k_2^- )^2 (R_2^- - T_2^- ) + (k_2^±)^2 ) , where R_1 = |r_1 |^2, T_1 = |t_1 |^2, R_2^± = |r_2^±|^2 and T_2^± = |t_2^±|^2. Again, we compute the values of the reflection and transmission coefficients numerically and compare the resulting acceleration with the full PDE simulations. In this case we observe the positive radiation pressure for all the values of parameters considered, and everything is described well by the linearized model provided the amplitude is small (figures <ref>, <ref> and <ref>). § CONCLUSIONS Using the Newtonian approximation and staying in the linear regime, we have successfully described the acceleration of the scatterer due to the action of the radiation pressure of a wave scattering on a dark and dark-bright soliton. This simple model agrees with numerical simulations for a wide range of parameters. We have shown that a collision of a scalar dark soliton with a wave in the second component of the condensate always results in a positive radiation pressure. For dark-bright solitons, however, we found that the radiation pressure is negative if the wave is incoming from the dark component, and positive otherwise. The mechanism responsible for NRP in this model relies on the fact that the soliton is present in both components. Otherwise, the equations separate, and from the conservation of energy it follows that the reflection and transmission coefficients sum to one. This implies that the force is always nonnegative, as we have seen explicitly for the scalar soliton case. If the soliton is a vector soliton, then the constraints from the energy conservation allow for both positive and negative sign of the force. Furthermore, the dispersion relations (i.e. the wavenumbers k_1 and k_2^±) play an important role in determining this sign (see table <ref>). The discrepancies between our effective linear model and the full PDE simulations are completely expected for the larger amplitudes of the incoming wave, since the linearization relies on it being small. 
However, the disagreement for large frequencies of the wave incoming from the dark component and hitting the dark-bright soliton is currently not well understood within the scope of this paper. This could be an opportunity for further research, especially combined with a detailed study of the nonlinear effects, which can play a role here. Another interesting possibility would be to investigate NRP on other solitons in a two-component BEC, such as bright-bright and dark-dark solitons. To the best of the authors' knowledge, the described setups can be, in principle, reproduced experimentally. Hopefully, in the future, this article could help to promote NRP from being a purely theoretical concept to an observable physical phenomenon. TR acknowledges the support of the National Science Centre, grant number 2019/35/B/ST2/00059. DC and TR acknowledge the support of the Priority Research Area under the program Excellence Initiative – Research University at the Jagiellonian University in Kraków. The authors would like to express their gratitude to Adam Wojciechowski and Krzysztof Sacha for useful discussions, especially on the experimental applications of this research. § DERIVATION OF THE TOTAL ENERGY AND MOMENTUM Using the Lagrangian density (<ref>), one can derive the energy-momentum tensor T^μ_ν = ∑_i=1,2( ∂ℒ/∂ (∂_μψ_i)∂_νψ_i + ∂ℒ/∂ (∂_μψ_i^*)∂_νψ_i^* ) - ℒδ_ν^μ , where μ, ν = 0, 1, ∂_0 = ∂_t and ∂_1 = ∂_x. Since we consider the CNLSE without the trapping potential, the Lagrangian density (<ref>) is invariant under translations, and then it follows from the Noether theorem that such a tensor is a conserved current, meaning that it obeys ∑_μ=0,1∂_μT^μ_ν = 0 . The total energy and momentum are defined, respectively, as E = ∫_-∞^∞T^0_0 dx, P = -∫_-∞^∞T^0_1 dx. Computing the energy-momentum tensor and integrating, we obtain their explicit forms (<ref>) and (<ref>). Then, equations (<ref>) and (<ref>) follow from (<ref>). § NUMERICAL METHODS In all of the full PDE simulations of soliton dynamics we have used the second-order split-step method <cit.>. This method requires periodic boundary conditions; therefore, at large x the dark soliton (and the dark part of the dark-bright soliton) was `glued' with the antisoliton to achieve ψ_1 = -√(μ) at the right boundary. The spatial step was Δ x = 0.1, while the temporal step was in the range from Δ t = 0.0001 to Δ t = 0.0003, depending on the particular simulated configuration. We used x ∈ [-500, 1000] or x ∈ [-1000, 2000]. The linearized equations were solved using sparse matrices. The derivatives were discretized using the five-point stencil. The grid was x ∈ [-20, 20] with the step Δ x = 0.01.
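For the reader's convenience, a minimal sketch of one such second-order split step for the dimensionless CNLSE is shown below: Strang splitting with a half nonlinear step, a full kinetic step applied in Fourier space (periodic boundary conditions), and another half nonlinear step. The function name and interface are ours and the production code may differ in details.

```python
import numpy as np

def split_step(psi1, psi2, dt, dx, g12):
    """One second-order (Strang) split step for the dimensionless CNLSE,
    assuming periodic boundary conditions on a uniform grid."""
    k = 2.0 * np.pi * np.fft.fftfreq(psi1.size, d=dx)

    def half_nonlinear(p1, p2, tau):
        # the nonlinear substep is exact because |psi_i| is conserved by it
        f1 = np.exp(-1j * tau * (np.abs(p1)**2 + g12 * np.abs(p2)**2))
        f2 = np.exp(-1j * tau * (np.abs(p2)**2 + g12 * np.abs(p1)**2))
        return p1 * f1, p2 * f2

    psi1, psi2 = half_nonlinear(psi1, psi2, dt / 2)
    kin = np.exp(-1j * 0.5 * k**2 * dt)                     # exact kinetic propagator
    psi1 = np.fft.ifft(kin * np.fft.fft(psi1))
    psi2 = np.fft.ifft(kin * np.fft.fft(psi2))
    psi1, psi2 = half_nonlinear(psi1, psi2, dt / 2)
    return psi1, psi2
```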
http://arxiv.org/abs/2307.02198v2
20230705105040
ChiENN: Embracing Molecular Chirality with Graph Neural Networks
[ "Piotr Gaiński", "Michał Koziarski", "Jacek Tabor", "Marek Śmieja" ]
cs.LG
[ "cs.LG", "q-bio.QM" ]
ChiENN: Embracing Molecular Chirality with Graph Neural Networks Faculty of Mathematics and Computer Science, Jagiellonian University, Kraków, Poland Mila - Quebec AI Institute, Montreal, Quebec, Canada Université de Montréal, Montreal, Quebec, Canada [email protected] Piotr Gaiński1 Michał Koziarski2, 3 Jacek Tabor1 Marek Śmieja1 August 1, 2023 ==================================================================== Graph Neural Networks (GNNs) play a fundamental role in many deep learning problems, in particular in cheminformatics. However, typical GNNs cannot capture the concept of chirality, which means they do not distinguish between the 3D graph of a chemical compound and its mirror image (enantiomer). The ability to distinguish between enantiomers is important especially in drug discovery, because enantiomers can have very distinct biochemical properties. In this paper, we propose a theoretically justified message-passing scheme, which makes GNNs sensitive to the order of node neighbors. We apply that general concept in the context of molecular chirality to construct the Chiral Edge Neural Network (ChiENN) layer, which can be appended to any GNN model to enable chirality-awareness. Our experiments show that adding ChiENN layers to a GNN outperforms current state-of-the-art methods in chiral-sensitive molecular property prediction tasks. § INTRODUCTION Recent advances in Graph Neural Networks (GNNs) have revolutionized cheminformatics and enabled learning the molecular representation directly from chemical structures <cit.>. GNNs are widely used in molecular property prediction <cit.>, synthesis prediction <cit.>, molecule generation <cit.>, or conformer generation <cit.>. Surprisingly, typical GNNs cannot capture the concept of chirality, roughly meaning they do not distinguish between a molecule and its mirror image, called an enantiomer (see <ref>). Although enantiomers share many physical, chemical, and biological properties, they may behave remarkably differently when interacting with other chiral molecules, e.g. chiral proteins. For this reason, capturing chirality is critical in the context of drug design <cit.> and should not be ignored in the design of GNN architectures. A chiral molecule is a molecule with at least one chiral center, which is usually a carbon atom with four non-equivalent constituents. The mirror image of a chiral molecule, called an enantiomer, cannot be superposed back onto the original molecule by any combination of rotations, translations, and conformational changes (see <ref>). Therefore, enantiomers are molecules with different bond arrangements and the same graph connectivity. There are many examples of chiral drugs used in pharmacy whose enantiomers cause substantially different effects <cit.>. For instance, (S)-penicillamine is an antiarthritic drug while its enantiomer (R)-penicillamine is extremely toxic <cit.>. Actually, chirality can be a characteristic of any class of graphs embedded in Euclidean space (where we have an intuitive notion of reflection). For instance, <ref> shows two 2D road maps that are mirror images of each other and possess different properties. [Figure: An illustration of a road map (left) and its mirror image (right).]
We see that the maps share the same connectivity between cities, however, to get from city A to city B one has to take the second exit on a roundabout D for the left map, and the first exit for the right map. In this paper, we propose and theoretically justify a novel order-sensitive message-passing scheme, which makes GNNs sensitive to chirality. In contrast to existing methods of embracing chirality, our framework is not domain specific and does not rely on arbitrary chiral tagging or torsion angles (see <ref>). The only inductive bias our method introduces to a GNN is the dependency on the orientation of the neighbors around a node, which lies at the core of chirality. The key component of the proposed framework is the message aggregation function. In a typical GNN, the messages incoming to a node from its neighbors are treated as a set and aggregated with a permutation-invariant function (sum, max, etc.). It makes the model unable to distinguish between chiral graphs with the same connectivity, but with different spatial arrangements. We re-invent this approach and introduce a message aggregation function that is sensitive to the spatial arrangement (order) of the neighbors. Our approach can be used in any chiral-sensitive graph domain where chirality can be expressed by an order of the neighboring nodes. We apply that general order-sensitive message-passing framework in the context of molecular chirality to construct Chiral Edge Neural Network (ChiENN) layer. The ChiENN layer can be appended to most molecular GNN models to enable chirality sensitiveness. Our experiments show that ChiENN can be successfully used within existing GNN models and as a standalone model consisting of stacked ChiENN layers. In both cases, ChiENN outperforms current state-of-the-art methods in chiral-sensitive molecular property prediction tasks by a large margin. We make our code publicly available[<https://github.com/gmum/ChiENN>]. Our contributions are as follows: * We propose and theoretically justify a general order-sensitive message-passing scheme. Our method can be adapted to any chiral-sensitive graph domain where chirality can be expressed by an order of the neighboring nodes (<ref>). * We use the proposed framework to construct a novel ChiENN layer that enables chirality awareness in any GNN model in the domain of molecular graphs (<ref>). The proposed ChiENN can be applied to any 3D graph task with the notion of chirality. * We evaluate and analyze the ChiENN layer and show that it outperforms current state-of-the-art methods in chiral-sensitive molecular property prediction tasks (<ref>). § RELATED WORK §.§.§ Explicit Tagging of Chiral Center. The most common approach for incorporating chirality into GNN is to use local or global chiral tags <cit.>. Both local and global tagging can be seen in the following way. Every carbon atom with four non-equivalent constituents, called a chiral center, is given a tag (CCW or CW) describing the orientation of its constituents. The orientation is defined using the enumeration of constituents computed by an arbitrary algorithm. The constituent with the highest number (4) is positioned so that it points away from the observer. The curve passing through the constituents with numbers 1, 2, and 3 respectively determines a clockwise (CW) or counterclockwise (CCW) orientation of the chiral center. 
Although enumeration algorithms for global and local tagging differ (the latter is not explicitly used in practice), the expressivity of both methods is limited, as we show in <ref>. §.§.§ 3D GNNs with torsion angles. Some recent GNN models enrich graphs with 3D information, like distances between atoms <cit.>, angles between bonds <cit.>, and torsion angles between two bonds joined by another bond <cit.>. As distances and angles are invariant to chirality, the torsion angles (that are negated upon reflection) are required for 3D GNN to express the chirality. However, even access to a complete set of torsion angles does not guarantee expressivity in chiral-sensitive tasks as shown in <cit.>. Torsion angles are sensitive to bond rotations and can also be negated by the reflection of a non-chiral molecule. In <cit.>, the authors proposed the ChIRo model that instead of embedding single torsion angles, embeds sets of torsion angles with a common bond. ChIRo is the current state-of-the-art method for chiral-sensitive tasks. In contrast to ChIRo, our proposed method does not incorporate distances, angles, or torsion angles. It only relies on the orientation of neighbors around a node, making it more general and easily adaptable to other chiral graph domains. Moreover, our experiments show that the ChiENN layer outperforms ChIRo by a large margin on chiral-sensitive molecular tasks (see <ref>). §.§.§ Changing Aggregation Scheme. The method most related to our approach is the Tetra-DMPNN model from <cit.> which replaces a classic message-passing scheme with a chiral-sensitive one. The proposed aggregation scheme is guided by local chiral tags, meaning that it relies on some arbitrary rules for enumerating neighbors and cannot be applied to nodes other than chiral centers. Moreover, the Tetra-DMPNN method is computationally expensive and does not scale with the number of possible neighbors of a chiral center, making the model useful only in the context of chemistry. Our approach provides a general, efficient, and scalable chiral-sensitive message passing and outperforms the Tetra-DMPNN on chiral-sensitive molecular tasks by a large margin (see <ref>). § ORDER-SENSITIVE MESSAGE-PASSING SCHEME §.§.§ Setting. Let us consider a directed graph G=(X, E) in which every node x_i ∈ X is represented by a N-dimensional encoding (x_i ∈^N). Edge e_ij connects nodes x_i and x_j and is represented by M-dimensional encoding (e_ij∈^M). In addition, we assume that for every node x ∈ X, we are given an order o of all its neighbors o=(x_0,x_1,…,x_d-1). The order of neighbors forms a sequence, which stands in contrast to typical graphs, where neighbors are treated as an unordered set. Given a permutation π on {0,1,…,d-1}, we assume that two orders o_1=(x_0,…,x_d-1) and o_2=(x_π(0),…,x_π(d-1)) are equivalent if and only if π is a shift i.e. π(i) = (i+k) d, for a fixed k ∈. In other words, the neighbors form the sequence on a ring. One of the most common mechanisms in GNN is message-passing, which updates the representation of a node x by the information coming from its neighbors (x_0,…,x_d-1), which can be written as: x'=f(x;x_0,…,x_d-1). In this paper, we are going to describe the general message-passing scheme, which is aware of the neighbors' order. Before that, we discuss possible choices of the aggregation function f. §.§.§ Vanilla message-passing as a permutation-invariant transformation. Let us first discuss a basic case, where f is a permutation-invariant function, i.e. 
f(x;x_0,…,x_d-1)=f(x;x_π(0),…,x_π(d-1)), for every permutation π of {0,1,…,d-1}. This aggregation ignores the order of neighbors and lies at the heart of typical GNNs. Let us recall that f is permutation-invariant with respect to {x_0,x_1,…,x_d-1} if and only if it can be decomposed in the form <cit.>: f(x_0,x_1,…,x_d-1) = ρ(∑_i=0^d-1ϕ(x_i)), for suitable transformations ϕ and ρ. In the context of graphs, a general form of a permutation-invariant aggregation of neighbors {x_0,x_1,…,x_d-1} of x is: x'=f(x;x_0,…,x_d-1)=ρ(x;∑_i=0^d-1ϕ(x;x_i)), for suitable transformations ϕ and ρ. By specifying ρ,ϕ as neural networks, we get the basic formula of vanilla message-passing. §.§.§ Shift-invariant aggregation. Vanilla message-passing relies on permutation-invariant aggregation and does not take into account the neighbors' order. Thus we are going to discuss the weaker requirement on the aggregation function f and assume that f is shift-invariant, i.e. f(x;x_0,…,x_d-1)=f(x;x_0+p,…,x_d-1+p), for any shift by a number p ∈ {0,…,d-1}, where the additions on indices are performed modulo d. This assumption is consistent with our initial requirement that shifted orders are equivalent. The following theorem gives a general formula for shift-invariant mappings. The function f is shift-invariant if and only if f can be written as: f(x_0,…,x_d-1) = ∑_p=0^d-1 g(x_0+p,…,x_d-1+p) for an arbitrary function g. If f is shift-invariant, then f(x_0,…,x_d-1)=f(x_0+p,…,x_d-1+p) for every p, and consequently f(x_0,…,x_d-1) = ∑_p=0^d-11/df(x_0+p,…,x_d-1+p). On the other hand, if the function f can be written as ∑_p=0^d-1 g(x_0+p,…,x_d-1+p), then it is shift-invariant for an arbitrary function g. Following the above theorem, we get a general formula for shift-invariant aggregation applicable to graphs: x'= ρ(x;∑_p=0^d-1ψ(x;x_0+p,…,x_d-1+p)), for suitable ρ and ψ, where all additions are performed modulo d. Now, we want to ensure that our function f is not only shift-invariant but also order-sensitive. §.§.§ Order-sensitive message-passing. Let us assume that we are in the class of shift-invariant transformations. We are going to specify the formula (<ref>) to obtain an aggregation which is sensitive to any permutation other than a shift. More precisely, we say that f is order-sensitive if and only if for every permutation π, we have: f(x;x_0,…,x_d-1) = f(x;x_π(0),…,x_π(d-1)) ⟺ π(i) = (i + k) mod d. Let us investigate typical functions ψ in formula (<ref>), which can be implemented using neural networks. We start with the simplest case, where ψ is linear. Then ∑_p=0^d-1ψ(x_0+p,…,x_d-1+p) = ∑_p=0^d-1∑_i=0^d-1 w_i x_i+p = ∑_i=0^d-1w_i∑_p=0^d-1 x_i+p = ∑_i=0^d-1w_i∑_j=0^d-1 x_j, which does not depend on the order of the neighbors (x_0, ..., x_d-1). To construct more complex functions, we use an arbitrary Multi-Layer Perceptron (MLP) as ψ. Since MLPs are universal approximators (for a sufficiently large number of hidden units), we can find parameters θ such that ∑_p=0^d-1ψ_θ(x_π(0)+p,…,x_π(d-1)+p) returns a different value for every permutation π that is not a shift. Therefore our aggregation scheme with ψ given by an MLP can learn an order-sensitive mapping. Following the above observations, we implement our order-sensitive message-passing using an MLP as ψ. To match our construction to various numbers of neighbors in a graph, we restrict ψ to be k-ary (denoted as ψ^k) for some fixed k > 1 and overload it so that: ψ^k(x_0,…,x_d-1)= ψ^k(x_0,…,x_d-1,0,…,0), with k-d zeros appended, for d < k.
Given that, we implement the <ref> with the following neural network layer: x' = Wx + ∑_p=0^d-1ψ^k(x_0+p,...,x_k-1+p), ψ^k(x_0+p,...,x_k-1+p) = W_1σ (W_2(x_0+p | ... | x_k-1+p)). Our k-ary message function ψ^k is composed of concatenation operator | and two-layer MLP with ELU as σ. Intuitively, the output of ψ^k(x_0+p, ..., x_k-1+p) can be seen as a message obtained jointly from k consecutive neighbors starting from a neighbor p in order (x_0, ..., x_d-1) which is illustrated in <ref>. § CHIENN: CHIRAL-AWARE NEURAL NETWORK In this section, we apply the order-sensitive message-passing framework to molecular graphs. We show that order-sensitive aggregation is a key factor for embracing molecular chirality. Roughly speaking, in contrast to vanilla message-passing, the proposed (Chiral-aware Edge Neural Network) is able to distinguish enantiomers, where one molecule is a mirror image of the second. Although we evaluate the ChiENN model in the context of molecular property prediction, the proposed model can be applied to any 3D graph task with the notion of chirality. To construct based on our order-sensitive message-passing scheme from <ref>, we need to define a notion of neighbors' order in molecular graphs that grasps the concept of chirality (see <ref>). We introduce this notion of order for edge (dual) molecular graphs and provide a simple transformation from standard molecular graphs to edge molecular graphs. Therefore the rest of the section is organized into three subsections: * Edge Graph describing the transformation from a molecular graph to its edge (dual) form used in our ChiENN model, * Neighbors Order defining the order of the neighbors in an edge graph, * Chiral-Aware Update constructing order-sensitive update rule using our order-sensitive framework from the <ref>. §.§ Edge Graph Let us suppose, we have a directed graph G=(X, C, E) that represents a concrete conformation (3D embedding) of a molecule. The node encoding x_i ∈ X corresponds to an i-th atom from a molecule, c_i ∈ C ⊆^3 are its coordinates in 3D space, and the edge encoding e_ij∈ E represents a bond between i-th and j-th atoms. To make the definition of neighbor order straightforward, our ChiENN model operates on an edge (dual) graph G'=(X', C', E') which swaps nodes with edges from the original graph G. It means that the node x_ij∈ X' represents the edge e_ij∈ E, while the edge e_ij, jk∈ E' represents the node x_j that connects edge e_ij∈ E with e_jk∈ E. Similarly, c'_ij∈^3 ×^3 is now a 3D coordinate vector that links positions c_i and c_j. Formally, we have: X' = {x_ij=e_ij : e_ij∈ E }, C' = {c_ij=c_i | c_j : c_i, c_j ∈ C, e_ij∈ E }, E' ={e_ij, jk=e_ij | x_j | e_jk: e_ij, e_jk∈ E, x_j ∈ X }, where | stands for a concatenation operator. Clearly, the constructed edge graph G'=(X', C', E') can be fed to any GNN that can take as an input the original graph G=(X, C, E). §.§ Neighbors Order In an edge molecular graph G=(X, C, E), a node x_jk∈ E represents a directed bond from atom j to atom k in the original molecule. It is assigned with a 3D vector c_jk∈ C ⊆^3 ×^3 spanned from atom j to atom k. Therefore, we will sometimes refer to nodes as if they were 3D vectors. Let us consider the node x_jk and the set of its incoming neighbors: N(x_jk)={x_i_1j, x_i_2j, ..., x_i_dj}. By construction of G, every node x_jk has a corresponding parallel node x_kj. For simplicity, we will treat this parallel node separately and exclude it from the set of neighbors, i.e. x_kj∉ N(x_jk). 
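A minimal sketch of this edge-graph construction, together with the collection of the incoming neighbors N(x_jk) with the parallel node excluded, might look as follows. The function name is ours, atom-feature concatenation is used as a simple stand-in for the bond encodings e_ij, and the sketch only illustrates the definitions above rather than the released implementation.

```python
import numpy as np

def to_edge_graph(x, coords, bonds):
    """Edge (dual) graph: one node per directed bond (i, j) with coordinates (c_i | c_j);
    the incoming neighbours of node (j, k) are all nodes (i, j) with i != k,
    i.e. the parallel node (k, j) is treated separately (cf. the text above).
    x: [n, d] atom features, coords: [n, 3] positions, bonds: undirected atom pairs."""
    directed = [(i, j) for i, j in bonds] + [(j, i) for i, j in bonds]
    idx = {e: t for t, e in enumerate(directed)}
    x_prime = np.array([np.concatenate([x[i], x[j]]) for i, j in directed])
    c_prime = np.array([np.concatenate([coords[i], coords[j]]) for i, j in directed])
    incoming = {t: [] for t in range(len(directed))}
    for (j, k) in directed:
        for (i, j2) in directed:
            if j2 == j and i != k:           # dual-graph edge e_ij -> e_jk
                incoming[idx[(j, k)]].append(idx[(i, j2)])
    return x_prime, c_prime, incoming
```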
The construction of the neighbors N(x_jk) order is illustrated in <ref> and consists of two steps: * Transformation: first, we perform a sequence of 3D transformations on x_jk and N(x_jk) to make x_jk anchored to coordinate origin, perpendicular to yz plane and pointed away from the observer (see <ref> b)). * Sorting: second, we project the transformed neighbors N(x_jk) to the yz plane and sort the projections by the angle to the y axis. Details of the above construction are presented in the supplementary materials. Two observations can be made regarding the above construction: The above construction returns non-equivalent orders for a chiral center and its mirror image. Any SE(3) transformation of a molecule coordinates C and any internal rotation of its bonds (conformation) can only change the shift of the order o, resulting in equivalent order o'. Therefore, the above construction grasps the notion of chirality in a molecule and is additionally SE(3)- and conformation-invariant. We artificially excluded x_kj from a set of x_jk neighbors, because its parallel to x_jk and therefore its angle to y axis after the sequence of transformations is undefined. In theory, another neighbor x_ij can also be parallel to x_jk and should also be excluded from the neighbor set, but we have not observed such a case in our experiments and decided not to take it into account. §.§ Chiral-Aware Update Once we transformed a molecular graph to edge (dual) molecular graph G=(X, E, C) using transformation from <ref> and assigned every node x_jk with an order of its neighbors (x_1, ..., x_d) using construction from <ref>, we can define the order-sensitive update rule of our ChiENN model: x_jk' = W_1x_jk + W_2x_kj + ∑_p=0^d-1ψ^k(x_0+p,...,x_k-1+p), ψ^k(x_0+p,...,x_k-1+p) = W_3σ (W_4(x_0+p | ... | x_k-1+p)), where ψ^k is k-ary message function and σ is ELU non-linear activation. The update rule is almost the same as that from <ref>, but here we add a term that explicitly embeds x_kj node, which was artificially excluded from the order of the x_jk neighbors. § EXPERIMENTS We compare ChiENN with several state-of-the-art models on a variety of chiral-sensitive tasks. Details of experiments are described in Section <ref>, while the results can be found in Section <ref>. Furthermore, to validate design choices behind ChiENN we also conducted an ablation study, presented in Section <ref>. §.§ Set-up §.§.§ Datasets. We conduct our experiments on five different datasets affected by molecule chirality. First, two datasets proposed in <cit.> which are designed specifically to evaluate the capability of a model to express chirality: classification of tetrahedral chiral centers as R/S (which should be a necessary, but not sufficient, condition to learn meaningful representations of chiral molecules); and enantiomer ranking, in which pairs of enantiomers with enantioselective docking scores were selected, and the task was to predict which molecule of the pair had a lower binding affinity in a chiral protein pocket. Second, the binding affinity dataset, which is an extension of the previously described enantiomer ranking, with the same underlying molecules, but the task being regression of the binding affinity. 
Additionally, we take two datasets from the MoleculeNet benchmark <cit.> that do not explicitly require prediction of molecule chirality, but contain some percentage of molecules with chiral centers, and the underlying biological task in principle might be chirality-dependent: BACE, a binary classification dataset for prediction of binding results for a set of inhibitors of human β-secretase 1 (BACE-1) <cit.>; and Tox21, a multilabel classification dataset containing qualitative toxicity measurements on 12 different targets, including nuclear receptors and stress response pathways. §.§.§ Reference methods. As reference models we consider several state-of-the-art neural network architectures for processing graphs, both chirality-aware and general: GPS <cit.>, SAN <cit.>, DMPNN <cit.>, ChIRo <cit.>, and Tetra-DMPNN <cit.>. For models not designed to process chirality, that is DMPNN, GPS, and SAN, we additionally considered their variants with chiral atom tags included in the node features, similar to <cit.>. For the proposed approach we consider both a pure model obtained by stacking several ChiENN layers and a combination of ChiENN layers with other architectures (ChiENN+GPS and ChiENN+SAN). §.§.§ Training details. All models were trained using the Adam optimizer for up to 100 epochs, with a cosine learning rate scheduler with 10 warm-up epochs and gradient norm clipping, following the set-up of <cit.>. Cross-entropy and L1 loss functions were used for classification and regression, respectively. Note that in contrast to <cit.>, to keep the set-up consistent across models we did not use the triplet margin loss for ChIRo, and observed worse results than reported in <cit.>. We also performed a grid search with an identical budget for all models (see <ref>). For enantiomer ranking, binding affinity, and R/S, we used data splits provided by <cit.>, and for BACE and Tox21, we used random splits with a train-valid-test ratio of 7:1:2. For each model and dataset, we report mean results from 3 independent runs with the best parameters picked by grid search. §.§.§ Evaluation. Note that for the binding rank task we used an accuracy modified with respect to <cit.>. We required the difference between the predicted affinity of two enantiomers to be higher than the threshold of 0.001. This led to ranking accuracy being equal to 0 for models unable to distinguish chiral molecules. §.§ Comparison with Reference Methods In this section, we compare ChiENN-based networks with state-of-the-art reference architectures using the experimental setting described in <ref>. §.§.§ Chiral-sensitive tasks. The results on chiral-sensitive tasks are presented in Table <ref>. For both the enantiomer ranking and binding affinity, ChiENN-based approaches achieved the best results, producing a significant improvement in performance over the state-of-the-art chiral-aware architectures, that is ChIRo and Tetra-DMPNN. For both GPS and SAN, there was a significant improvement in performance due to the addition of ChiENN layers when compared to chiral tag inclusion. This demonstrates that the ChiENN layer can enable chiral-awareness, showing the general usefulness of the proposed layer and the fact that it can be combined with whichever model is preferred for a given task. Finally, as expected, all of the chirality-aware methods can properly distinguish chiral centers in the R/S task, while the baselines that do not capture the concept of chirality (DMPNN, GPS and SAN) cannot.
Note that for this task, we omitted the results for models with chiral tags encoded in node features, for which the task is trivial. §.§.§ Remaining tasks. The results on the BACE and Tox21 tasks are shown in Table <ref>. We see that the ChiENN model achieves results comparable to state-of-the-art models; however, the influence of chirality-sensitivity on these tasks is not clear. For SAN we actually observed a slight drop in performance when using ChiENN layers, and for GPS the results remained roughly the same. Possible explanations might be either 1) the lack of importance of chirality for the predicted tasks, or 2) the small dataset size, leading to overfitting in the presence of chiral information. Our conclusion is that ChiENN layers significantly improve the performance in chiral-sensitive tasks, and produce comparable results in the other tasks, where the influence of chirality is not clear. We believe that further investigation of the influence of chirality on the tasks commonly used in the molecular property prediction domain would be beneficial, and we leave it for future work. §.§ Ablation Studies §.§.§ Comparison of k-ariness of the message function. We began with an analysis of the impact of k-ariness (Equation <ref>) of the message function used by ChiENN. Specifically, in this experiment, we used the pure variant of ChiENN, which is a graph neural network using ChiENN layers as message-passing layers. We varied k ∈{1, 2, 3}, where k = 1 disables the ability of the network to distinguish enantiomers as it collapses our order-sensitive message passing scheme from <ref> to vanilla message-passing from <ref>. We considered values of k up to 3 since it corresponds to the arity of standard chiral centers observed in the edge graphs (see <ref>) of molecules. The results are presented in Table <ref>. As expected, choosing k = 1 leads to a failure in distinguishing enantiomers (it makes message passing permutation-invariant), as demonstrated by the minimal performance on the R/S and enantiomer ranking tasks. Interestingly, for most datasets choosing k = 2 was sufficient, leading to performance comparable to k = 3. The only exception was the BACE dataset, for which a noticeable drop in performance was observed when using k = 2. We used k = 3 in the remainder of this paper. §.§.§ Using the ChiENN layer with existing models. Secondly, we conducted an ablation of different design choices that can be made to enable enantiomer recognition within the existing architectures. Specifically, we focused on the GPS model and considered using three different strategies: the conversion to the edge graph proposed in this paper, the inclusion of chiral tags in the node features of the graph, and finally, the replacement of message-passing layers with ChiENN layers. The results are presented in Table <ref>. Several observations can be made: first of all, in the case of the R/S task, we can see that both the chiral tags and the ChiENN layers allow the model to properly recognize chiral centers (and, as stated before, due to the simplicity of the task, good performance here is a necessary, but not sufficient, requirement for learning meaningful chiral representations). Secondly, using ChiENN layers significantly improves the performance in the enantiomer ranking (explicitly requiring chirality) and binding affinity (implicitly requiring it) tasks, more than simply including chiral tags.
Interestingly, combining chiral tags with the edge graph transformation improves the performance compared to using the tags alone (though not as much as using ChiENN layers), suggesting that it might be a feasible general strategy. Finally, the results on the two remaining tasks, that is BACE and Tox21, for which the impact of chirality is unclear, are less straightforward: in the case of BACE, GPS with the edge graph transformation achieves the best performance, and in the case of Tox21, the combination of the edge graph transformation and the chiral tags performs best. However, we can conclude that using ChiENN layers outperforms simply including chiral tags in tasks requiring chirality, and has comparable performance to the baseline GPS in other tasks. § CONCLUSIONS In this paper, we proposed and theoretically justified a general order-sensitive message-passing scheme that can be applied to any chiral-sensitive graph domain where chirality can be expressed by an order of the neighboring nodes. We used the proposed framework to construct a novel ChiENN layer that enables chirality awareness in any GNN model in the domain of molecular graphs, where chirality plays an important role as it can strongly alter the biochemical properties of molecules. Our experiments showed that the ChiENN layer allows us to outperform the current state-of-the-art methods in chiral-sensitive molecular property prediction tasks. § ACKNOWLEDGEMENTS The research of J. Tabor was supported by the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund in the POIR.04.04.00-00-14DE/18-00 project carried out within the Team-Net program. The research of P. Gaiński and M. Śmieja was supported by the National Science Centre (Poland), grant no. 2022/45/B/ST6/01117. For the purpose of Open Access, the author has applied a CC-BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission. § ETHICAL STATEMENT As we consider our work to be fundamental research, there are no direct ethical risks or societal consequences; these have to be analyzed per concrete application. § NEIGHBOR ORDER CONSTRUCTION In this section, we describe the Transformation and Sorting steps of the order construction from <ref>. §.§ Transformation We perform a sequence of 3D transformations on c_jk and c_i_1j, ..., c_i_dj∈R^3 ×R^3 to make c_jk anchored to the coordinate origin, perpendicular to the yz plane, and pointed away from the observer. To simplify the notation, we will perform the transformations on the coordinates c_i ∈R^3 that were used in the definition of the 6D coordinates c_ij=c_i | c_j in <ref>. We define the function computing the angle between 2D vectors as angle(a, b)=arccos (ab^T/|a||b|). * We first translate every point with t(c) = c - c_j, so that c_jk is anchored to the coordinate origin, i.e. c_i=t(c_i), * We calculate α_x=angle([(c_j)_y,(c_j)_z], [0, 1]) and the matrix W_x representing the rotation about the x axis by the angle α_x. We rotate the points, so that c_i=W_xc_i, * We calculate α_y=angle([(c_j)_x,(c_j)_z], [1, 0]) and the matrix W_y representing the rotation about the y axis by the angle α_y. We rotate the points, so that c_i=W_yc_i. §.§ Sorting After applying the transformations described in the previous subsection, we can sort the 3D coordinates c_i_1,...,c_i_d by the angle α_i=angle([1, 0], [(c_i)_y, (c_i)_z]) between the y-axis and their projections onto the yz plane. Therefore, we obtain an order (x_i_π(1)k, ..., x_i_π(d)k) such that α_i_π(l)≤α_i_π(l+1).
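To make the above construction explicit, a simplified sketch follows (our own illustration; it aligns the bond with the x-axis via Rodrigues' rotation formula rather than the two explicit axis rotations listed above, which should yield an equivalent, shift-related order; all function names are ours).

```python
import numpy as np

def rotation_to_x_axis(v):
    """Rotation matrix R such that R @ (v / |v|) = e_x (Rodrigues' formula)."""
    a = v / np.linalg.norm(v)
    b = np.array([1.0, 0.0, 0.0])
    w = np.cross(a, b)
    s, c = np.linalg.norm(w), float(a @ b)
    if s < 1e-12:                       # already (anti-)parallel to e_x
        return np.eye(3) if c > 0 else np.diag([-1.0, -1.0, 1.0])
    K = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])
    return np.eye(3) + K + K @ K * ((1 - c) / s**2)

def neighbor_order(c_j, c_k, neighbor_coords):
    """Order the in-neighbors of the directed bond (j -> k).

    c_j, c_k: 3D coordinates of atoms j and k.
    neighbor_coords: 3D coordinates c_i of the neighboring atoms i.
    Anchors c_j at the origin, aligns (c_k - c_j) with the x-axis, projects the
    neighbors onto the yz plane, and sorts them by the angle to the y-axis.
    """
    R = rotation_to_x_axis(np.asarray(c_k) - np.asarray(c_j))
    angles = []
    for c_i in neighbor_coords:
        y, z = (R @ (np.asarray(c_i) - np.asarray(c_j)))[1:3]  # yz-plane projection
        angles.append(np.arctan2(z, y) % (2 * np.pi))          # angle to the y-axis
    return list(np.argsort(angles))

# Toy example: a tetrahedral-like center j with bond (j -> k) and three other neighbors.
c_j, c_k = np.zeros(3), np.array([1.0, 0.0, 0.0])
nbrs = [np.array([-0.3, 0.9, 0.1]), np.array([-0.3, -0.5, 0.8]), np.array([-0.3, -0.5, -0.8])]
print(neighbor_order(c_j, c_k, nbrs))                                      # [0, 1, 2]
print(neighbor_order(c_j, c_k, [n * np.array([1, 1, -1]) for n in nbrs]))  # mirror image: reversed order
```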
§ EXPERIMENTAL DETAILS §.§ Hyperparameter grids We performed a grid search of parameters with identical budget: for all models, with learning rate ∈{1e-3, 1e-4, 1e-5} and dropout ∈{0, 0.2, 0.5}, and with model-dependent number of layers and layer dimensionality, chosen based on the parameters from corresponding papers: for DMPNN and Tetra-DMPNN, with layers ∈{2, 4, 6} and dimensionality ∈{300, 600, 900}; for ChIRo, with layers ∈{2, 3, 4} and dimensionality ∈{64, 128, 256}; and for GPS, SAN and ChiENN, with layers ∈{3, 6, 10} and dimensionality ∈{64, 128, 256}. We restricted the grid search to a subset of a dataset of size at most 10000 molecules. As the computational costs of Tetra-DMPNN are high, for binding affinity, we took the optimal hyperparameters for DMPNN+tags as parameters for Tetra-DMPNN. We did similarly for SAN+ChiENN for R/S and binding rank and took the corresponding optimal hyperparameters from SAN.
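The sweep above is a plain exhaustive search with an identical budget per model; a minimal sketch is given below (the training routine is a placeholder, and the grid shown is the one used for the GPS/SAN/ChiENN family).

```python
from itertools import product

# Hyperparameter grid as described above (GPS / SAN / ChiENN family).
grid = {
    "lr": [1e-3, 1e-4, 1e-5],
    "dropout": [0.0, 0.2, 0.5],
    "n_layers": [3, 6, 10],
    "hidden_dim": [64, 128, 256],
}

def grid_search(train_and_validate, grid):
    """Exhaustively evaluate all configurations and return the best one.

    `train_and_validate` maps a config dict to a validation score (higher is
    better); it stands in for the actual training routine, which is not shown.
    """
    best_cfg, best_score = None, float("-inf")
    for values in product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        score = train_and_validate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy usage with a dummy validation score:
dummy = lambda cfg: -cfg["lr"] - cfg["dropout"] + cfg["n_layers"] * 0.01
print(grid_search(dummy, grid))
```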
http://arxiv.org/abs/2307.02963v1
20230706130023
Palatini $F(R,X)$: a new framework for inflationary attractors
[ "Christian Dioguardi", "Antonio Racioppi" ]
gr-qc
[ "gr-qc", "astro-ph.CO", "hep-ph" ]
http://arxiv.org/abs/2307.00654v1
20230702200909
Looks Can Be Deceiving: Linking User-Item Interactions and User's Propensity Towards Multi-Objective Recommendations
[ "Patrik Dokoupil", "Ladislav Peska", "Ludovico Boratto" ]
cs.IR
[ "cs.IR" ]
Linking Interactions and User's Propensity Towards Multi-Objective Recommendations]Looks Can Be Deceiving: Linking User-Item Interactions and User's Propensity Towards Multi-Objective Recommendations [email protected] 0000-0002-1423-628X Faculty of Mathematics and Physics, Charles University, Prague Czechia [email protected] 0000-0001-8082-4509 Faculty of Mathematics and Physics, Charles University, Prague Czechia [email protected] 0000-0002-6053-3015 University of Cagliari Italy Multi-objective recommender systems (MORS) provide suggestions to users according to multiple (and possibly conflicting) goals. When a system optimizes its results at the individual-user level, it tailors them on a user's propensity towards the different objectives. Hence, the capability to understand users' fine-grained needs towards each goal is crucial. In this paper, we present the results of a user study in which we monitored the way users interacted with recommended items, as well as their self-proclaimed propensities towards relevance, novelty and diversity objectives. The study was divided into several sessions, where users evaluated recommendation lists originating from a relevance-only single-objective baseline as well as MORS. We show that despite MORS-based recommendations attracted less selections, its presence in the early sessions is crucial for users' satisfaction in the later stages. Surprisingly, the self-proclaimed willingness of users to interact with novel and diverse items is not always reflected in the recommendations they accept. Post-study questionnaires provide insights on how to deal with this matter, suggesting that MORS-based results should be accompanied by elements that allow users to understand the recommendations, so as to facilitate their acceptance. <ccs2012> <concept> <concept_id>10002951.10003317.10003347.10003350</concept_id> <concept_desc>Information systems Recommender systems</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Information systems Recommender systems [ Ludovico Boratto August 1, 2023 ==================== § INTRODUCTION Motivation and context. Beyond-accuracy objectives are gaining more and more attention in Recommender Systems (RSs). Indeed, it is now paramount to pair recommendation effectiveness with properties that account for user perspectives (such as novelty and diversity <cit.> or consumer fairness <cit.>), or that are aligned with the recommended items (such as behavioral biases or provider fairness <cit.>). Multi-objective recommender systems (MORS) support this paradigm by generating results that account for multiple properties <cit.>. MORS can account for multiple objectives at the aggregate level, by balancing these objectives over the entire user base (e.g., the system is capable of offering a certain level of diversity), or at the individual level, by matching the beyond-accuracy needs of each user in a different way (e.g., the recommendations of a user might be more diverse those of another) <cit.>. MORS that operate at the individual level have optimized the recommendation process mainly via online interactions, such as conversational approaches <cit.> or via critiquing <cit.>, but approaches aiming at learning individual propensities from past interactions also exist, e.g., <cit.>. Open issues. 
Even though MORS that operate at the individual level have as a goal the optimization of the needs of each user, their functioning and evaluation either requires continuous interaction with the users or is based on offline data without any feedback from the users. Having these two extremes as the only options leads to two main questions that so far remain unanswered. At the RS functioning level, we need to understand how to incorporate the propensity of users towards certain beyond-accuracy properties into the recommendation process. This is not possible in offline approaches, while online ones work until a recommendation is accepted (i.e., the conversation or the critiques stop appearing). At the evaluation level, we do not know to what extent the recommendations accepted by the users are driven by these beyond-accuracy goals. Hence, understanding directly from the users their propensity towards beyond-accuracy goals and how they should be reflected in the recommendations is a key open problem for the functioning of MORS that operate at the individual level. Our contributions. To address the aforementioned issues, we present the results of a user study aimed at linking the self-proclaimed propensity of users towards relevance, novelty, and diversity criteria with their actual acceptance of provided recommendations. In particular, we asked users to iterate through several recommending sessions in the Movie domain. We confronted them with results of a relevance-based single-objective RS and two MORS variants balancing relevance, novelty, and diversity criteria. We further allowed them to tune MORS by defining their propensity towards the aforementioned criteria. Therefore, for the first time in the literature, we can link (i) the propensity of the users to interact with items characterized by certain beyond-accuracy properties, with (ii) their propensity to accept recommendations offering these same properties. Our results provide interesting insights into how users' propensity towards beyond-accuracy goals can be reflected in the individual-level MORS. Indeed, despite the users' self-declared propensity towards multi-objective goals, single-objective recommendations attracted more selections than those generated by MORS. In the evaluation, we argue that the presence of MORS recommendations (and selections) is crucial for long-term user satisfaction. We also discovered that users' selection behavior exhibits interesting deviations from the distributions induced by displayed items (impressions) and propensities towards individual objectives (weights). Indeed, in the case of single-objective RS, users on average selected items with lower estimated relevance scores than the average of recommended items. Likewise, in the case of MORS, users selected less diverse and novel items with higher estimated relevance than what was the average of those metrics w.r.t. the recommended items. This propensity towards relevant items in MORS-based recommendations happened regardless of the fact that the users could manually fine-tune the level of novelty and diversity. We briefly analyze the possible causes of these phenomena and suggest plausible mitigation strategies. § USER STUDY DESIGN The study was conducted online[Link removed for the sake of anonymization, but it will be updated in case of acceptance.] and consisted of the following steps: informed consent and basic demographics, preference elicitation, recommendation sessions (8x), and a post-study questionnaire. Dataset and pre-processing. 
The study was conducted on top of the MovieLens-Latest dataset <cit.>, which was selected for its relative novelty and high familiarity with the movie domain. The dataset was utilized in two ways: to populate collaborative filtering algorithms and as a starting point to gather item metadata. In order to comply with the gathered user selections, the feedback was binarized. Furthermore, to only focus on the relevant portion of the dataset, we filtered out movies released before 1990, ratings older than 2010, movies that have less than 50 ratings per year, users with less than 100 ratings, and movies without ratings. This resulted in 9K users, 2K movies, and 1.5M ratings. In order to properly visualize the items, additional metadata were collected from respective IMDb profiles: movie descriptions, posters, and links to movie trailers. Recommender systems. In the study, three RSs variants were evaluated (one single-objective and two multi-objective), denoted as Beta, Gamma, and Delta. Beta (single-objective baseline) follows a generalized matrix factorization  <cit.> example from tf.recommenders[<https://www.tensorflow.org/recommenders/examples/basic_retrieval>]. We used the embedding size of 32 and 5 training epochs. Gamma and Delta utilized the predictions of Beta as its relevance component, but additionally incorporated also diversity and novelty viewpoints. In particular, Delta utilized RLProp algorithm <cit.> and Gamma utilized incremental weighted average <cit.>[I.e., recommended items were selected one by one, while their marginal gains were iteratively updated.] Both algorithms were parameterized by the user's propensity towards individual objectives (described in Sec. <ref>). Beta algorithm was first trained on the MovieLens dataset and then fine-tuned separately (i.e., each study participant received their own private copy of the algorithm). Fine-tuning was done after the preference elicitation step as well as after each recommending session. Note that since Beta algorithm was utilized as a source in both Gamma and Delta, the feedback received on all recommended items was utilized for Beta fine-tuning. Also note that to enhance engagement and coverage, we prohibited repeated recommendations of items that were previously shown to the user. §.§ Study flow In the initial phase, users received a description of the study and were asked for basic demographics (e.g., gender, age, education) as well as to provide informed consent on the study procedure and publication of anonymized results. In the preference elicitation phase, participants were asked to select previously known and liked movies out of a randomized list. Depicted movies were sampled on the basis of three objective criteria: overall relevance, novelty, and diversity. For each criterion, we constructed bins of movies with high and low values, and from each bin, we randomly sampled four movies. This procedure aimed on minimizing the historical biases present in the source data. Note that users were allowed to load more movies (based on the same procedure) as well as search for a specific movie manually. During each of eight recommendation sessions, the results of two RS were shown to the user. Each time we depict an output of the single-objective RS (Beta) accompanied by one of the MORS (either Gamma or Delta). Recommendation lists were kept separated, and displayed at randomized positions. The procedure for choosing the MORS variant was as follows. Before the first session, either Delta or Gamma RS was selected at random. 
This algorithm is then used in the first four sessions, while in the last four sessions, we switch to the other MORS variant. As such, algorithm-specific sequence-aware patterns can be observed and the usage MORS variant can be considered as within-subject variable.[Merely the ordering of MORS variants is a between-subject variable.] At each recommending session, we asked study participants to provide both implicit feedback (i.e., select items that they would consider watching tonight) and to provide explicit feedback (i.e., rate the overall performance of depicted RSs on a one-to-five stars scale). After completing the feedback phase, participants were also allowed to modify their propensity (i.e., weights) towards individual objective criteria. This was conducted via a slider depicting the current values for each objective and forcing it to maintain a unit sum of all objectives. Finally, in the post-study questionnaire, we asked participants to fill in responses (on a 5-point Likert scale) to a series of questions regarding both the general performance of RS as well as questions specifically targeting the GUI for changing objective weights. Questions were inspired by the ResQue framework <cit.>, but extended to also cover the specifics of the GUI for criteria propensity setting. The questionnaire also contained several attention checks to remove unreliable participants. §.§ Considered objectives and their importance weights Both the Gamma and Delta RSs aim to incrementally construct the list of recommendations w.r.t. several objective criteria. In particular, they utilize the normalized marginal gains (NMG) individual items provide in terms of these objectives. In this paper, we focused on relevance, novelty, and diversity, defined as follows. For relevance, we considered the sum of estimated relevance scores (predicted by Beta algorithm) as an objective, so the marginal gain of each item was its own relevance score: MG_i,rel = r̂_u,i. Normalization is then applied as empirical cumulative distribution function (CDF) w.r.t. all items' marginal gains (see <cit.> for more details). Similarly, marginal gain w.r.t. novelty was defined as item's mean popularity complement <cit.>: MG_i,nov = -1* |u ∈ U: r_u,i exists| / |U|, where r_u,i is feedback of user u on item i and U is the set of all users. Marginal gain w.r.t. diversity is defined as the mean collaborative distance of the item to the list of already selected recommendations:[I.e., the diversity objective corresponds to the incremental collaborative intra-list diversity, ILD <cit.>.] MG_i,div = 1/|L|∑_∀ j ∈ L d(i,j), where L is a list of already selected recommendations and d(i,j) is a distance metric – cosine distance on items' ratings in our case. Both the Gamma and Delta algorithms used the propensity weights assigned to individual objectives. These were iteratively modified by the users after each session, but their initial values had to be trained based on the data from preference elicitation. We used a similar procedure to <cit.>. In particular, we calculated the normalized marginal gains (NMG) for each objective and each selected movie. Note that because the user's profile was not established yet, relevance gain was calculated as the mean estimated relevance of the selected items w.r.t. all train set users. Diversity gain was calculated as the mean distance of the selected items from all the displayed ones, and novelty gain remains unchanged. 
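For concreteness, the following simplified sketch (our own illustration; the function and variable names are hypothetical, and the empirical-CDF normalization is approximated by ranking within the candidate pool) shows how the three marginal gains can be combined into the incremental weighted-average selection used by the Gamma variant.

```python
import numpy as np

def cdf_normalize(gains):
    # Empirical-CDF-style normalization of marginal gains over the candidate pool.
    ranks = np.argsort(np.argsort(gains))
    return (ranks + 1) / len(gains)

def marginal_gains(cand, rel_scores, item_pop, item_vecs, selected, n_users):
    """Marginal gains of candidate items w.r.t. relevance, novelty and diversity.

    rel_scores[i]: estimated relevance of item i for the current user,
    item_pop[i]:  number of users who interacted with item i,
    item_vecs[i]: rating vector of item i (used for cosine distances),
    selected:     items already placed in the list (for incremental diversity).
    """
    rel = rel_scores[cand]
    nov = -item_pop[cand] / n_users                          # mean popularity complement
    if len(selected) == 0:
        div = np.ones(len(cand))
    else:
        V = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
        div = 1.0 - (V[cand] @ V[selected].T).mean(axis=1)   # mean cosine distance to the list
    return rel, nov, div

def greedy_mors(cand, weights, k, **kw):
    """Pick k items one by one, maximizing the weighted sum of normalized gains."""
    cand, selected = list(cand), []
    for _ in range(k):
        rel, nov, div = marginal_gains(np.array(cand), selected=selected, **kw)
        score = (weights["rel"] * cdf_normalize(rel)
                 + weights["nov"] * cdf_normalize(nov)
                 + weights["div"] * cdf_normalize(div))
        best = cand[int(np.argmax(score))]
        selected.append(best)
        cand.remove(best)
    return selected

# Toy usage: 50 candidate items, a 5-slot list, and elicited user weights.
rng = np.random.default_rng(1)
kw = dict(rel_scores=rng.random(50), item_pop=rng.integers(1, 1000, 50),
          item_vecs=rng.random((50, 30)), n_users=1000)
print(greedy_mors(range(50), {"rel": 0.5, "nov": 0.25, "div": 0.25}, k=5, **kw))
```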
Gains of all selected movies were normalized via CDF defined on the population of all displayed movies. Final estimated propensities were obtained as the mean of all items' NMGs and linearly scaled to unit sum. § STUDY RESULTS The study was conducted in April 2023. In total, 120 participants were recruited using the <Prolific.co> service. Participants were pre-screened for fluent English, no less than 10 previous submissions, and 99% approval rate. Twelve users did not finish the study and, in addition, we rejected 2 participants due to failed attention checks, which resulted in 106 completed participations. Table <ref> depicts the overall study results. It can be seen that Beta (relevance-only baseline) significantly outperformed both MORS variants w.r.t. total volume of selections as well as mean algorithm ratings. Out of the two MORS variants, Delta obtained significantly more selections (Fisher's exact test p-value: 2.6e-15) as well as significantly higher average ratings than Gamma (T-test p-value: 8.6e-6). Note that both implicit and explicit feedback modalities were correlated, but there were some discrepancies (Pearson's correlation: 0.63). We also checked several other statistics with rather expectable results: The volume of selections slightly drops for subsequent sessions (up to 28% drop), top-ranked items were selected more often than lower-ranked (up to 33% drop), etc. Some additional details are available from <https://bit.ly/looks-can-be-deceiving>. Based on these initial results, we formulated the following questions: * RQ1. Are there some qualities, in which MORS recommendations improve over the single-objective baseline? If so, what is their long-term impact on user satisfaction? * RQ2. What are the possible causes of the inferior performance of MORS? Could this be somehow mitigated? §.§ Beyond-accuracy objectives and their long-term impact In order to answer RQ1, we focused on the beyond-accuracy criteria of recommended (impressions), but also selected items. A natural choice to start with are the normalized marginal gains of considered objectives. By inspecting Table <ref>, one can observe that both Gamma and Delta clearly outperformed Beta in terms of NMG_nov and NMG_div for impressions as well as selections. The increased impression-level novelty and diversity is a direct consequence of recommendations construction, while the selection-level increase indicates that users (to some extent) followed the distribution of recommended items. Obtained results also corresponded to other novelty and diversity metrics: collaborative ILD (0.876 for Beta vs. 0.988 for Gamma vs. 0.961 for Delta), content-based (genre-based) ILD (0.323 vs. 0.385 vs. 0.369), genre coverage (0.484 vs. 0.482 vs. 0.496), mean popularity complement (-0.027 vs. -0.004 vs. -0.011), temporal novelty (0.647 vs. 0.932 vs. 0.856). Significant differences were also obtained on selections: collaborative ILD (0.868 vs. 0.964 vs. 0.946), mean popularity complement (-0.035 vs. -0.008 vs. -0.010), and temporal novelty (0.625 vs. 0.896 vs. 0.859). However, although the above-mentioned results are interesting, it is yet to be shown whether they have a practical impact. To do so, we focused on (i) the long-term user satisfaction as a function of users' acceptance of MORS recommendations in early sessions, and (ii) the impact of MORS-based selections on training single-objective RS. Impact of MORS acceptance on long-term user satisfaction. 
Let us (optimistically) assume that eight recommendation sessions constitute a sufficient base for a long-term evaluation. Our analysis is based on dividing the sessions into early (i.e., head) and late (i.e., tail). Then, we measure whether the adoption of MORS recommendations in the head had a measurable impact on user satisfaction in the tail. In particular, we considered the size of the head to be one to four first sessions and defined three metrics (w.r.t. head) to describe a user's MORS adoption: the volume of selections on single-objective RS, the volume of selections on multi-objective RS, and the ratio of multi-objective selection on all selections (in the results; we denote them as #SORS, #MORS, MORS ratio respectively). For each metric, we divided users into two groups: users with above-median values and with values below-or-equal to the median (denoted as High and Low clusters). In order to get finer-grained results, we also applied a pre-processing of users to separately consider those who had high or low total volumes of selections in the head segment (denoted as high-selections, low-selections, and all users for no pre-processing). In the tail section, we considered the total volume of selections (#selections) and the mean rating of the provided recommendation lists. Table <ref> shows the results of the long-term impact evaluation. The main outcomes are as follows. For #MORS, the High cluster exhibited better values of both #selections and mean ratings. However, the inherent flaw is that the same trend appeared in the head section as well[I.e., users who made more MORS selections also made more selections and provided higher ratings in general.]. This tendency is maintained throughout the study, seemingly without major fluctuations. Therefore, we assume that the high #MORS cluster (w.r.t. head) merely identifies a cluster of more overall engaged users. In contrast, High and Low clusters w.r.t. MORS ratio exhibited much more similar performance in the head section (at least for shorter heads – see further). In the tail sections, users of the High cluster provided on average significantly higher ratings than the users of the Low cluster. To our surprise, the impact on the volume of the selections was much smaller, often insignificant, or even negative. That is, despite selecting similar (or even lower) volumes of items, users of the High cluster were in general more satisfied with provided recommendations. Note that the impact is incremental and rather fast. While for the head sizes of one and two, there is no substantial difference in the performance of both clusters w.r.t. head, this gradually changes and already for the head size of four, this became noticeable. Interesting results were also obtained for #SORS. While the high cluster almost always exhibited a higher volume of all selections in the tail, the improvements w.r.t. mean ratings were smaller, mostly insignificant, or even negative. This was despite the fact that quite often, high user clusters were associated with higher mean ratings in the head section. We read these results in such a way that, despite being satisfied in the early stages, users who mostly adopted single-objective recommendations struggle to find sufficiently interesting/satisfying recommendations in the later stages. This is despite the fact that there are many “somewhat relevant” items (thus the higher volume of selections). 
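The head/tail clustering described above amounts to a median split on a head-segment metric followed by a comparison of tail outcomes; a compact sketch is shown below (the per-user table and its column names are illustrative assumptions, not our actual analysis code).

```python
import pandas as pd

def head_tail_split(df, head_metric, tail_metric):
    """Median-split users on a head-segment metric and compare a tail outcome.

    `df` is assumed to hold one row per user with pre-aggregated columns, e.g.
    'head_mors_ratio', 'tail_mean_rating', 'tail_selections'.
    """
    high = df[head_metric] > df[head_metric].median()
    return pd.DataFrame({
        "cluster": ["High", "Low"],
        "n_users": [int(high.sum()), int((~high).sum())],
        tail_metric: [df.loc[high, tail_metric].mean(), df.loc[~high, tail_metric].mean()],
    })

# Example call for a head of four sessions (users_df is a hypothetical per-user table):
# head_tail_split(users_df, "head_mors_ratio", "tail_mean_rating")
```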
As for the pre-processing variants, the general trend was similar for both sub-groups, but the differences were more pronounced on high-selection cluster of users. We assume that this is a natural consequence of the fact that more data is being supplied for fine-tuning. Overall, we may conclude that early adoption of MORS-based recommendations led to higher satisfaction later on. This is further corroborated by the questionnaire analysis, revealing that the users of high cluster w.r.t. MORS ratio provided more positive answers on the questions “Recommended items were novel to me” (significant for all head sizes), “Recommended items were diverse” (significant for head sizes of three and four), and “Recommended items matched my interests” (significant for low-selection users and head sizes of three and four). Impact of MORS selections on the fine-tuning of single-objective RS. In this analysis, we aimed on discovering to what extent was it beneficial to fine-tune single-objective RS with the help of MORS-based selections. To do so, we simulated the behavior of Beta, should it be trained only w.r.t. selections made on single-objective recommendations.[The procedure was incremental, i.e., in each session, we only considered those selections, for which the re-trained Beta provided an impression.] First, note that while the recommendations of single-objective-trained Beta (SOT-Beta) gradually departed from the original Beta, the intersection remained substantial (decreasing from 80% in the second session to 63% in the last session). This makes the whole procedure feasible, although we can expect that due to the lower volume of impressions, obtained results could somewhat underestimate the true performance of SOT-Beta. Now, let us observe the beyond-accuracy statistics of both Beta variants (see Figure <ref>). SOT-Beta exhibited lower collaborative ILD (0.864 vs 0.876; T-test p-value:3.7e-8) w.r.t. impressions. More importantly, this also translated into the inferior ILD w.r.t. selections – that is, when comparing the ILD of all selections of original Beta recommendations with those recommended SOT-Beta (0.857 vs 0.869, p-value: 0.028). Similar observations can be made also for impression-based content-based ILD, genre coverage, mean popularity complement, and temporal novelty. Nonetheless, in these cases, selection-level statistics did not differ significantly. Notably, the existence of MORS-based selections considerably improved the beyond-accuracy properties of the single-objective RS, which might have improved its adoption by study participants. §.§ Comparing user's selections with user-defined propensities towards beyond-accuracy objectives As can be observed from Table <ref>, users' selections did not exactly follow the distribution of the impressions. Selections made on Beta recommender exhibited significantly lower NMG_rel than corresponding impressions (T-test p-value: 4.4e-5). Also, selections made on both MORS exhibited significantly lower NMG_div and NMG_nov and simultaneously higher NMG_rel (p-values < 2.6e-14). Note that while the decrease of selection's NMG_rel w.r.t. Beta may seem modest, it is due to a very narrow distribution of NMG_rel on impressions. Differences between selection and impression distributions naturally lead to the question of to what extent is this connected to the users' self-proclaimed propensity weights. 
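Concretely, this comparison boils down to contrasting the mean normalized marginal gains of impressions and of selections with the elicited weights; a rough sketch of such a per-objective summary is given below (the dataframe layout and column names are illustrative, and the NMGs are assumed to be precomputed as described above).

```python
import pandas as pd

def propensity_vs_behavior(events, weights):
    """Contrast self-proclaimed propensities with observed behavior.

    `events` is assumed to have one row per (user, displayed item) with columns
    'selected' (bool) and precomputed 'nmg_rel', 'nmg_nov', 'nmg_div';
    `weights` maps each objective to the user's self-proclaimed propensity.
    NMGs are rescaled to a unit sum per row so that they are comparable to weights.
    """
    objs = ["nmg_rel", "nmg_nov", "nmg_div"]
    nmg = events[objs].div(events[objs].sum(axis=1), axis=0)
    out = pd.DataFrame({
        "impressions": nmg.mean(),
        "selections": nmg[events["selected"]].mean(),
    })
    out["stated_weight"] = [weights["rel"], weights["nov"], weights["div"]]
    return out
```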
Figure <ref> depicts distributions of propensity weights and NMGs[In order to make NMGs comparable with propensity weights, we re-scaled them to maintain unit sum object-wise.] of impressions and selections. Notably, for impressions, the distribution lacks the segment of very high/low values despite the demand expressed in propensity weights. This is mostly due to the existing covariance among selected objectives, which, e.g., prevents from finding items without certain levels of novelty. More importantly, note that both relevance and diversity metrics were under-represented in impressions, if compared to the propensity weights. However, while for relevance, users tend to balance this bias back by only rarely selecting items with very low relevance (see the spike towards the left side of Figure <ref>), for diversity, items with low NMG_div are over-sampled and thus the difference between selection behavior and self-proclaimed propensity amplifies. Despite the fact that users had the freedom of choosing objective weights at their discretion, their selection behavior significantly differs from the self-proclaimed propensity towards diversity, amplifying the existing bias of impression data. Seemingly, users overstated their propensity towards diversity. To some extent, we corroborated this hypothesis by analyzing the satisfaction of users as a factor of their propensity towards diversity. Users with a below-median propensity towards diversity on average selected more items from MORS recommendations (3.28 vs. 2.51, p-value 2.2e-7) and also provided higher overall ratings for MORS recommendations (2.84 vs. 2.64, p-value: 0.001). Originally, we expected that based on these inferior results, users would tend to converge towards lower weights for the diversity objective. However, no such evidence was found in the dataset. This observation could have several causes. The limited number of sessions might be simply insufficient to learn the dependencies between objective weights and self-perceived satisfaction, let alone that the dependence might vary through time as illustrated in Section <ref>. Nonetheless, the misconception or misunderstanding on the level of objective semantics and/or item's marginal gains w.r.t. these objectives may play an important role too. The post-study questionnaire provided some leads on this factor. User's overall user satisfaction (i.e., “Overall, I am satisfied with the recommender.”) was correlated with the information sufficiency (“The information provided for the recommended movies was sufficient to judge whether I gonna like them.”, Pearson's correlation: 0.42) and the ability to state one's preferences (“I was not able to describe my preferences w.r.t. relevance, diversity, and novelty.”, Pearson's correlation: -0.43). Also, while evaluating the user-perceived fulfillment of individual objectives, we found that positive answers on “The movies recommended to me matched my interests.” implied no sign. relations to the estimated relevances. Furthermore, while the positive answers on “The recommended movies were novel to me.” implied some increase of the novelty metrics (-0.0151 vs. -0.019 for mean popularity complement and 0.798 vs. 0.751 for temporal novelty), the magnitude of improvement was much higher for users who answered positively on “The recommended movies were diverse.”: -0.0145 vs. -0.0237 for mean popularity complement and 0.795 vs. 0.716 for temporal novelty. 
We can conclude the level of misconception between objective metrics and users' perception of these qualities is substantial. This is in line with the observations in related studies, e.g., <cit.>. Some parts of the post-study questionnaire suggest that this issue may be mitigated by better explanations (i.e., more informative description) of recommended items. One option would be to visualize the degree, to which items fulfill individual objectives. This would allow users to better link their perception with underlying metrics and, e.g., help to adapt the self-proclaimed propensities to this knowledge. § CONCLUSIONS AND LIMITATIONS In this paper, we conducted a user study focused on discovering the dependencies between users' interactions on items with certain beyond-accuracy properties and users' self-proclaimed propensities towards these beyond-accuracy criteria. We observed a considerable drift between both statistics and investigated the possible causes. We also provided some evidence of the benefits of MORS, despite not being the favored option from the user's (short-term) perspective. The study had several limitations which we plan to address in the future. First, only a modest volume of recommending sessions was conducted with no time in between. This prevents us from measuring the preference drifts <cit.> and/or contextual dependencies in long-term impact analysis. Also, users might not have enough time to stabilize their propensities towards individual objectives. Therefore, our future work should include studies with longer trial periods and sufficient time in between. Second, the choice of objectives as well as particular criteria might affect the results. We plan to address this by a future study with a wider set of beyond-accuracy objectives. Similarly, we plan to investigate the impact of particular GUIs for setting propensity weights. Third, the fact that Beta RS was trained w.r.t. all selections correspond to the situation, where an ensemble model is used. While this is plausible, we would also like to observe the effect of independent evolution for all RS. Last but not least, the utilized source dataset contains a rather modest volume of items, which may prove difficult to find suitable recommendations in sequential settings without repetition. Therefore, experiments on larger domains are planned as well. ACM-Reference-Format
http://arxiv.org/abs/2307.01181v1
20230703174623
Fitting an ellipsoid to a quadratic number of random points
[ "Afonso S. Bandeira", "Antoine Maillard", "Shahar Mendelson", "Elliot Paquette" ]
math.PR
[ "math.PR", "cs.DS", "cs.LG", "math.ST", "stat.ML", "stat.TH" ]
Fitting an ellipsoid to a quadratic number of random points Afonso S. Bandeira, Antoine Maillard, Shahar Mendelson, Elliot Paquette August 1, 2023 =========================================================================== We consider the problem ( P) of fitting n standard Gaussian random vectors in ^d to the boundary of a centered ellipsoid, as n, d →∞. This problem is conjectured to have a sharp feasibility transition: for any > 0, if n ≤ (1 - ) d^2 / 4 then ( P) has a solution with high probability, while ( P) has no solutions with high probability if n ≥ (1 + ) d^2 /4. So far, only a trivial bound n ≥ d^2 / 2 is known on the negative side, while the best results on the positive side assume n ≤ d^2 / (d). In this work, we improve over previous approaches using a key result of Bartl & Mendelson on the concentration of Gram matrices of random vectors under mild assumptions on their tail behavior. This allows us to give a simple proof that ( P) is feasible with high probability when n ≤ d^2 / C, for a (possibly large) constant C > 0. § INTRODUCTION We study the following question: given n vectors in ^d independently sampled from the standard Gaussian measure, when does there exist an ellipsoid centered at 0 whose boundary goes through all of the vectors? This question was raised by <cit.>, and has received significant attention recently <cit.>. We will discuss the motivations behind this problem and review some of the recent literature in Section <ref>. In the original series of work of Saunderson&al <cit.>, it was conjectured based on numerical experiments that the ellipsoid fitting property undergoes a phase transition in the limit d →∞ for n ∼ d^2 / 4. Notably, the threshold d^2/4 corresponds to the statistical dimension of the cone of positive semidefinite matrices <cit.> (see <cit.> for a discussion). Let n, d ≥ 1, and x_1, ⋯, x_n (0, _d/d). Let p(n, d) be defined as the probability of existence of a fitting ellipsoid centered in 0: p(n, d) [∃Σ∈_d : Σ≽ 0 and x_i^Σ x_i = 1 (∀ i ∈ [n])]. For any > 0, the following holds: lim sup_d →∞n/d^2≤1 - /4 ⇒lim_d →∞ p(n, d) = 1, lim inf_d →∞n/d^2≥1 + /4 ⇒lim_d →∞ p(n, d) = 0. Our main result gives a positive answer to the existence statement of Conjecture <ref>, up to a constant factor in n/d^2. We present its proof in Section <ref>. Let n, d ≥ 1, and x_1, ⋯, x_n (0, _d/d). Given any β≥ 1, there exist a (small) constant α = α(β) > 0 and a (large) constant C = C(β) > 0 such that for n ≤α d^2: [∃Σ∈_d : Σ≽ 0 and x_i^Σ x_i = 1 (∀ i ∈ [n])] ≥ 1 - C n^-β. From polynomial to exponential probability bounds – While we show a polynomial lower bound on the probability, as we will notice during the detailing of the proof, we believe that such a lower bound can be improved to an exponential lower bound of the type 1 - 2 exp(-Cd), for n ≤α d^2 and a universal constant α > 0. We highlight the principles of this improvement in the proof, and detail how it would require a slightly deeper dive into the arguments of the proof of the main result of <cit.>. Since the main conjecture of ellipsoid fitting only concerns the limit of the probability and not its scaling, we leave this improvement for future work, and will sometimes use probability estimates that are not the sharpest possible, but are sufficient for our goal. §.§ Motivation and related literature We give here a brief overview of the motivations to consider the ellipsoid fitting problem, as well as previous results on this conjecture. 
Despite the fact that Conjecture <ref> remains open, the ellipsoid fitting property is a natural question in random geometry. Notably, if the vectors x_1, ⋯, x_n satisfy this property, then there is no vector x_i lying in the interior of the convex hull of the other vectors (± x_j)_j ≠ i. Moreover, this problem has several connections with machine learning and theoretical computer science, which motivated its introduction. Examples of these connections include the decomposition of a data matrix into a sum of diagonal and low-rank components <cit.>, overcomplete independent component analysis <cit.>, or the discrepancy of random matrices <cit.>. Relations to these various problems are discussed more extensively in the introduction of <cit.>, to which we refer the interested reader for more details. The negative side of the conjecture – A dimension counting argument shows that ellipsoid fitting is generically not possible if n > d (d+1)/2, implying that the negative part of Conjecture <ref> is non-trivial only in the range d^2/4 ≲ n ≲ d^2/2. Despite the simplicity of this argument, d^2/2 is still the best-known bound on the negative side of Conjecture <ref>. Early results – In the original works that introduced the ellipsoid fitting conjecture <cit.>, it was proven that ellipsoid fitting is feasible with high probability if n ≲(d^6/5 - ) (for any > 0). This bound was improved to n ≲(d^3/2 - ) in <cit.>, where the result was obtained as a corollary of the proof of a Sum-of-Squares lower bound for the Sherrington-Kirkpatrick Hamiltonian of statistical physics[In the revised version of <cit.>, as well as in <cit.>, it was noticed that the results of <cit.> actually hold for n ≲(d^2 / (d)).], using a pseudo-calibration construction. Comparison with recent work – Our proof is based on an “identity perturbation” construction, an idea which was described in <cit.>, and used in <cit.> to prove that p(n, d) → 1 under the assumption that n = (d^2 / (d)). On the other hand, <cit.> uses a least-square construction to prove that ellipsoid fitting is possible with high probability under the similar condition n = (d^2 / (d))[We note that <cit.> was recently updated to present an alternative proof through the identity perturbation construction, again under the assumption n = (d^2 / (d)).]. Our proof follows in part the one of <cit.>, improving a crucial operator norm bound thanks to results of <cit.>. As mentioned in <cit.>, using a suboptimal bound on this operator norm was the main limitation that prevented the authors to prove the existence of a fitting ellipsoid for n ≤ d^2 / C. We emphasize that numerical studies <cit.> suggested that the identity perturbation construction is successful only in the range n ≲ d^2/10, so in order to resolve Conjecture <ref> (or even just the existence part) it appears a new idea is needed[Numerical simulations of <cit.> suggest the least-squares approach suffers from the same shortcomings.]. Parallel work – As we were finalizing the current manuscript, another proof that ellipsoid fitting is possible at a quadratic number of points was proposed <cit.>. Like our approach, the proof in <cit.> is based on the identity perturbation construction, but the proof techniques appear to us to be quite different: <cit.> relies on the theory of graph matrices, and as such strengthens similar arguments presented in <cit.> (while our proof can instead be viewed as a strengthening of the arguments in <cit.>). 
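As a purely numerical illustration (this sketch is ours and is not code from any of the cited works), the identity-perturbation construction can be tested directly in small dimensions: sample the points, solve the linear system enforcing x_i^T Σ x_i = 1, and check positive semidefiniteness. Consistently with the numerical studies mentioned above, such a check is expected to succeed for n somewhat below d^2/10 and to start failing as n grows towards d^2/4.

```python
import numpy as np

def identity_perturbation_fit(d, n, seed=0):
    """Try to fit x_1, ..., x_n ~ N(0, I_d / d) with Sigma = I + sum_i q_i x_i x_i^T.

    Returns (feasible, min_eigenvalue, max_constraint_error). The q solving the
    linear system exists generically; feasibility only asks whether Sigma is PSD.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d)) / np.sqrt(d)           # rows x_i ~ N(0, I_d / d)
    M = (X @ X.T) ** 2                                  # M[i, j] = <x_i, x_j>^2
    s = np.sum(X**2, axis=1)                            # s_i = ||x_i||^2
    q = np.linalg.solve(M, 1.0 - s)                     # enforce x_i^T Sigma x_i = 1 for all i
    Sigma = np.eye(d) + (X.T * q) @ X                   # I + sum_i q_i x_i x_i^T
    lam_min = np.linalg.eigvalsh(Sigma)[0]
    err = np.max(np.abs(np.einsum("id,de,ie->i", X, Sigma, X) - 1.0))
    return lam_min >= -1e-8, lam_min, err

# d = 40: d^2/10 = 160 and d^2/4 = 400.
print(identity_perturbation_fit(d=40, n=100))   # well below d^2/10: typically feasible
print(identity_perturbation_fit(d=40, n=350))   # approaching d^2/4: this construction often fails
```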
More specifically, our approach relies on obtaining a crucial bound on the operator norm of a kernel Gram matrix by mapping it to the Gram matrix of flattened rank-one matrices, and using the results of <cit.>. This latter work showed the concentration of the Gram matrix of i.i.d. vectors X_1, ⋯, X_n under the assumption that the first moments of the projections ⟨ X, u⟩ satisfy (uniformly in u) a ψ_α-like tail bound for some α∈ (0, 2]. §.§ The dual semidefinite program Note that ellipsoid fitting belongs to the class of random semidefinite programs, and as such admits a dual formulation. As we find the dual problem to have a particularly interesting formulation we include a short expository snippet to highlight this dual SDP, and the consequences of Theorem <ref> for it. Namely, it implies the following corollary. Let n, d ≥ 1, and x_1, ⋯, x_n (0, _d/d). Given any β≥ 1, there exist a (small) constant α = α(β) > 0 and a (large) constant C = C(β) > 0 such that for n≤α d^2: [∃ z ∈^n : ∑_i=1^n z_i = 0 and λ_max(∑_i=1^n z_i x_i x_i^) < 0 ] ≤ C n^-β. Corollary <ref> rewrites ellipsoid fitting as a problem of “balancing” rank-one matrices: we show that for n ≤α d^2 it is impossible to find a centered balancing of (x_i x_i^) such that the resulting matrix is negative definite (nor positive definite as one can always consider -z). We note however that duality doesn't play any explicit role in the proof of Theorem <ref>. By weak duality and Theorem <ref>, with probability at least 1 - Cn^-β for n ≤α(β) d^2, we have max_y ∈^n ∑_i=1^n y_i x_i x_i^≼ 0∑_i=1^n y_i = 0. We now condition on this event. Thus for all y ∈^n, if ∑ y_i > 0 then λ_max(∑_i=1^n y_i x_i x_i^) > 0. Let z ∈^n such that ∑_i=1^n z_i = 0. To prove Corollary <ref>, it suffices to show that λ_max(∑_i=1^n z_i x_i x_i^) ≥ 0. Let M(z) ∑_i=1^n z_i x_i x_i^. Let > 0, and y_i() z_i +. Since ∑ y_i > 0, there exists u_∈^d-1 (the Euclidean unit sphere in ^d) such that u_^ M(z) u_ + ∑_i=1^n ⟨ u_, x_i ⟩^2 > 0. Extracting a converging sub-sequence as → 0 by compactness, there exists u ∈^d-1 with u^ M(z) u ≥ 0. § PROOF OF THEOREM <REF> Notation – Positive universal constants are generically denoted as c_k or C_k, and may vary from line to line. We will explicit possible dependencies of such constants on relevant parameters when necessary. _d denotes the set of d × d real symmetric matrices, _d is the identity matrix, and _d is the all-ones vector. ^d-1 is the Euclidean unit sphere in ^d. Remark – Since the ellipsoid fitting has a clear monotonocity property with respect to n, we assume without loss of generality in what follows that n = ω(d^2-) for any fixed > 0. The polynomial exponent on the probability estimates, of the form n^-β, can be taken to be arbitrarily large but it will be considered fixed throughout, with β≥ 1, and as it will be clear below constants generally depend on β. §.§ Identity perturbation ansatz In the identity perturbation ansatz <cit.>, we look for a fitting ellipsoid Σ∈_d in the form: Σ = _d + ∑_i=1^n q_i x_i x_i^, for some q ∈^n. Having Σ≽ 0 is thus equivalent to: ∑_i=1^n q_i x_i x_i^≽ - _d. We denote x_i = √(d_i)ω_i, with ω_i [^d-1], and d_i x_i_2^2, and we let D ({d_i}_i=1^n) and Θ∈^n × n with Θ_ij⟨ω_i, ω_j ⟩^2. Note that d_i are i.i.d. variables, independent of the directions ω_i. Plugging the ansatz of eq. (<ref>) into the ellipsoid fitting equations x_i^Σ x_i = 1 yields: _n = D _n + D Θ D q. Assuming that D and Θ are invertible, this equation is solved by: q = D^-1Θ^-1 (D^-1_n - _n). Plugging it back into eq. 
(<ref>), we see that the identity perturbation ansatz gives a semidefinite positive solution to the ellipsoid fitting problem if Θ, D are invertible, and min_a ∈^d-1∑_i=1^n [Θ^-1 (D^-1_n - _n)]_i ⟨ a, ω_i ⟩^2 ≥ -1. §.§ Concentration of a kernel Gram matrix We use the following critical lemma on the concentration of the matrix Θ appearing in eq. (<ref>). Let n, d ≥ 1, and ω_1, ⋯, ω_n Unif[^d-1]. Let Θ_ij⟨ω_i, ω_j ⟩^2. For any β≥ 1, there are constants such that, with probability greater than 1 - n^-β - 2 exp(-c_0 n), the following occurs: Θ - Θ_ ≤C_1/d + C_2(β) (√(n/d^2) + n/d^2) Notice that Θ = (1-1/d)_n + (1/d) _n _n^. This lemma is a consequence of the analysis of <cit.>, and is proven in Section <ref>. Remark: improving the probability upper bound – A careful analysis of the proof arguments of <cit.> reveals that in the present case in which the matrix to control is a Gram matrix of sub-exponential vectors (which will be the case here as detailed in the proof), the probability estimate could likely be improved significantly to yield a probability lower bound of 1 - 2 exp(-c n). We leave for future work to carry out this improvement, and keep a formulation that follows directly from the results of <cit.>. We get the following corollary: Let n, d ≥ 1, and ω_1, ⋯, ω_n Unif[^d-1]. Let Θ_ij⟨ω_i, ω_j ⟩^2. For any β≥ 1, there exists α = α(β) > 0 and constants such that if n ≤α d^2 and d ≥ d_0(β), then with probability at least 1 - n^-β - 2 exp(-c_0 n): Θ^-1 - (_n - 1/n_n _n^)_≤C_1/d + C_2(β) √(n/d^2) + d/n. In particular, assuming n = ω(d), for all β≥ 1 there is α = α(β) > 0 such that if n ≤α d^2: [Θ^-1_≤ 2] ≥ 1 - 2n^-β. Note that Θ - [_n + (1/d) _n _n^]_ = (1/d), so that eq. (<ref>) also holds replacing Θ by _n + (1/d) _n _n^. We use the following elementary lemma, proven in Section <ref>. Let A, B ∈_n two symmetric matrices, such that B ≻ 0, and for some < λ_min(B) we have A - B _≤. Then A^-1 - B^-1_≤B^-1_^2/1 - B^-1_. Applying Lemma <ref> to B = _n + (1/d) _n _n^, such that λ_min(B) = 1, and B^-1 = _n - (d+n)^-1_n _n^, gives, with probability at least 1 - n^-β - 2 exp(-c_0 n): Θ^-1 - (_n - 1/n_n _n^)_ ≤Θ^-1 - (_n - 1/n+d_n _n^)_ + d/n, ≤C_1/d + C_2(β)(√(n/d^2) + n/d^2)/1 - C_1/d - C_2(β)(√(n/d^2) + n/d^2) + d/n, ≤C'_1/d + C'_2 √(n/d^2) + d/n. for large enough d and small enough n/d^2 (depending only on β). §.§ Reducing to a net We show some useful estimates in Section <ref>, summarized in the following lemma. Let ω_1, ⋯, ω_n [^d-1], and Θ_ij⟨ω_i, ω_j ⟩^2. Denote U(a)_i ⟨ω_i, a ⟩^2 for a ∈^d-1. We let (a_j)_j=1^N be a (1/2)-net of ^d-1. Let β≥ 1. There exists α = α(β) > 0 such that if n ≤α d^2, then we have: (i) [E_1] ≥ 1 - 2 exp(-C_1 d), with E_1 {max_j ∈ [N]U(a_j)_2 ≤ C_2} (for a sufficiently large C_2). (ii) [E_2] ≥ 1 - 2 n^-β, with E_2 {Θ^-1_≤ 2}. In the following, we fix (a_j)_j=1^N a (1/2)-net of ^d-1, such that N ≤ 5^d <cit.>. Let q̃ D^-1_n - _n. For any matrix M ∈^d × d, we have <cit.>: max_a ∈^d-1 a^ M a ≤ 2 max_a ∈ a^ M a. Therefore: [min_a ∈^d-1∑_i=1^n (Θ^-1q̃)_i ⟨ a, ω_i ⟩^2 ≤ -1] ≤[max_j ∈ [N]|∑_i=1^n (Θ^-1q̃)_i ⟨ a_j, ω_i ⟩^2 |≥1/2]. Defining g_Θ(a) ∑_i=1^n q̃_i [Θ^-1 U(a)]_i, our goal reduced to show that max_j ∈ [N] |g_Θ(a_j)| ≤ 1/2 with probability at least 1 - C n^-β, for n/d^2 small enough. First, we show that we can truncate and center the variables q̃_i: Let A_i { |q̃_i| ≤ 1 } and A ∩_i=1^n A_i. We denote r_i q̃_i | A, and y_i r_i - r_i. Then {y_i}_i=1^n are i.i.d. centered K/√(d)-sub-Gaussian random variables, for some universal K > 0. 
Moreover, for any β≥ 1 there exists α = α(β) > 0 such that if n ≤α d^2, then: [max_j ∈ [N]|∑_i=1^n (Θ^-1q̃)_i ⟨ a_j, ω_i ⟩^2 |≥1/2] ≤[max_j ∈ [N]|∑_i=1^n (Θ^-1 y)_i ⟨ a_j, ω_i ⟩^2 |≥1/4] + C n^-β. This lemma is proven in Section <ref>. §.§ Controlling points on the net In what follows, we replace the variables q̃_i by y_i, thanks to Lemma <ref> (assuming n ≤α d^2 for α = α(β) small enough). We define, for a ∈^d-1: f_Θ(a) ∑_i=1^n y_i [Θ^-1 U(a)]_i = ∑_i=1^n [Θ^-1y]_i U(a)_i, with U(a) (⟨ω_i, a ⟩^2)_i=1^n. We prove in Section <ref> the following elementary lemma: Let {y_i}_i=1^n be i.i.d. centered sub-Gaussian random variables, with y_1_ψ_2≤ K / √(d), and M ∈_n. Then: [M y_∞≥ C M_ d^-3/8] ≤ 2n exp{-d^1/4}. We let E_3 {Θ^-1 y_∞≤ C Θ^-1_ d^-3/8}, and E ∩_k=1^3 E_k. We have from Lemmas <ref> and <ref> that (recall that y is independent of Θ) there is α = α(β) > 0 such that for n ≤α d^2: [E] ≥ 1 - C n^-β. Let us fix a ∈^d-1. For η∈ (0,1) we define S(η) {i ∈ [n] : |⟨ω_i, a ⟩| > η}. Since ω_i [^d-1], |⟨ω_i , a ⟩| are i.i.d. sub-Gaussian random variables, with sub-Gaussian norm C / √(d) <cit.>. |S(η)| is thus a binomial random variable, with parameters n and p ≤ 2 exp{-C d η^2}. By Theorem 1 of <cit.>, |S(η)| is stochastically dominated by a Poisson random variable with parameter -n log (1-p). Assuming that d η^2 →∞, we have for d large enough[Since log(1-x) ≥ -2x for 0 ≤ x ≤ 1/2.], - n log (1-p) ≤ 2n p ≤ 4 n exp{-C d η^2}. Letting λ 4 n exp{-C d η^2} and X ∼Pois(λ), |S(η)| is thus stochastically dominated by X. We reach that for all x > λ (see e.g. Theorem 5.4 of <cit.> for the second inequality): [|S(η)| ≥ x] ≤[X ≥ x] ≤(eλ/x)^x e^-λ. We get from eq. (<ref>) that [|S(η)| ≥ d^1/4] ≤exp{d^1/4log (4ne) - C d^5/4η^2 -d^1/4log d/4}≤exp{d^1/4log n - C d^5/4η^2}. We decompose f_Θ(a) in two parts, which we control separately: f_Θ(a) = ∑_i ∈ S(η) [Θ^-1y]_i U(a)_i_ f_1(η,a) + ∑_i ∉ S(η) [Θ^-1y]_i U(a)_i_ f_2(η,a). First, we have that under the event E of eq. (<ref>), and by the Cauchy-Schwarz inequality: |f_1(η, a)| ≤ C d^-3/8∑_i ∈ S(η)⟨ω_i, a ⟩^2 ≤ C d^-3/8 |S(η)|. Let us pick η = d^-1/8 t, for some t ≥ 1 (so that d η^2 →∞). Using eq. (<ref>) in the previous inequality, as well as the law of total probability (and [E] ≥ 1/2), we reach: [|f_1(d^-1/8 t, a)| ≥ C_1 d^-1/8| E ] ≤ 2exp{d^1/4log n - C_2 d t^2}. We now control f_2(η, a). For a random variable X({y_i, ω_i}), we denote X_ψ_2,y the sub-Gaussian norm of the random variable with respect to the randomness of {y_i} only (i.e. conditioned on the value of {ω_i}). Since y_i are independent of {ω_i} (and thus of the choice of the set S(η) and of Θ), we get by Hoeffding's inequality (recall that y_i are i.i.d. K/√(d)-sub-Gaussian), that for all {ω_i}: f_2(η, a)_ψ_2,y^2 ≤C/dΘ^-1U(a)_2^2. Here we denoted U(a)_i ⟨ω_i, a ⟩^2 {|⟨ω_i, a ⟩| ≤η}. Therefore: f_2(η, a)_ψ_2,y^2 ≤C Θ^-1_^2/d∑_i ∉ S(η)⟨ω_i, a ⟩^4. We can then prove (see Section <ref>): For all q ∈ [1/2, 1], there is a constant C = C(q) > 0 such that for all v ≥ 0, and all η∈ (0,1): [∑_i ∉ S(η)⟨ω_i , a ⟩^4 ≥n/d^2(3+v)] ≤ 2 exp{-C min(n d^2/qη^4/q/d^4 η^8 v^2, n^q d^1-2qη^2-4q v^q) }. §.§ Ending the proof We detail now how the combination of eq. (<ref>) and Lemma <ref> allows to complete the proof. By Lemma <ref>, our task reduced to show that for a 1/2-net (a_j)_j=1^N of ^d-1, we have with probability at least 1 - Cn^-β, and assuming n ≤α d^2 for α = α(β) small enough: max_j ∈ [N] |f_Θ(a_j)| ≤ 1/4. Recall the decomposition of eq. (<ref>). 
We fix η = d^-1/8 t, for t ≥ 1 large enough (not depending on n, d) such that eq. (<ref>) gives, for n,d large enough: [|f_1(d^-1/8 t, a)| ≥ C d^-1/8| E] ≤ 10^-d. By Lemma <ref> and eq. (<ref>) we have, chosing v = 1 and q = 3/5[This is an arbitrary choice, the only requirement needed is actually that q ∈ (1/2,3/4).], that for all x > 0: [|f_2(d^-1/8 t, a)| ≥ x Θ^-1_√(n/d^2)] ≤_ω[exp(- Cn x^2/d∑_i ∉ S(η)⟨ω_i , a ⟩^4)], 2 exp{-C_1 min(n/t^4/3√(d) , n^3/5 d^-3/20 t^-2/5) } + exp(- C_2d x^2), 2 exp{-C_1 n^3/5 d^-3/20 t^-2/5} + exp(- C_2d x^2), 10^-d + exp(- C_2d x^2). where we used Lemma <ref> in ( a) with v = 1 and q = 3/5 (and bounding e^-z≤ 1), in ( b) the fact that n/√(d) = ω(n^3/5 d^-3/20) since n = ω(d), and finally in ( c) we used that n = ω(d^23/12), so that we can bound the first term by 10^-d for n,d large enough. We fix x > 0 large enough (not depending on n, d) such that the second term also satisfies exp(-C_2 d x^2) ≤ 10^-d. All in all, we get: [|f_2(d^-1/8 t, a)| ≥ C Θ^-1_√(n/d^2)] ≤ 2 × 10^-d. And thus: [|f_2(d^-1/8 t, a)| ≥ C √(n/d^2)| E] ≤[|f_2(d^-1/8 t, a)| ≥ C Θ^-1_√(n/d^2)]/[E]≤ 3 × 10^-d. Notice that the event E of eq. (<ref>) is independent of the net. Thus, we have for all u > 0: [ max_j ∈ [N] |f_Θ(a_j)| ≥ u] ≤ C n^-β + [ max_j ∈ [N] |f_Θ(a_j)| ≥ u | E]. Combining eqs. (<ref>) and (<ref>) with the union bound (recall N ≤ 5^d) we get: [ max_j ∈ [N] |f_Θ(a_j)| ≥ C_1 √(n/d^2) + C_2 d^-1/8| E ] ≤ 4 · 5^d · 10^-d≤ 4 · 2^-d. By combining eqs. (<ref>) and eq. (<ref>), taking d large enough, and n/d^2 small enough, this ends the proof of eq. (<ref>), and thus of Theorem <ref>. § AUXILIARY PROOFS §.§ Proof of Lemma <ref> We use the matrix flattening function, for M ∈_d: (M) ((√(2)M_ab)_1 ≤ a < b ≤ d, (M_aa)_a=1^d) ∈^d(d+1)/2, = ((2 - δ_ab)^1/2 M_ab)_a≤ b. It is an isometry: ⟨(M), (N) ⟩ = [MN]. Note that Θ is the Gram matrix of the i.i.d. vectors X_i (x_i x_i^) ∈^p, with p d(d+1)/2. Centering – Note that X_i_2 = x_i_2^2 = 1. Moreover, we have[We identify the matrices and their flattened versions.] [X_i] = _d / d, and if Y_i X_i - [X_i], then ⟨ Y_i, Y_j ⟩ = ⟨ X_i, X_j ⟩ - 1/d. Therefore, we can write Θ = H + 1/d_n _n^, with H_ij⟨ Y_i , Y_j ⟩ the Gram matrix of the (Y_i)_i=1^n. We also sometimes denote H = Y^ Y, with Y the matrix whose columns are given by Y_1, ⋯, Y_n. Note that [Θ] = (1-1/d)_n + (1/d) _n _n^. Thus, to prove Lemma <ref> it suffices to show that with the required probability bound: H - _n _≤C_1/d + C_2(β) (√(n/d^2) + n/d^2). Projecting – Note that ⟨ Y_i, (_d) ⟩ = 0, so that Y_i ∈{(_d)}^⊥. We denote P the orthogonal projector onto {(_d)}^⊥, i.e. P _p - 1/d(_d) (_d)^. We remark that (P Y_i)_i=1^n are still i.i.d., centered, and we have ⟨ P Y_i , P Y_j ⟩ = ⟨ Y_i, Y_j ⟩. Rescaling – Note that [Y_i] = 0, and without loss of generality (up to using the vectors Y'_i _i Y_i with _i Unif({± 1}), for which the Gram matrix H' satisfies H' = Diag() H Diag() and has thus the same eigenvalues as H) we can assume the Y_i to be symmetric. Let us compute the covariance of Y. For a ≤ b and c ≤ d, we have [Y_ab Y_cd] = [(2 - δ_ab)(2-δ_cd)]^1/2[(x_a x_b x_c x_d) - δ_abδ_cd/d^2], [(2 - δ_ab)(2-δ_cd)]^1/2/d^2[d/d+2(δ_abδ_cd + δ_acδ_bd + δ_abcd) - δ_abδ_cd], = 1/d^2[d/d+2(δ_abcd + [(2 - δ_ab)(2-δ_cd)]^1/2δ_acδ_bd) - 2/d+2δ_abδ_cd], = 2/d^2[d/d+2δ_acδ_bd - 1/d+2δ_abδ_cd]. In (a) we used the marginals of uniformly sampled random vectors on ^d-1, which can easily be obtained e.g. 
by using hyperspherical coordinates[ The two moments needed are d^2 [x_1^4] = 3 d/ (2 + d) and d^2 [x_1^2 x_2^2] = d / (d+2). ]. In matrix notation, eq. (<ref>) can be rewritten as: [Y Y^] = 2/d^2[d/d+2_p - 1/d+2(_d) (_d)^], = 2/d(d+2) P. Therefore, if we denote V_i P Y_i ∈^p-1 the coordinates of Y_i in {(_d)}^⊥, we have that ⟨ V_i, V_j ⟩ = ⟨ Y_i, Y_j ⟩, and [V V^] = 2/d(d+2)_p-1. Denote Σ (p-1) [V V^] = 2(p-1)/d(d+2)_p-1 = (1 - 1/d) _p-1. In particular Σ - _p-1_≤ (1/d). Letting Z Σ^-1/2 V, the vector Z satisfies [ZZ^] = (p-1)^-1_p-1, and the Gram matrix H_Z of Z_1, ⋯, Z_n satisfies H - H_Z = Z^ (Σ-_p-1) Z, and thus for all w ∈^p-1: |w^ H_Z w - w^ H w| = |w^ Z^ (Σ-_p-1) Z w|, ≤ (1/d) Z w _2^2, = (1/d) w^ H_Z w. Therefore H - H_Z_≤ (1/d) H_Z _. By the triangle inequality, this yields that H - _n _≤1/d + (1 + 1/d) H_Z - _n_. Using eq. (<ref>) and eq. (<ref>), it is clear that we conclude to eq. (<ref>), it is enough to show that (with the required probability bound): H_Z - _n _≤6/d + C(β) (√(n/d^2) + n/d^2). Gram matrix estimation – We will use the results of <cit.>. We need to introduce the definition of a well-behaved random vector: Let q ≥ 1. A random vector X ∈^q is said to be well-behaved for n ≥ 1 with constants L, R > 0, α∈ (0,2], δ∈ [0,1] and γ∈ [0,1) if: (i) X is symmetric and isotropic: [XX^] = _q. (ii) If one considers n i.i.d. draws X_1, ⋯, X_n, then with probability at least 1 - γ: max_1 ≤ i ≤ n|X_i_2^2/q - 1| ≤δ. (iii) For all 2 ≤ k ≤ R log n and all t ∈^q: ⟨ X, t ⟩_L_k≤ L k^1/α⟨ X, t ⟩_L_2 = L k^1/αt_2. Condition (iii) corresponds to some ψ_α behavior of the projections, uniformly in t, and for some α∈ (0,2], but only up to moments k = (log n). We can now state an immediate corollary to Theorem 1.5 of <cit.> (precisely the particular case corresponding to T being the unit sphere): Let n, q ≥ 1. Let β≥ 1. Assume that the random vector A∈^q is well-behaved with respect to n according to Definition <ref>, with constants L, R = R(β), α, γ, δ. Let M ∈^q × n be a matrix with i.i.d. columns A_1, ⋯ A_q. Then, with probability at least 1 - γ - 2exp(- c_0 n) - n^-β, we have 1/qM^ M - _n_ ≤ 2 δ + c(L, α, β) (√(n/q) + n/q). Corollary <ref> is an application of Theorem 1.5 of <cit.>, for the simplest case in which T = ^n-1, so that the Gaussian width is ℓ_⋆(T) g_2 ≃√(n) (for g ∼(0, _n)), d_T sup_t ∈^n-1t = 1, and k_⋆(T) (ℓ_⋆(T)/d_T)^2 ≃ n. More precisely, we have (1+ (n^-1)) n ≤ n^2 / (n+1) ≤ k_⋆(T) ≤ n. Note as well that we added the factor p^-1 in front of the Gram matrix M^ M (it is implicit in <cit.> because the columns of M there are A_i / √(p)). An important remark – We emphasize a technical point, related to the final probability bounds we obtain in Theorem <ref>. In what follows, we will apply Corollary <ref> with R = ∞, as the moment bound will be valid for all orders. In this context, the analysis of <cit.> would naturally imply that Corollary <ref> holds with probability at least 1 - γ - 2 exp(-c_0 n), and with a constant c(L, α) not depending on β. In turn, a more careful analysis would reveal that the probability bound of Theorem <ref> can be made exponentially small in d. However, as proving this would require a possibly lengthy technical analysis of the arguments of <cit.>, for reasons of clarity we chose to restrict to the most direct application of Theorem 1.5 of <cit.>, which gives then a sub-optimal polynomial probability upper bound. In order to deduce eq. 
(<ref>) from Corollary <ref>, with the dimension q = p-1 (recall p = d(d+1)/2), we need to verify that the distribution of the columns Z_i is well-behaved for some α, L, R, δ, γ. We let A_i √(q) Z_i, and we check that it satisfies Definition <ref>. Condition (i) – Because of the random sign that we can add wlog, we have seen that the distribution of A is symmetric. Moreover, by our analysis above, [A A^] = q [ZZ^] = _q, so that A is isotropic. Condition (ii) – Notice that for all i, Y_i_2^2 = V_i_2^2 = 1 - 1/d. Thus, with the notations from above: |1/qA_2^2 - 1| = |V^ (Σ^-1 - _q) V + 1/d|, ≤V_2^2 Σ^-1 - _q _ + 1/d, 3/d. In ( a) we used that Σ - _q_≤1/d⇒Σ^-1 - _q_≤1/d/1 - 1/d≤2/d. Thus A satisfies the condition (ii) with γ = 0 and δ = 3/d (since the bound is deterministic, there is no need to consider n i.i.d. samples). Condition (iii) – We are going to see that it actually holds for all k ≥ 2 with α = 1, i.e. the random vector A is uniformly sub-exponential. Let t ∈^q. Then[Again, since Σ-_q_≤ 1/d ⇒Σ^-1/2-_q_≤ 2/d.]: |⟨ A, t ⟩ - √(q)⟨ V, t⟩| = |√(q) V^ (Σ^-1/2 - _q) t| , ≤√(q)V_2 ×2/d×t_2, C t_2, using in ( a) that q + 1 = d(d+1)/2 and that V_2^2 = Y_2^2 = [(xx^ - _d/d)^2] = 1 - 1/d ≤ 1. We have then for all k ≥ 2: ⟨ A, t ⟩_k 2[q^k/2⟨ V, t ⟩_k^k + C^k t_2^k]^1/k , 2[√(q)⟨ V, t ⟩_k + C t_2], using in ( a) that (x+y)^k ≤ 2^k-1(x^k + y^k) for x, y > 0, and in (b) Minkowski's inequality (x+y)^1/k≤ (x^1/k + y^1/k). Therefore, it is enough to check that for all k ≥ 2: ⟨ V, t ⟩_k ≤L/d k^1/αt_2, for some α∈ (0,2]. We will use the Hanson-Wright inequality for random vectors on the sphere: Let d ≥ 1 and x ∼(^d-1). For any M ∈_d and any u > 0: [|d x^ M x - [M]| ≥ u] ≤ 2 exp{-C min(u^2/M_F^2, u/M_)}. Remark – We prove Lemma <ref> as a consequence of a general Hanson-Wright inequality for random vectors satisfying a convex Lipschitz concentration property <cit.>, easily satisfied by the Haar measure on ^d-1. We give details in Section <ref>. Recall that V = P Y ∈^q, with P the orthogonal projector onto (_d)^⊥, and that t ∈^q. If we identify t with the corresponding element of ^p (or the corresponding d × d symmetric matrix), then [t] = 0, and ⟨ V, t ⟩ = ⟨ Y, t ⟩ = x^ t x - [t]/d = x^ t x for x ∼(^d-1). Using Lemma <ref> with M = t gives: [d |⟨ V , t ⟩| ≥ u] ≤ 2 exp{-C min(u^2/t_2^2, u/t_)}. It is now classical to deduce the moments from the tails: d^k ⟨ V, t ⟩_k^k = ∫_0^∞ k u^k-1[d|⟨ V, t ⟩| ≥ u] u, ≤ 2 k ∫_0^∞ u^k-1exp{-C min(u^2/t_2^2, u/t_)} u, ≤ 2 k ∫_0^t_2^2/t_ u^k-1exp{-C u^2 / t_2^2} u + 2 k ∫_t_2^2/t_^∞ u^k-1exp{-C u / t_} u, ≤ 2 k ∫_0^∞ u^k-1exp{-C u^2 / t_2^2} u + 2 k ∫_0^∞ u^k-1exp{-C u / t_} u, ≤ k C^-k/2t_2^k Γ[k/2] + 2k (t_/C)^k Γ(k), ≤ k t_2^k { C^-k/2Γ[k/2] + 2 C^-kΓ(k)}, since t_≤t_2 = t_F. This is simply the sum of the sub-Gaussian and sub-exponential part of the tail given by Hanson-Wright's inequality. Thus we have d ⟨ V, t ⟩_k ≤ L k t_2, which is exactly eq. (<ref>) for α = 1. Applying Corollary <ref> to A = √(q)Z with L, R = ∞, α = 1, γ = 0, δ = 3/d, we reach that for all β≥ 1: Z^ Z - _n_≤6/d + C_1(β) (√(n/d^2) + n/d^2), with probability at least 1 - n^-β - 2 exp(-c_0 n). This implies eq. (<ref>) and concludes the proof. §.§ Proof of Lemma <ref> We use a generalization of Hanson-Wright's inequality (usually stated for i.i.d. sub-Gaussian vectors) which is due to <cit.>. Let n ≥ 1 and X be a random vector in ^n. 
We say that X has the convex concentration property with constant K if, for all φ : ^n → convex and 1-Lipschitz, we have |φ(X)| < ∞, and for every t > 0: [|φ(X) - [φ(X)]| ≥ t] ≤ 2 exp(-t^2/K^2). Note that if X = √(d) x, with x ∼[^d-1], then X satisfies Definition <ref> for some absolute constant K > 0 (the function φ does not even need to be convex), it is one of the most classical results of concentration of measure, cf. e.g. Theorem 5.1.4 of <cit.>. The main result of <cit.> is the following: Let n ≥ 1 and X be a zero-mean vector in ^n that has the convex concentration property with constant K. Then for all symmetric M ∈^n × n and t > 0: [|X^ M X - (X^ M X)| ≥ t] ≤ 2 exp(-C min(t^2/2K^4 M_F^2, t/K^2 M_)). Applying Proposition <ref> to the vector X described above yields Lemma <ref>. §.§ Tail bounds for chi2 random variables The following is a useful tail bound on χ_d^2 random variables, from <cit.>. Let d ≥ 1, and x_1, ⋯, x_d (0,1). Let z (1/d) ∑_i=1^d x_i^2. Then for all u ≥ 0: [z - 1 ≥ 2 √(u/d) + 2 u/d] ≤exp(-u), [z - 1 ≤ -2 √(u/d)] ≤exp(-u). Let d ≥ 1, and x_1, ⋯, x_d (0,1). Let z (1/d) ∑_i=1^d x_i^2, and we denote q̃ 1/z - 1. Notice that q̃≥ -1. Then, for all t ∈ (0,1): [|q̃| ≥ t] ≤ 2 exp(-d t^2/16). We start with the upper tail q̃≥ t. Notice that q̃≥ t ⇔ z ≤ (1+t)^-1. Using Lemma <ref> with 4 u = d [t / (1+t)]^2, we have (using that t < 1): [q̃≥ t] ≤exp{-d t^2/4(1+t)^2}≤exp{-d t^2/16}. Similarly, for the lower tail, q̃≤ -t ⇔ z ≥ (1-t)^-1. Using Lemma <ref> with 2 u = d [1 / (1-t) - √((1+t)/(1-t)) ], we have (again using that t ∈ (0,1)): [q̃≤ -t] ≤exp{-d/2[1/1-t - √(1+t/1-t)] }≤exp{-d t^2/4}. This ends the proof. §.§ Proof of Lemma <ref> Note that λ_min(A) ≥λ_min(B) -, so that A ≻ 0 and A^-1_≤B^-1_ / (1-B^-1_). We can use the standard estimate: A^-1 - B^-1_ = B^-1(B - A)A^-1_≤B^-1_A-B_A^-1_. Using the remark above and the fact that A - B_≤ completes the proof. §.§ Proof of Lemma <ref> The probability bound for the event E_2 is the conclusion of Corollary <ref>, so we focus on the bound for E_1. To control U(a)_2, we make use of the following tail bound <cit.>. Let q ∈ [1/2,1], and W_1, ⋯, W_n be i.i.d. centered random variables satisfying [|W_1| ≥ t] ≤ C_1 e^-C_2 t^q. Then for all t > 0: [|1/n∑_μ=1^n W_μ| ≥ t] ≤ 2 exp{-C(q) min(n t^2, (nt)^q)}. Lemma <ref> is a generalization of Bernstein's inequality for ψ_q tails, with q ∈ [1/2,1]. This lemma is stated in <cit.>, see Lemma 3.7 and eq. (3.7) there, and is a classical consequence of the same result for symmetric Weibull random variables <cit.>. We fix a ∈^d-1. Note that: U(a)_2^2 = ∑_i=1^n ⟨ω_i, a ⟩^4. Since ⟨ω_i, a ⟩ (ω_i)_1 by rotation invariance of the Haar measure on ^d-1, it is easy to check that [⟨ω_i, a ⟩^4] = (3/d^2)· d/(2+d) ≤ 3/d^2. Moreover, we have for all t ≥ 0 <cit.>: [⟨ω_i, a ⟩^4 ≥ t]≤ 2 exp{-C d √(t)}. Therefore, applying Lemma <ref> and using the union bound (recall N ≤ 5^d), we get: [sup_j ∈ [N]U(a_j)_2^2 ≥3n/d^2 + t] ≤ 2 exp{d log 5 -C min(d^4 t^2/n, d √(t))}. Taking e.g. t = (2 log 5 / C)^2, and since d^4/n = ω(d), we reach the conclusion. §.§ Proof of Lemma <ref> Note that q̃_i = 1 / d_i - 1 d/χ_d^2 - 1. We let r_i q̃_i | A_i. The A_i are independent, and by Corollary <ref>, [A_i] ≥ 1 - 2 exp(-d/16). By the law of total expectation and the union bound, we thus have: [max_j ∈ [N]|∑_i=1^n (Θ^-1q̃)_i ⟨ a_j, ω_i ⟩^2 |≥1/2] ≤[max_j ∈ [N]|∑_i=1^n (Θ^-1 r)_i ⟨ a_j, ω_i ⟩^2 |≥1/2] + 2n e^-d/16. 
Since q̃_i ≥ -1, for all x ∈: [r_i ≤ x] = [q̃_i ≤ x ∧ 1] / [q̃_i ≤ 1], and thus for all x ∈ (0,1), by Corollary <ref>: [r_i ≥ x] ≤[q̃_i ≥ x] ≤ 2 e^-dx^2/16, [r_i ≤ -x] ≤[q̃_i ≤ -x]/1 - 2 e^-d/16≤ 4 e^- dx^2/16. Moreover, [|r_i| > 1] = 0. r_i are thus i.i.d. sub-Gaussian random variables, with sub-Gaussian norm smaller than K / √(d). Moreover, by the law of total expectation: [q̃_i] = [r_i] (A_i) + [q̃_i {|q̃_i| ≥ 1}], so that since [A_i] ≥ 1 - 2 e^-d/16, and using Cauchy-Schwarz: |[q̃_i] - [r_i]| ≤ |[r_i]| · 2e^-d/16 + √(2)[q̃_i^2]^1/2 e^-d/32, 2 e^-d/16 + C e^-d/32 /√(d), using in ( a) that |r_i| ≤ 1 and that [q̃_i^2]^1/2≤ C/√(d). Since q̃_i = 2 / (d-2), we get | r_i| ≤3/d. Recall that y_i = r_i - r_i. Therefore we have, for all a ∈^d-1: |∑_i=1^n [Θ^-1(y - r)]_i ⟨ω_i, a ⟩^2| ≤ r_2Θ^-1_U(a)_2, ≤3 √(n)/dΘ^-1_ U(a)_2. Using Lemma <ref>, it is clear that if n ≤α d^2 for α = α(β) > 0 small enough, we have [max_j ∈ [N]|∑_i=1^n (Θ^-1 r)_i ⟨ a_j, ω_i ⟩^2 |≥1/2] ≤[max_j ∈ [N]|∑_i=1^n (Θ^-1 y)_i ⟨ a_j, ω_i ⟩^2 |≥1/4] + C n^-β. Combining eqs. (<ref>) and eq. (<ref>) gives the sought result. Finally, (y_i)_i=1^n are i.i.d. centered sub-Gaussian random variables with sub-Gaussian norm K / √(d). §.§ Proof of Lemma <ref> Let M ∈_n, and denote z M y. By Hoeffding's inequality, for all i ∈ [n]: [|z_i| ≥ t] ≤ 2 exp{-Cdt^2/M_i_2^2}≤ 2 exp{-Cdt^2/M_^2}, with (M_i)_i=1^n the rows of M, since M_≥max_i ∈ [n]M_i_2. Thus by the union bound: [z_∞≥ t] ≤ 2 n exp{-Cdt^2/M_^2}. Letting t = C M_ d^-3/8 ends the proof. §.§ Proof of Lemma <ref> Let q ∈ [1/2,1]. Recall that ∑_i ∉ S(η)⟨ω_i , a ⟩^4 = ∑_i = 1^n ⟨ω_i , a ⟩^4 {| ⟨ω_i, a ⟩| ≤η} We let z_i ⟨ω_i , a ⟩^4 {| ⟨ω_i, a ⟩| ≤η}. They are i.i.d. random variables, with [z_i] ≤[⟨ω_i, a ⟩^4] ≤ 3/d^2, and for all t ≥ 0: [z_i ≥ t] ≤min[2 e^-Cd √(t), {t^1/4≤η}], ≤ 2 exp{-C d η^2 - 4 q t^q}. Consequently z'_i = z_i d^1/qη^2/q - 4 satisfy [z'_i ≥ t]≤ 2exp{-C t^q}. We use again Lemma <ref> to get: [∑_i=1^n z_i ≥ n [z_i] + n d^-1/qη^-2/q + 4 t ] ≤ 2 exp{- C_q min(nt^2, (nt)^q)}. This last inequality can be rewritten as, for all v ≥ 0: [∑_i=1^n z_i ≥n/d^2 (3+v) ] ≤ 2 exp{- C_q min(n d^-4+2/qη^4/q-8 v^2, n^q d^1-2qη^2-4q v^q)}. Acknowledgements – The authors are grateful to Tim Kunisky, from whom they learned about this problem, and to Joel Tropp for insightful discussions. alpha
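For readers who want to experiment with the construction analyzed above, the following short NumPy sketch (our addition, not part of the original text; the dimension, sample size and seed are arbitrary illustrative choices) builds the identity-perturbation candidate Σ = I_d + ∑_i q_i x_i x_i^T by solving the fitting equations x_i^T Σ x_i = 1 for q, and then checks positive semidefiniteness on one random draw.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 60, 250                   # n well below d^2/4, where the ansatz should typically succeed

X = rng.standard_normal((n, d)) / np.sqrt(d)     # rows are x_i ~ N(0, I_d / d)
D = np.sum(X**2, axis=1)                         # d_i = ||x_i||_2^2
W = X / np.sqrt(D)[:, None]                      # unit directions omega_i
Theta = (W @ W.T) ** 2                           # Theta_ij = <omega_i, omega_j>^2

# Identity-perturbation ansatz: q = D^{-1} Theta^{-1} (D^{-1} 1 - 1), which solves
# the fitting equations 1 = D 1 + D Theta D q exactly.
q = (1.0 / D) * np.linalg.solve(Theta, 1.0 / D - 1.0)
Sigma = np.eye(d) + (X.T * q) @ X                # I_d + sum_i q_i x_i x_i^T

fit_err = np.max(np.abs(np.einsum('ij,jk,ik->i', X, Sigma, X) - 1.0))
lam_min = np.linalg.eigvalsh(Sigma).min()
print(f"max |x_i^T Sigma x_i - 1| = {fit_err:.1e}, lambda_min(Sigma) = {lam_min:.3f}")
```

On typical draws in this regime the fitting error is at machine precision and the smallest eigenvalue stays bounded away from zero, consistent with the regime n ≤ α d^2 covered by the theorem.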
http://arxiv.org/abs/2307.02648v1
20230705204531
Noise-dissipation relation for nonlinear electronic circuits
[ "Léopold Van Brandt", "Jean-Charles Delvenne" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
Noise-Dissipation Relation for Nonlinear Electronic Circuits [email protected] Institute for Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM) UCLouvain, Louvain-la-Neuve, Belgium An extension of the fluctuation-dissipation theorem is used to derive a `speed limit' theorem for nonlinear electronic devices. This speed limit provides a lower bound on the dissipation that is incurred when transferring a given amount of electric charge in a certain amount of time with a certain noise level (average variance of the current). This bound, which implies a high energy dissipation for fast, low-noise operations (such as switching a bit in a digital memory), brings together recent results of stochastic thermodynamics into a form that is usable for practical nonlinear electronic circuits, as we illustrate on a switching circuit made of an nMOS pass gate in a state-of-the-art industrial technology. [ Jean-Charles Delvenne August 1, 2023 ========================= Electronic circuits operate at an energetic cost —in the form of dissipation in the resistive parts of the circuit— that brings technological or economic costs (need for cooling, large batteries, etc.), as well as ecological issues <cit.>. The search for more energy-efficient devices sometimes runs into the problem of reliability, as the intrinsic noise level may become non-negligible at low voltages <cit.>. In parallel, there is a quest for theoretical lower bounds on dissipated energy and noise valid for a range of physical systems (electronic or otherwise) fulfilling certain tasks, notably related to computation. One early such bound is Landauer's bound <cit.>, which states that erasing a bit necessarily dissipates an energy kT ln 2, whatever the technology. Stochastic thermodynamics <cit.> offers a recent framework in which bounds beyond Landauer's can be formulated and proved rigorously for broad classes of physical systems. Such bounds include the Thermodynamic Uncertainty Relations <cit.>, which state that stationary systems exhibiting a low level of noise must necessarily dissipate a lot — in other words, suppressing noise is costly. Another family of bounds, the (Classical) Speed Limits <cit.>, provides specific trade-offs between the duration of an operation (e.g. writing a bit into a memory) and its dissipation, quantifying precisely the well-known observation that fast operations cost more. The existing Speed Limits are not always straightforward to use for electronic circuits, as they rely on assumptions on the form of noise (e.g. discrete jumps) that are not always satisfied in practice. A recent Speed Limit <cit.> is naturally applicable to electronic circuits, but does not take noise into account, missing an essential component of the dissipation-time-noise trade-off. We propose such a trade-off applicable to any linear or nonlinear resistive device, regardless of the noise model. It is a relationship between the total average charge passing through a nonlinear resistive device over a time interval, the energy dissipated in the device, and the total noise (variance) over that interval. We suppose that the nonlinear device is purely resistive with negligible internal dynamics, but is otherwise arbitrary. The device is embedded into an arbitrary circuit. For illustrative purposes and without loss of generality, we assume the simple circuit represented in <ref>.
This circuit may, for instance, be interpreted as writing a bit from logical 0 to logical 1 by transferring a certain amount of charge into the capacitance. A (nonlinear) resistive device at temperature T is intrinsically noisy due to the random agitation of the charge carriers (electrons). Indeed, when subjected to a constant voltage difference applied externally during a time , it is traversed by a random charge , dissipating on the way an energy =. Charges during distinct time intervals are statistically independent, i.e. the noise on current is white (possibly Gaussian, or Poisson, or else). For a linear device, mean and variance of are related by the celebrated fluctuation-dissipation theorem <cit.>, which translates into the also famous Johnson-Nyquist formula <cit.> for a linear resistor, holding for all equilibrium (= 0) and non-equilibrium conditions <cit.>. Arbitrary nonlinear devices follow a more general relationship linking all moments, the fluctuation relation <cit.>, a consequence of which is the Thermodynamic Uncertainty Relation <cit.>: {}/kT≥ 2 {}^2/{}. In (<ref>), both the mean {} and the variance {} are proportional to (due to whiteness of the noise), hence both sides of (<ref>) scale as ∝. It was shown that the relation is tight (i.e. is an equality) if the random fluctuations are symmetric around the mean, e.g. for Gaussian fluctuations <cit.>. When the voltage across the device is time-varying (non-stationary conditions, for instance in a switching digital circuit) and random, we may still apply (<ref>) to a small or infinitesimal interval within the whole time interval [t_0,t_0+], and conditionally to a given value (t). This leads naturally to, for any voltage difference (t)= over any infinitesimal time interval [t,t+]: {|(t)=}/kT≥ 2 {|(t)=}^2/{|(t)=}. Because we are interested in the unconditional mean dissipation {}= _{|(t)=} (where _ refers to averaging over all values of ), we may write {}/kT≥ 2 _{{|(t)=}^2/{|(t)=}}. However, we would like to obtain an expression involving unconditional means and variances, more accessible and interpretable than their conditional variants. To that purpose, we can define a scalar product between arbitrary (square integrable) real-valued functions f() and g() ⟨ f|g ⟩=_{ f()g() } As all scalar products, it satisfies the well known Cauchy-Schwarz inequality ⟨ f|f ⟩≥⟨ f|g ⟩^2 / ⟨ g|g ⟩. Let us define f≡{|(t)=}/√({|(t)=}) and g ≡√({|(t)=}), and apply Cauchy-Schwarz inequality. We recognize ⟨ f|f ⟩ as the r.h.s. of (<ref>), up to the factor 2. Furthermore, we observe from the law of total variance that {} = _{|(t)=} + _{|(t)=} =_{|(t)=} + (^2) , the latter term being thus negligible. We finally get: {}/kT≥ 2 {}^2/{}. Integrating (<ref>) over the whole time interval [t_0,t_0+], we obtain a lower bound on dissipation, which is our first main result: {}/kT≥ 2 ∫_t_0^t_0 + {}^2/{}. In (<ref>), {} is the average energy dissipation over the whole time interval [t_0,t_0+], i.e. {} = ∫_t_0^t_0 + {} = ∫_t_0^t_0 + {(t) (t)} . We can relax further this inequality by applying again the Cauchy-Schwarz inequality. We now use the scalar product ⟨ f|g ⟩'=∫_t_0^t_0+ f' g' on real-valued functions f'(t) and g'(t) of time over the interval [t_0,t_0+Δ t]. We apply it to f'(t)≡{(t)} / √({(t)}) and g'(t) ≡√({(t)}) to obtain {}/kT≥ 2 {}^2/∫_t_0^t_0+{}=2 {}^2/{} , which is our second main result. The time-averaged variance {} is ∫_t_0^t_0+{}/. Although this bound is not as tight as (<ref>), it may prove easier to evaluate and interpret in many cases. 
In particular, =∫_t_0^t_0+ in the numerator is the total charge passed through the device over the interval, which is usually the quantity of interest. In summary, for a given , fast charge transfer (low ) and/or low-noise process (small {}) implies large dissipation. The relationship (<ref>) can indeed be seen as a novel speed limit relation: passing a certain charge over a device with a typical level of noise {} within a duration necessarily dissipates an energy that is inversely proportional to . This speed limit differs from other recent speed limits recently obtained that also obtain a 1/ behaviour. For instance, most speed limits <cit.> only apply to discrete jumps of charges (e.g. shot noise), while we cover all noise sources, discrete or continuous with a single formula. Let us also mention the recent deterministic speed limit <cit.>, ≥≡^2/ , which does not directly take noise into account. In (<ref>), denotes the time-averaged conductance of the driving device. Finally, we also see that for a given , a low noise level (low variance) can only be obtained at the cost of high dissipation. Our bounds <ref> thus express a trade-off between speed, dissipation and noise for an arbitrary nonlinear resistive device. Our preliminary application is the case where the driving device in <ref> is a linear resistor of conductance G, that can be covered in detail analytically. From Ohm's law, {dq(t)|(t)=} = G. Johnson-Nyquist's formula <cit.> is an electrical equivalent of Einstein's diffusion law <cit.>, and an avatar of the general fluctuation-dissipation theorem <cit.>, stating here that (t)=2kT G dt. Note in passing that Johnson-Nyquist's formula is often expressed in the electronic literature for the stationary random current i(t)=dq/dt for a constant . This current is defined (as a random function of time) only in a weak sense, i.e. if we limit it to a finite frequency bandwidth Δ f. The variance is then {i(t)}= 4kTG Δ f. In this case we can interpret the current defined by i(t)=dq(t)/dt (for a small, non-infinitesimal ) as being limited to the frequency band [0,1/2]. The system is thus a linear system with an external driving v_IN(t) and an internal noise, described by a linear stochastic differential equation (a Langevin equation more precisely), which can be solved explicitly. In this context, (<ref>) is satisfied with equality, and our main result (<ref>) becomes: ≥ = {}^2/G, which is precisely (<ref>), the minimum dissipation from the deterministic speed limit<cit.>, for a constant capacitance load and by identifying = G. In the linear case, thanks to Johnson-Nyquist's noise model (a specific case of the fluctuation-dissipation theorem) turning <ref> into equalities, our bound coincides with a deterministic treatment<cit.>. We shall now see it is not the case in a nonlinear case. Because the scope and the significance of our main theoretical results <ref> are only fully appreciated in circuits involving a nonlinear device, we illustrate their application to the circuit of <ref>: a constant capacitor is charged through a pass gate, implemented with a MOS (Metal-Oxide Semiconductor) transistor. The simulated signals and extracted quantities are depicted in <ref>. The charging process is, without loss of generality, here ensured by the linear input voltage ramp of amplitude = 1 sketched in <ref>(a), corresponding to the supply voltage of the used CMOS technology. The bit writing operation is assumed finished when (t) reaches V_1 = 80 (shown in (<ref>(a)). 
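Before turning to the transistor simulations, the linear benchmark just discussed can be checked numerically. The sketch below (our own illustration; the values of kT, G, C, the ramp amplitude and the time step are arbitrary but representative assumptions, not taken from the paper) integrates the Euler-discretized Langevin dynamics of a capacitor charged through a linear conductance with Johnson-Nyquist current noise, and compares the average dissipated energy with the step-wise integral bound (expected to be nearly an equality for Gaussian noise) and with the relaxed speed-limit bound.

```python
import numpy as np

rng = np.random.default_rng(1)

kT = 4.14e-21                      # J (room temperature)
G, C = 1e-6, 1e-15                 # conductance (S) and load capacitance (F): RC = 1 ns
V0, T, dt = 1.0, 5e-9, 1e-12       # ramp amplitude (V), ramp duration (s), time step (s)
steps, paths = int(T / dt), 2000

v_in = V0 * np.arange(steps) * dt / T      # linear input voltage ramp
v = np.zeros(paths)                        # capacitor voltage, one entry per sample path
diss = np.zeros(paths)                     # dissipated energy per path
mean_dq, var_dq = np.zeros(steps), np.zeros(steps)

for k in range(steps):
    # dq = G (v_in - v) dt + Johnson-Nyquist noise of variance 2 kT G dt
    dq = G * (v_in[k] - v) * dt + np.sqrt(2 * kT * G * dt) * rng.standard_normal(paths)
    diss += (v_in[k] - v) * dq             # energy dissipated in the resistor
    mean_dq[k], var_dq[k] = dq.mean(), dq.var()
    v += dq / C

E_mean = diss.mean()
bound_stepwise = 2 * kT * np.sum(mean_dq**2 / var_dq)          # integral (step-wise) bound
bound_speed = 2 * kT * np.sum(mean_dq)**2 / np.sum(var_dq)     # relaxed speed-limit bound
print(f"mean dissipation {E_mean:.3e} J, step-wise bound {bound_stepwise:.3e} J, "
      f"speed-limit bound {bound_speed:.3e} J")
```

On typical runs of this linear Gaussian case the step-wise bound essentially coincides with the measured dissipation (up to Monte Carlo error), while the relaxed bound sits somewhat below it because the mean current is not constant under the ramp drive.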
In the most advanced technologies favoured for digital circuit design <cit.>, the noise model of the transistor can be very complex (since, in the charge-based modelling approach, it depends on the small-dimension effects part of the deterministic static model <cit.>) and not known in an insightful closed form. We therefore avoided to resort to a simplified and inaccurate analytical model of the transistor and we have performed SPICE noise simulation in the time domain (<ref>), compatible with the process design kit provided by the semiconductor foundry <cit.>. Industrial simulators, such as Eldo®, provide a transient noise analysis tool <cit.>. All noise sources are modelled as Gaussian, irrespective of existing discussions of the physics origin of the thermal noise and the thermodynamic inconsistency of the pure Gaussian model <cit.>. Individual noise source are dynamically generated in the time domain according the compact model of the device, and the noise is then handled as any other electrical signal during a transient simulation. The intrinsic noise is the noise generated by the MOS transistor of <ref>. Within SPICE formalism, the intrinsic noise of the MOS transistor, that is the noise produced by the device itself, is modelled as a current Gaussian fluctuation. One sample path of i(t) (among 1000 generated and processed) is represented in <ref>(b), as well as the empirical mean. Whereas the physical white noise is, neglecting quantum effects, of infinite bandwidth, the noise is generated within a specified finite bandwidth, denoted , for the transient simulation. The must be selected in order to capture the dominant effect of the thermal noise on the charge transfer, extracted as about 200, beyond the bandwidth of the circuit of <ref> for different from 0 to 1. We have recorded 1000 sample paths (or trajectories) or traces) of the voltage and current signals simulated in the time domain. A specified imposes a constraint on the (maximum) simulation time step () to be used <cit.>: = 1/(2 ), according to Shannon-Nyquist sampling theorem. The time step = 2.5 is small enough to be numerically assimilated to an infinitesimal time interval, both to compute the charge increment (t) = i(t) = C ((t+) - (t)) and the different integral quantities. The sample mean and variance (of i(t) in <ref>(b) and of (t) in <ref>(c)) are unbiased empirical estimators <cit.>, and the variability observed for these statistics in <ref>(b,c) could of course be reduced at the expense of an increased number of simulated paths. In <ref>(c), we first depict the empirical estimators of the two important quantities of (<ref>), {(t)}^2 and {(t)} (left y axis). Generally, we expect from the relations <ref> that the ratio {}^2/{} predicts the trend of the evolution over time of the dissipation, i.e. that the dissipation is locally large when the intrinsic noise fluctuations (relative to the average instantaneous charge transfer) is low (and conversely). The discrepancy over time between the actual energy dissipation and the ratio {}^2/{} (right y axis) tells us how tight we can expect the lower bounds <ref> to be (see coming results), and thereby assesses the energy efficiency of the charge transfer through the nonlinear dissipative device (of some intrinsic noise level) for the considered input signal and speed. 
By inspecting first <ref>(a), we can basically distinguish two regimes of different energy efficiency for the charge transfer, characterized by a larger or smaller conductance of the nonlinear driving device. For the (important) special case of the MOS transistor (<ref>), the regime may be referred to as the inversion level (or region) <cit.>. In the first (I) regime, the charging process is efficient in the sense that (t) closely follows (t) (the so-called quasi-adiabatic conditions <cit.>) and the dissipation is low. For the case illustrated in <ref>, the speed of the charging process, /, is low and constant during regime I, which simplifies the interpretation of the general results <ref>. The transistor is in strong inversion, meaning that it is highly conductive and also that the intrinsic current noise is large <cit.>. This is indeed what we observe in <ref>(b,c): {i(t)} (or {(t)}) is large and drops gradually (while {(t)}^2 does not vary much). In the second (II) regime, the transistor falls in weaker inversion (low conductance) and (t) painfully follows (t) <cit.> (“the nMOS is not good at passing a 1” <cit.>). The dissipation is large (see the gray shark-fin-shaped peak in <ref>(c)), as evidenced by the high-amplitude intrinsic noise fluctuations in this regime. We believe that this circuit application highlights the correlation between the fluctuation and dissipation trends, here in dynamic and nonlinear conditions. Finally, let us emphasize that, in <ref>(c), the discrepancy between the actual dissipation and {}^2/{} becomes significant after 10, precisely when the charge transfer gets even more inefficient. In <ref>, we summarize the dissipation-related quantities extracted from the transient simulation presented in <ref>. The expected or average dissipation, computed according to the definition (<ref>), is the reference value to which we compare our different lower bounds. These are listed in ascending order in <ref>, which is consistent with their order of appearance in the text. The integral bound (<ref>) is lower than the actual dissipation only by a factor of 4.4. In light of <ref>(c), we attribute this discrepancy to the second regime of the charging process, where the nMOS transistor becomes highly inefficient in fully passing a logical 1. The worsened conductivity (larger dissipation) is correlated with a lower noise level, a reality reflected in the relationship (<ref>). Importantly, the reported discrepancy reveals the non-Johnson-Nyquist nature of the noise fluctuation-dissipation process of the nonlinear device <cit.> (as opposed to the linear resistor). We know that the shot noise model is consistent with MOS transistor operation in weak inversion <cit.>. In the case of shot noise <cit.>, the relation (<ref>) is loose by a factor of 4 for a voltage difference ≈ 200 mV (i.e., about 8 times the thermal voltage kT/q, see Figure 1 in the reference <cit.>), which is broadly consistent with our numerical observations (a short numerical check of this factor is sketched at the end of the text). This shows that even though our bound holds for any model of noise, different noise models make the bound more or less tight. The lower bound (<ref>) is, as announced earlier and here verified experimentally, less tight than (<ref>), lying more than one order of magnitude below the dissipation. To conclude, we have exploited the theoretical framework of stochastic thermodynamics to propose noise-dissipation relations valid in non-equilibrium and non-stationary conditions, relevant for switching digital circuits that are strongly nonlinear.
Two different lower bounds were provided for the energy dissipation. The relations involve the time-varying statistics of the noise over the charge transfer process. We have applied and discussed them for linear and nonlinear dipoles. The quantitative analyses deduced from the simulations are insightful about the physical origin of the noise, which cannot merely be pure Johnson-Nyquist noise for nonlinear dissipative devices <cit.>. Further work would deepen this aspect, in connection with recent noise measurement and modelling work <cit.>, and would extend the mathematical formalism to more complex circuits with multiple dissipative and dynamic components (e.g. CMOS logic gates). The work has been supported by the Research Project "Thermodynamics of Circuits for Computation" of the National Fund for Scientific Research (F.R.S.-FNRS) of Belgium. The authors would like to thank Prof. Denis Flandre, Prof. David Bol, Mr. Martin Lefebvre and Mr. Adrian Kneip for the valuable discussions that contributed to this work.
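The factor-of-four looseness quoted above for shot noise at roughly eight thermal voltages can be checked in a few lines. The sketch below (our own illustration, using the textbook bidirectional-Poisson junction model obeying local detailed balance, not code from the paper) evaluates the ratio between the dissipation rate and the right-hand side of the noise-dissipation relation; analytically this ratio is (x/2)/tanh(x/2) for a dimensionless voltage x = V/(kT/q).

```python
import numpy as np

x = 8.0                                        # voltage in units of the thermal voltage kT/q (~200 mV)
lam_f, lam_b = np.exp(x / 2), np.exp(-x / 2)   # forward/backward Poisson rates (common prefactor cancels)

mean_rate = lam_f - lam_b        # E[dq] per unit time, in units of the elementary charge
var_rate = lam_f + lam_b         # Var[dq] per unit time, in units of charge squared
diss_rate = x * mean_rate        # dissipation rate V*I, in units of kT per unit time
tur_rhs = 2 * mean_rate**2 / var_rate   # right-hand side of the noise-dissipation bound

print(diss_rate / tur_rhs)       # (x/2)/tanh(x/2), which is about 4.00 for x = 8
```

This reproduces the factor of about 4 mentioned in the discussion for a bias of roughly 200 mV at room temperature.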
http://arxiv.org/abs/2307.01237v1
20230703121728
Dynamical Graph Echo State Networks with Snapshot Merging for Dissemination Process Classification
[ "Ziqiang Li", "Kantaro Fujiwara", "Gouhei Tanaka" ]
cs.LG
[ "cs.LG", "cs.SI" ]
The GDGESN with snapshot merging for DPC tasks Z. Li et al. International Research Center for Neurointelligence, The University of Tokyo, Tokyo 113-0033, Japan {ziqiang-li,kantaro}@g.ecc.u-tokyo.ac.jp Department of Computer Science, Graduate School of Engineering, Nagoya Institute of Technology, Nagoya 466-8555, Japan [email protected] Dynamical Graph Echo State Networks with Snapshot Merging for Dissemination Process Classification Ziqiang Li10000-0002-7208-9003 Kantaro Fujiwara10000-0001-8114-7837 Gouhei Tanaka1,20000-0002-6223-4406 Received date ; Accepted date =========================================================================================================== The Dissemination Process Classification (DPC) is a popular application of temporal graph classification. The aim of DPC is to classify different spreading patterns of information or pestilence within a community represented by discrete-time temporal graphs. Recently, a reservoir computing-based model named Dynamical Graph Echo State Network (DynGESN) has been proposed for processing temporal graphs with relatively high effectiveness and low computational costs. In this study, we propose a novel model which combines a novel data augmentation strategy called snapshot merging with the DynGESN for dealing with DPC tasks. In our model, the snapshot merging strategy is designed for forming new snapshots by merging neighboring snapshots over time, and then multiple reservoir encoders are set for capturing spatiotemporal features from merged snapshots. After those, the logistic regression is adopted for decoding the sum-pooled embeddings into the classification results. Experimental results on six benchmark DPC datasets show that our proposed model has better classification performances than the DynGESN and several kernel-based models. § INTRODUCTION The dissemination process is used to describe the spreading of information (e.g. fake news and rumors) or infectious diseases (e.g. Covid-19 and meningitis) within a community. Since dissemination patterns of virus strains or different rumors are various, it is hard to recognize them within a relatively short period of time accurately. Based on this background, Dissemination Process Classification (DPC) is a highly-demanded technology for experts in relevant fields to distinguish them before carrying out possible interventions and countermeasures. Normally, dissemination processes can be represented by temporal graphs with dynamic connections and temporal signals. We show an example of an epidemic spreading in Fig <ref>, where G(t) means the t-th snapshot of the temporal graph 𝒢. We can notice that uninfected people (marked in black) can be infected probabilistically by contact with infected people (marked in red). Usually, one kind of epidemic has its own basic reproduction number, which leads to different dissemination processes. Generally, DPC can be turned into a Discrete-time Temporal Graph (DTG) classification task. To deal with this task, advanced deep learning models  <cit.> designed by combining variants of Graph Convolutional Networks (GCNs) <cit.> with those of Recurrent Neural Networks (RNNs) <cit.> and/or the attention mechanism <cit.> are widely considered to be ideal choices. The common ground of these models in structure is leveraging multiple graph convolution layers for extracting spatial features and using recurrent layers or attention layers for mining the temporal relationships. 
In this regard, high computational costs need to be spent to obtain a well-trained complex model. Furthermore, extra efforts for solving gradient explosion and vanishing problems are unavoidable. Another direction is to use some transformation methods to stitch the sequential snapshots into a large-scale static graph and then apply some graph kernel methods (i.e. the Weisfeiler-lehman graph kernels <cit.>) to generate the final classification results. Methods following this direction can avoid effects on capturing the temporal dependency in the snapshot sequence, but the large-scale static graphs lead to significantly high computational costs in the calculations of the gram matrix for a support vector machine <cit.> by using those graph kernels <cit.>. Reservoir Computing (RC) <cit.> is an efficient framework derived from RNNs, which maps the sequential inputs into high dimensional spaces through a predetermined dynamical system. This characteristic enables the training costs of its derived models to be remarkably lower than those of fully-trained RNNs. The Echo State Network (ESN) <cit.>, as one of the representative models of RC, and its variants have been intensively studied for handling various time series processing tasks <cit.>. Recently, D. Tortorella & A. Micheli successfully extended the standard ESN to a novel RC model called Dynamical Graph Echo State Network (DynGESN) <cit.>, which is capable of dealing with discrete-time temporal graphs processing tasks. A recent work has demonstrated that DynGESN outperforms some kernel-based methods on a number of DPC benchmark datasets <cit.>. However, we noticed that only one single dynamical characteristic included in the original temporal graphs is extracted in DynGESN <cit.>. Obviously, this monotonous strategy may hinder the model from extracting diverse dynamical characteristics extended from the original temporal snapshots. To solve this problem, a new model, Grouped Dynamical Graph Echo State Network (GDGESN), is proposed for DPC tasks in this study. This model can extract various spatiotemporal features from augmented inputs by group-wise reservoir encoders and generate accurate dissemination classification results by a linear classifier efficiently. In this regard, we propose a simple augmentation strategy called snapshot merging to generate multi-timescale temporal graphs and then leverage the multiple-reservoir framework <cit.> to build the group-wise reservoir encoders. We execute experiments for comparing the classification performances with those of the DynGESN and some kernel-based methods on six benchmark DPC datasets. The experimental results show that the accuracies of our model are higher than those of DynGESN and are close to those of kernel-based methods on some DPC datasets, which manifests the GDGESN owns relatively high effectiveness in dealing with DPC tasks. The rest of this paper is organized as follows: The preliminary about temporal graphs is introduced in Section <ref>. The proposed method is described in Section <ref>. The analysis about the computational complexity of the proposed model is presented in Section <ref>. The details of the experiments are introduced in Section <ref>. The discussion is given in Section <ref>. § PRELIMINARIES Generally, a discrete-time temporal graph is composed of a sequence of snapshots, which can be denoted by 𝒢={ G ( t ) } _t=1^N_T, where G ( t ) is the snapshot at time t and N_T is the length of 𝒢. 
The snapshot G ( t ) = {𝐯 ( t ), 𝐀 ( t ) } contains a time-varying vertex signal vector 𝐯 ( t )∈ℝ^N_V and the corresponding adjacency matrix 𝐀 ( t )∈ℝ^N_V× N_V, where N_V is the number of vertices. The state value of the i-th vertex at time t can be represented by v_i(t)∈ℝ. For representing the dissemination process, we define that v_i(t) =1 if the i-th person is affected at time t and v_i(t) =0 otherwise. We suppose that each graph is undirected, which can be represented by A_i,j ( t ) = A_j,i ( t ) = 1 if there is a contact between the i-th person and the j-th person, A_i,j ( t ) = A_j,i ( t ) = 0 otherwise. Moreover, we assume that a dissemination process classification dataset has N_S temporal graphs and the corresponding labels, which can be represented by {𝒢_s, 𝐲_s} _s=1^N_s, where 𝐲_s∈ℝ^N_Y is the label represented by the one-hot encoding for 𝒢_s. § THE PROPOSED MODEL A schematic diagram of the GDGESN is shown in Fig. <ref>. This is a case where a dissemination process represented by a discrete-time temporal graph is fed into the GDGESN with three groups of reservoir encoders. We can notice that the model consists of three components, including a merged snapshot converter, a set of multiple-reservoir encoders, and a linear classifier. In the merged snapshot converter, a DTG is transformed into three new DTGs with different window sizes. In the multiple-reservoir encoder, each transformed temporal graph is fed into the corresponding group-wise reservoir encoders for generating various vertex embeddings. In the linear decoder, the aggregated embeddings of the last time step obtained by the sum-pooling operation are collected from all reservoir encoders and then decoded into the classified results. The details about the above-mentioned components are introduced in Sections <ref>, <ref>, and <ref>, respectively. §.§ The merged snapshot converter The merged snapshot converter is proposed to merge several neighboring snapshots into one merged snapshot. To this end, we define a window that slides on the zero-padded snapshot sequence. We denote the size of the sliding window by ω. For simplicity, we fix the stride of this sliding window to be one. In order to keep the length of the merged snapshot sequence the same as that of the original snapshot sequence, we add (ω-1) empty snapshots into the beginning of the original snapshot sequence, which can be formulated as follows: 𝒫_s = {G_nil,…,G_nil_ω-1,G ( 1 ),G ( 2 ),…,G ( N_T ) }, where 𝒫_s means the s-th snapshot-padded sequence with length (N_T+ω-1) and G_nil represents the empty snapshot whose signal value of each vertex is zero. We assume that the merged temporal graphs corresponding to N_G different sizes of the sliding windows can be organized into N_G groups. Therefore, we represent the size of the sliding window corresponding to the g-th group as ω^(g) for g = 1, 2, …, N_G. Based on the above settings, We can merge snapshots into a new snapshot by executing the logical OR operation within a sliding window with size ω^(g), which can be formulated as follows: v_i^ ( g ) ( t ) = v_i ( t-ω^ ( g ) +1 ) ∪ v_i ( t-ω^ ( g ) +2 ) ∪…∪ v_i ( t ), A_i,j^ ( g ) ( t ) = A_i,j ( t-ω^ ( g )+1 ) ∪ A_i,j ( t-ω^ ( g )+2 ) ∪…∪ A_i,j ( t ). From Eq. <ref>, we can obtain diverse spatiotemporal information with different ω^ ( g ). Figure <ref> shows an example of transforming the original snapshot sequence into a merged snapshot sequence by the merged snapshot converter with ω=2. 
This example indicates that the merged snapshot converter can produce multi-timescale spatiotemporal inputs with different sizes of sliding windows. The experimental results presented in Section <ref> demonstrate that the merged snapshot converter with various sizes of sliding windows can improve the classification performances of the GDGESN on some DPC datasets. §.§ The multiple-reservoir encoder The multiple-reservoir encoder is proposed for extracting spatiotemporal features from merged snapshot sequences. We organize reservoir encoders following the layout described in Ref. <cit.>. Note that a reservoir encoder denoted by Θ_enc contains an input weight matrix 𝐖_in∈ℝ^N_R× N_U and a reservoir matrix 𝐖_res∈ℝ^N_R× N_R, where N_R is the size of the reservoir. We add a superscript ( g,l ) to Θ_enc for indicating the encoder located at the l-th layer of the g-th group, which can be formulated by Θ_enc^(g,l) = {𝐖_in^(g,l), 𝐖_res^(g,l)} for 1≤ g≤ N_G and 1≤ l≤ N_L, where N_G and N_L are maximal numbers of groups and layers, respectively. In the encoding process, the vertex embedding matrix at time t, 𝐗^ ( g,l ) (t )∈ℝ^N_R× N_V, can be calculated as follows: 𝐗^ ( g,l )_s (t ) = α f ( 𝐖_in^ ( g,l ) 𝐔^ ( g,l )_s ( t )+𝐖_res^ ( g,l ) 𝐗^ ( g,l )_s (t-1 ) 𝐀 ( t ) ) + ( 1-α ) 𝐗^ ( g,l )_s (t-1 ), where α∈ ( 0,1 ] is the leaking rate, f ( · ) is an activation function, and 𝐔^ ( g,l ) is the input matrix used for receiving the various vertex inputs, i.e. 𝐔^ ( g,l ) _s (t )=𝐯^ ( g ) _s (t ) for l= 1 𝐗^ ( g,l-1 ) _s (t ) for l>1 , the element values of 𝐖_in∈ℝ^N_R× N_V are randomly chosen from a uniform distribution with the range of [ -η,η ]. The element values of 𝐖_res^ ( g,l ) are randomly assigned from the uniform distribution [ -1, 1 ]. In order to ensure the echo state property <cit.> in each encoder, we keep ρ ( 𝐖^ ( g,l ) _res ) <1/ ρ ( 𝐀_s ( t ) ) at each time step. §.§ The linear classifier The recognition score of the s-th temporal graph, ŷ_s∈ℝ^N_Y, is calculated through a simple linear mapping, which can be formulated as follows: ŷ_s = 𝐖_out𝐜_s+𝐛, where 𝐜_s is the sum-pooled vector which can be calculated as follows: 𝐜_s= [ sp ( 𝐗_s^ ( 1,1 ) (N_T) ) ; sp ( 𝐗_s^ ( 1,2 ) (N_T) ) ;…; sp ( 𝐗_s^ ( N_G,N_L ) (N_T) ) ] ∈ℝ^N_RN_GN_L, where [ ·;· ] represents the vertical concatenation and sp ( · ) acts for the operation of summing N_V column vectors of 𝐗_s^ ( g,l ) ( N_T ) up. The readout matrix 𝐖_out∈ℝ^N_Y× N_RN_GN_L can be calculated as follows: 𝐖_out = 𝐘𝐂^T ( 𝐂𝐂^T+γ𝐈 )^-1, where 𝐂= [ 𝐜_1, 𝐜_2, …, 𝐜_N_S ] ∈ℝ^N_RN_GN_L× N_S is the collected matrix including N_S sum-pooled vectors, 𝐘= [ 𝐲_1,𝐲_2,…, 𝐲_N_S ] ∈ℝ^N_Y× N_S is the target matrix, and γ is the regularization parameter. The output for the s-th sample can be determined by the index of the maximum element in ŷ_s^ ( i ). § THE ANALYSIS OF THE COMPUTATIONAL COMPLEXITY We provide an analysis of the computational complexity of training the GDGESN in this section. Since the number of edges in each temporal graph is dynamic, we denote the number of edges for the s-th temporal graph at time t by E_s ( t ). We define that the sparsity of 𝐖_res in each reservoir is φ∈ ( 0, 1 ]. The computation in the merged snapshot converter costs 𝒪 ( ∑_s=1^N_S∑_t=1^N_TE_s ( t ) ). The computational complexity in each encoder is ∑_s=1^N_S∑_t=1^N_Tφ N_R^2E_s ( t ). The computational complexity of training the linear classifier is 𝒪 ( ( N_GN_LN_R ) ^2 ( N_S+N_GN_LN_R ) ). 
It is obvious that the computational complexity of the proposed model in the training phase is mainly determined by the relatively larger part between the cost of running the multiple-reservoir encoder and that of the training decoding module. Therefore, the total computational complexity can be summarized as follows: max ( 𝒪 ( N_GN_Lφ N_R^2∑_s=1^N_S∑_t=1^N_TE_s ( t ) ), 𝒪 ( ( N_G N_LN_R ) ^2 ( N_S+N_GN_LN_R ) ) ). In this study (see Section 5.2), N_G, N_L, and N_R are much smaller than N_S and ∑_t=1^N_TE_s ( t ). Therefore, the computational complexity of training the GDGESN can be reduced to 𝒪 (∑_s=1^N_S∑_t=1^N_T E_s ( t ) ), which is the same with the computational complexity of DynGESN and significantly lower than many kernel-based methods <cit.>. § EXPERIMENTS §.§ Descriptions of Datasets Six benchmark dissemination process classification datasets released in Ref. <cit.> were used to evaluate the performances of different models. We present their details in Table. <ref>. For these six datasets, The Susceptible-Infected (SI) epidemic model <cit.> is used to simulate spreading processes with the corresponding infection probabilities on temporal graphs. Note that there are two categories of infections with probabilities p_1 and p_2 in every dataset, and the spreading pattern corresponding to only one probability (p_1 or p_2) exists in each temporal graph for a dataset. The datasets attached with the suffix `_ct1' indicate that the infection probability of a spreading pattern is p_1=0.5 or p_2=0.5 in each temporal graph, and the others show that a spreading pattern with the infection probability p_1 = 0.2 or p_2=0.8 exists in each temporal graph. The goal of the experiment is to test whether a tested model can identify two spreading patterns accurately for each dataset. In this study, we filtered empty adjacency matrices from each temporal graph sequence. §.§ Tested models and experimental settings We leverage some kernel-based models in the experiments for comparison. These models can transform a temporal graph into a large-scale static graph and use kernel methods to generate final classification results. A transforming method, the directed line graph expansion (DL) <cit.>, was leveraged to combine with the random walk kernel (RW) <cit.> and the Weisfeiler–Lehman subtree kernel (WL) <cit.>. These two combinations are represented by DL-RW and DL-WL, respectively. Since the transformed static graph leads to significantly high computational complexity for these two models <cit.>, a simplified DL-RW method called approximate temporal graph kernel (APPR-𝒱) <cit.>, which can sample k-step random walks starting on only 𝒱 vertices of the transformed graph, was used as another tested model. In the experiments, 𝒱 was fixed at 250. Moreover, the prototype of GDGESN, dynamic graph echo state network (DynGESN) <cit.> is considered as a baseline model. Note that the three kernel-based models used supported vector machine <cit.> rather than the simple linear classifier leveraged by the DynGESN and the proposed GDGESN for generating classification results. For the proposed GDGESN, the parameter settings are listed in Table <ref>. We kept values of the spectral radius, the leaking rate, the input scaling, and the regularization factor the same as those of the DynGESN reported in Ref. <cit.>. We fixed the density of the reservoir connections and the reservoir size to be 1E-3 and 10, respectively. 
The number of layers and the number of groups were searched in the ranges of [ 1, 2, …, 4 ], and [ 1, 2, 3 ], respectively. We set the size of the sliding window at ω^ ( g ) = 2g-1 for g=1,2, … N_G. Note that the target of this study is to show the classification improvement in performances brought about by the merged snapshot strategy in the GDGESN. Therefore we did not consider searching the key parameters of encoders and the linear classifier for extreme performances. The computational environment is an Intel (R) Core i9-7900X CPU with 96GB RAM of DDR4 2666MHz. For the partition of datasets, each dataset was evenly separated into ten parts. We cyclically picked up nine of them for the training set and the rest for the testing set for cross-validation. Based on each partition, we randomly initialized the proposed model 20 times and reported the average performances. §.§ Evaluation metrics The accuracy rate is given by the following evaluations, which can be formulated as follows: Acc = The number of correct classified temporal graphs/The number of total temporal graphs× 100%. §.§ Experimental results To investigate the impacts brought about by the merged snapshot converter of the proposed model, we show the average classification performances of the proposed model with different combinations of N_L and N_G on six datasets in Fig. <ref>. We notice that the GDGESN with N_G>1 outperforms the GDGESN with N_G=1 when varying N_L from one to four on all the DPC datasets except highschool_ct2. Specifically, the classification performances of the GDGESN with N_G>1 obviously surpass those with N_G=1 when N_L=1. These results indicate that multi-timescale spatiotemporal inputs generated by the merged snapshot converter can significantly improve the classification performances of the GDGESN for the tested DPC datasets. The best average classification performances of the GDGESN and other tested models reported in <cit.> are listed in Table <ref>. We highlight the corresponding best performances obtained among the APPR-250, the DynGESN, and the GDGESN in bold since these three models have significantly lower computational costs than DL-RW and DL-WL. The best performance of the GDGESN for each dataset is obtained under the best combination of N_P and N_L shown in Fig. <ref>. We can see that the GDGESN outperforms the APPR-250 and the DynGESN on dblp_ct1, dblp_ct2, highschool_ct1, and tumblr_ct1. In particular, our model falls behind the DL-RW only on tumblr_ct1. In addition, the GDGESN only has a few inferiorities of classification performances in comparison with the DynGESN on highschool_ct2. Note that the dimension of each vertex embedding for the GDGESN is only 10, whereas that for the DynGESN is 16 <cit.>. By observing Fig. <ref> and Table <ref> jointly, we find that our model achieves the highest performances when N_L<4 on all the tested datasets except for highshcool_ct2 and tumblr_ct2, but the DynGESN obtains the best classification performances by setting N_L=4 on all datasets <cit.>. § DISCUSSION We have proposed a new RC-based model for dealing with DPC tasks in this study. The proposed model can transform the original dissemination process into various multi-timescale dissemination processes and then extract the corresponding spatiotemporal features through fixed group-wise reservoir encoders. These features are decoded into the final classification results by a simple linear classifier. 
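To make the pipeline recapped above concrete, the sketch below (our own illustration; all sizes, probabilities, hyperparameter values and function names are toy assumptions rather than the benchmark settings, and the simplified SI generator need not match the one used for the datasets) builds a miniature DPC task with two infection probabilities, applies the sliding-window OR merging, encodes each sequence with a single graph-ESN reservoir shared across samples, and trains the closed-form linear readout. The full GDGESN stacks N_G × N_L such encoders, one per window size and layer, and concatenates their pooled embeddings; only one encoder is shown here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def si_sequence(T, n, p_edge, p_inf):
    """Schematic SI dissemination process on a random temporal graph:
    returns vertex signals V (T, n) and symmetric adjacencies A (T, n, n)."""
    infected = np.zeros(n, dtype=bool)
    infected[rng.integers(n)] = True
    V, A = np.zeros((T, n)), np.zeros((T, n, n))
    for t in range(T):
        M = rng.random((n, n)) < p_edge
        A[t] = np.logical_or(M, M.T)                 # undirected contacts at time t
        contact = A[t][:, infected].any(axis=1)      # touches at least one infected vertex
        infected |= contact & ~infected & (rng.random(n) < p_inf)
        V[t] = infected
    return V, A

def merge(V, A, omega):
    """Sliding-window logical OR over the zero-padded snapshot sequence (stride 1)."""
    T, n = V.shape
    Vp = np.vstack([np.zeros((omega - 1, n)), V])
    Ap = np.concatenate([np.zeros((omega - 1, n, n)), A])
    Vm = np.stack([Vp[t:t + omega].max(axis=0) for t in range(T)])
    Am = np.stack([Ap[t:t + omega].max(axis=0) for t in range(T)])
    return Vm, Am

# Miniature dataset: the two classes are two infection probabilities.
T, n, N_S = 10, 30, 60
labels = rng.integers(0, 2, N_S)
merged = [merge(*si_sequence(T, n, p_edge=0.05, p_inf=(0.2, 0.8)[y]), omega=3) for y in labels]

# One fixed reservoir encoder, shared by all sequences; W_res is rescaled so that
# rho(W_res) * rho(A(t)) < 1 over the whole dataset (echo state property).
N_R, alpha, eta, rho = 10, 0.9, 0.1, 0.9
W_in = rng.uniform(-eta, eta, (N_R, 1))
W_res = rng.uniform(-1.0, 1.0, (N_R, N_R))
rho_A = max(np.abs(np.linalg.eigvals(Am[t])).max() for _, Am in merged for t in range(T))
W_res *= rho / (np.abs(np.linalg.eigvals(W_res)).max() * rho_A)

def encode(Vm, Am):
    """Leaky graph-ESN update followed by sum pooling over vertices at the final step."""
    X = np.zeros((N_R, Vm.shape[1]))
    for t in range(Vm.shape[0]):
        X = alpha * np.tanh(W_in @ Vm[t][None, :] + W_res @ X @ Am[t]) + (1 - alpha) * X
    return X.sum(axis=1)

C = np.stack([encode(Vm, Am) for Vm, Am in merged], axis=1)           # (N_R, N_S)
Y = np.eye(2)[labels].T                                               # one-hot targets, (2, N_S)
W_out = Y @ C.T @ np.linalg.inv(C @ C.T + 1e-3 * np.eye(N_R))         # closed-form ridge readout
acc = (np.argmax(W_out @ C, axis=0) == labels).mean()
print(f"training accuracy on the toy task: {acc:.2f}")
```

On typical runs the two spreading rates are easily separable, so the printed training accuracy is close to 1; the sketch is only meant to show how merging, encoding and the ridge readout fit together (the real experiments use ten-fold cross-validation rather than training accuracy).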
The simulation results show that our proposed model outperforms the DynGESN and even several kernel-based models on some benchmark DPC datasets. In addition, the analysis of computational complexity shows that our model has the same training cost as the DynGESN. Based on these results, we conclude that the proposed GDGESN achieves relatively high effectiveness and efficiency in dealing with DPC tasks. The best attainable performance has clearly not yet been reached, since we only used moderate values of the key hyperparameters for the multiple reservoirs in our model. We will continue exploring the optimal performance of the GDGESN on various DPC tasks in the future. § ACKNOWLEDGEMENTS This work was partly supported by JST CREST Grant Number JPMJCR19K2, Japan (ZL, FK, GT) and JSPS KAKENHI Grant Numbers 23H03464 (GT), 20H00596 (KF), and Moonshot R&D Grant No. JPMJMS2021 (KF).
http://arxiv.org/abs/2307.03286v1
20230706204817
Physics-Infused Machine Learning Based Prediction of VTOL Aerodynamics with Sparse Datasets
[ "Manaswin Oddiraju", "Divyang Amin", "Michael Piedmonte", "Souma Chowdhury" ]
cs.CE
[ "cs.CE" ]
Physics-Infused Machine Learning Based Prediction of VTOL Aerodynamics with Sparse Datasets Manaswin Oddiraju [Ph.D. Student, Department of Mechanical and Aerospace Engineering, University at Buffalo, AIAA Student member.], Divyang Amin [Flight Sciences Engineering Lead, Bechamo LLC], Michael Piedmonte[Chief Technical Officer, Bechamo LLC], Souma Chowdhury[Associate Professor, Department of Mechanical and Aerospace Engineering,University at Buffalo, AIAA Senior member. Corresponding author. Email: [email protected]] ==================================================================================================================================================================================================================================================================================================================================================================================================================================================== plain plain specialfooter Complex optimal design and control processes often require repeated evaluations of expensive objective functions and consist of large design spaces. Data-driven surrogate models such as neural networks and Gaussian processes provide an attractive alternative to expensive simulations and are utilized frequently to represent these objective functions in optimization. However, pure data-driven models, due to a lack of adherence to basic physics laws and constraints, are often poor at generalizing and extrapolating. This is particularly the case, when training occurs over sparse high-fidelity datasets. A class of Physics-infused machine learning (PIML) models integrate ML models with low-fidelity partial physics models to improve generalization performance while retaining computational efficiency. This paper presents two potential approaches for Physics infused modelling of aircraft aerodynamics which incorporate Artificial Neural Networks with a low-fidelity Vortex Lattice Method model with blown wing effects (BLOFI) to improve prediction performance while also keeping the computational cost tractable. This paper also develops an end-to-end auto differentiable open-source framework that enables efficient training of such hybrid models. These two PIML modelling approaches are then used to predict the aerodynamic coefficients of a 6 rotor eVTOL aircraft given its control parameters and flight conditions. The models are trained on a sparse high-fidelity dataset generated using a CHARM model. The trained models are then compared against the vanilla low-fidelity model and a standard pure data-driven ANN. Our results show that one of the proposed architecture outperforms all the other models and at a nominal increase in execution time. These results are promising and pave way to PIML frameworks which are able generalize over different aircraft and configurations thereby significantly reducing costs of design and control. § INTRODUCTION Real world optimal design and control problems consist of optimizations with large design spaces and computationally expensive objective functions and therefore require efficient and accurate models of complex physical systems. There are different proposed approaches to render these problems computationally feasible, such as the use of surrogate models, reduced order modelling methods and multi-fidelity optimization techniques. The use of data-driven surrogate models for optimization <cit.> is a popular approach to reduce the computational burden and render the process tractable. 
However, using pure data driven models poses additional challenges in the form of poor explainability and usually these models require large amounts of data either from expensive high-fidelity analysis or experiments in order to achieve sufficient accuracy. For many engineering design problems however fast running physics-based models are available, but they usually trade-off accuracy for performance. Therefore, in this paper, we propose two Physics Informed Machine Learning (PIML) models which combine data driven Artificial Neural Networks (ANNs) with low-fidelity physics models and augment the performance of the physics models while only needing a sparse high-fidelity dataset. We also develop an end-to-end auto-differentiable physics framework to enable efficient training of such hybrid architectures and apply them to model the aerodynamics of a 6 rotor eVTOL aircraft. The rest of this section briefly covers existing techniques for augmenting low-fidelity physics models using high-fidelity data, physics informed machine learning architectures before stating our research objectives. Conventional methods for enhancing low-fidelity computational models with high-fidelity or experimental data encompass techniques like Bayesian Inference <cit.>, Gradient-Based and Gradient-Free Optimization <cit.>, among others. The primary goal of these techniques is to determine the optimal parameters of the low-fidelity model in order to improve its overall performance. However, due to the complexity of the physical phenomena often modeled by these techniques, identifying a static set of parameters for a dataset might be inadequate and may not fully capitalize on the potential accuracy improvements offered by a sparse high-fidelity dataset. On the other hand, data-driven models, due to their near-instantaneous run times, are being utilized frequently for speculating the behavior of complex systems in domains such as mechanical systems <cit.>, robotics <cit.>, and energy forecasting <cit.>. Models such as Neural Networks, Gaussian Processes etc. are showing competitive prediction accuracy on some problems in this domain. However, they underperform at extrapolating and generalizing <cit.>, especially when trained with small or sparse data sets <cit.>. This can be attributed to a lack of adherence to basic laws of physics. Additionally, they also exhibit challenging-to-interpret black-box behavior and sensitivity to noisy data <cit.>. In lieu of these problems, significant research efforts are being directed towards "hybrid models" or "physics-infused" models. These models generally integrate data-driven and low-fidelity physics models in order to make predictions computationally feasible and accurate. There are different classes of physics-infused neural networks; although almost all of them make use of a data-driven model and a physics model, they differ in the architectures. Hybrid ML architectures can broadly be classified into serial<cit.> and parallel <cit.> architectures. Serial architectures typically have the data-driven models set in sequence with the partial physics model or used to tune the partial physics model parameters, while parallel architectures usually contain additive or multiplicative ensembles of partial physics and data-driven ML models <cit.>. 
Several such hybrid PIML architectures have been reported in the literature in the past few years <cit.>, spanning over a wide range of applications such as in modeling dynamic systems, cyber-physical systems, robotic systems, flow systems and materials behavior, among others. The OPTMA model <cit.>, is a physics-infused machine learning model which combines an artificial neural network with a partial physics model in order to make predictions. Earlier applications of this framework on acoustics problems have shown that it generalizes well and can even extrapolate successfully. This architecture primarily works by using the data-driven model -in this case the transfer network; to map the original inputs to the inputs of the partial physics model so as to make its output match that of the high-fidelity model. Hence, the training process is just the transfer network learning this mapping. And since the intermediate parameters are open and interpretable to domain experts (as they are inputs to partial physics models), the OPTMA model is less of a black box and more understandable when compared to pure data-driven models. In all of these sequential hybrid-ML models however, the presence of an external partial physics model increases the complexity and cost of the training process. Previously<cit.>, this hurdle was overcome by programming custom loss functions which include the partial physics in PyTorch<cit.> to enable backpropagation. However, this approach may not be feasible in all domains as PyTorch is not optimal for general purpose scientific computing (especially numerical methods). Therefore, in our goal to make OPTMA more user-friendly and computationally tractable, we use Google JAX <cit.> to auto-differentiate the partial physics model. JAX is capable of forward and backward mode auto-differentiation and works on codes written in python and is also easy to integrate with the transfer network written in PyTorch. This is a much more generalizable and scalable approach to creating computationally efficient PIML frameworks. During the process of aircraft design, and eventually the aircraft controller design, engineers often encounter the earlier stated predicament of selecting appropriate methods to model the aircraft. They may need to balance the benefits and drawbacks of using a method that is highly accurate, albeit costly and time-consuming, against a faster and more economical approach that might not provide the same level of precision.The complexity of this problem is magnified when engineers are tasked with designing more challenging-to-control aircraft than traditional fixed-wing varieties. A prime example is the Vertical Takeoff & Landing (VTOL) aircraft. As the aircraft designs become more complex, the modeling demands can escalate exponentially. In the case of VTOL, it is essential to model the aircraft dynamics across all phases: from hover to transition, and finally into cruise. Such aircraft also exhibit more intricate aerodynamics, including phenomena like blown-wing effects. There is typically a greater number of degrees of freedom, which further escalates the complexity of designing robust controllers for VTOL aircraft. Given the numerous benefits and strengths of physics-infused modeling, and the necessity for fast accurate models for aircraft design and control, it is evident that PIML models are particularly well-suited for modeling the aerodynamics of an aircraft. 
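Before moving on, the following toy example illustrates the JAX auto-differentiation capability mentioned above. It is only a schematic illustration, not part of the BLOFI code: the quadratic-drag-like "physics" function and all variable names are our own placeholders.

import jax
import jax.numpy as jnp

def toy_partial_physics(u):
    # Placeholder low-fidelity model: maps intermediate parameters u
    # (e.g., induced velocities) to a scalar aerodynamic coefficient.
    return jnp.sum(u ** 2) / (1.0 + jnp.linalg.norm(u))

u = jnp.array([1.0, 0.5, -0.2])

# Reverse-mode (backpropagation-style) gradient of the physics output.
grad_rev = jax.grad(toy_partial_physics)(u)

# Forward-mode derivative along a tangent direction, useful when the
# number of inputs is small relative to the number of outputs.
_, jvp_out = jax.jvp(toy_partial_physics, (u,), (jnp.ones_like(u),))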
Therefore, in this paper, we propose two physics informed modelling approaches to augment a low-fidelity VLM model with a sparse high-fidelity dataset. One of the approaches, called PIML-A is an extension of the OPTMA architecture. The other architecture, named PIML-B is a comparatively simpler ensemble model that learns the error in the low-fidelity physics models. The architectures are aimed at improving the accuracy of the low-fidelity physics models while only using a sparse high-fidelity dataset to keep training costs low. Our research objectives are as follows: * Create a Vortex Lattice Method (VLM) model with auto-differentiation capabilities. * Develop different PIML architectures seeking to provide better accuracy / cost trade-offs than pure Low-Fidelity and pure data-driven models. * Apply the above architectures to model the control inputs to force and moment coefficients of a eVTOL aircraft. * Compare the performance and computational cost of the PIML models with the baseline low-fidelity and data-driven models. The remainder of this paper is laid out as follows: Section<ref> contains a description of the two PIML models framework and their components in detail. Section<ref> explains our case study and the models that we are using before moving on to Section<ref> which contains our results and discussion. Finally, we state our conclusions in Section<ref>. § PHYSICS INFUSED MACHINE LEARNING ARCHITECTURES Fig.<ref> shows the two proposed PIML frameworks as well as the vanilla low-fidelity physics model (ℒ). In this paper, all the models shown are used to predict the airframe force and moment coefficients for a given set of control and environmental parameters. The PIML models amalgamate low-fidelity physics models with Artificial Neural Networks (ANNs) to enhance predictions, yet they vary based on the specific role of the ANNs. As depicted in the framework diagram <ref>, the low-fidelity model comprises of a propeller solver and the Vortex Lattice Method (VLM) model. The prop solver computes induced velocities which are then fed into the VLM, which in turn calculates the force and moment coefficients on the airframe. The PIML-A model incorporates ANN layers to achieve three primary objectives: 1) To supersede the prop solver and directly estimate the axial and tangential induced velocities of the propellers, 2) to modify the flight conditions in order to enhance the predictions of the VLM model, and 3) to adjust the output of the VLM to reduce discrepancies with respect to the high-fidelity data. Given that all layers are trained concurrently, the PIML-A model learns the mapping from flight conditions and control inputs to the transfer parameters. Simultaneously, it learns the output errors of the VLM and adjusts them to align more closely with the high-fidelity data. The reasoning behind this model is that part of the errors in the VLM model can be mitigated by shifting the inputs into different domains, and the errors that remain, not resolved by this input modification, will be addressed by the correction layers on the output side. In contrast, the PIML-B model does not adjust any inputs to the original low-fidelity model and retains the prop solver in its structure. Within this design, the sole objective is to identify the error in the induced velocities computed by the prop solver and rectify these parameters. This model is more closely aligned with the low-fidelity physics model and aims to enhance modeling accuracy by refining the outputs of the prop solver. 
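The two architectures described above can be summarized by the following schematic forward passes. This is a structural sketch only: transfer_net, output_correction, velocity_correction, prop_solver, and vlm stand for the corresponding components in the framework diagram referenced above, and their interfaces are assumed for illustration rather than taken from the actual implementation.

def piml_a_forward(x, transfer_net, vlm, output_correction):
    """PIML-A: ANN layers replace the prop solver, shift the flight
    conditions fed to the VLM, and correct the VLM outputs."""
    induced_velocities, shifted_conditions = transfer_net(x)
    coeffs_lf = vlm(shifted_conditions, induced_velocities)
    return coeffs_lf + output_correction(x, coeffs_lf)

def piml_b_forward(x, prop_solver, velocity_correction, vlm):
    """PIML-B: the prop solver is retained; an ANN only corrects the
    induced velocities it produces before they enter the VLM."""
    induced_velocities = prop_solver(x)
    corrected = induced_velocities + velocity_correction(x, induced_velocities)
    return vlm(x, corrected)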
The following subsection focuses on the training process of these PIML models, specifically the computation of gradients and the implementation of backpropagation through the low-fidelity physics models. §.§ PIML Model Training The training of neural networks (i.e., optimizing the weights of the neural network to minimize a loss) is performed using gradient descent and therefore requires the gradient of the loss function (L) with respect to the weights (W). The following equations show how backpropagation gradients are carried through an auto-differentiable physics model. This feature enables the training of our PIML architectures using standard ANN training techniques and standard ANN software libraries. For a single sample the MSE loss is L = ‖ Y_i - Y_i^'‖^2, and over a batch L = 1/n ∑_i=1^n (Y_i - Y_i^')^2, so that ∂ L/∂ W = 1/n ∑_i=1^n 2 (Y_i^' - Y_i) ∂ Y_i^'/∂ W, where Y_i = ϕ(X_i) is the ground truth, Y_i^' = G(X_i, W) is the prediction of the sequential PIML model G, and ϕ represents the high-fidelity physics model. The terms X_i, Y_i, Y_i^' represent the input, the high-fidelity output, and the OPTMA prediction of the i^th training sample, respectively. From the definition of Y_i^': Y_i^' = G(X_i, W), and hence ∂ Y_i^'/∂ W = ∂ G(X_i, W)/∂ W. As G is the OPTMA model, we have G(X_i, W) = Φ( U(X_i, W) ), so that ∂ G(X_i, W)/∂ W = ∂Φ(U)/∂ U (partial-physics auto-differentiation) ×∂ U(X_i, W)/∂ W (backpropagation). Here, U(X_i, W) is computed by the transfer network (the neural network) and Φ represents the partial-physics model; U is the intermediate parameter vector passed from the transfer network to the partial-physics model. § CASE STUDY: MODELLING AERODYNAMICS OF A TILT-ROTOR EVTOL AIRCRAFT The Aerobeacon, shown in Fig. <ref>, is a 6-rotor aircraft with 4 hover rotors in a quad configuration and two tilt-rotors at the edges of the wings. The aircraft has no rudder and instead makes use of differential thrust between the tilt rotors for yaw control. The only movable control surface on the aircraft apart from the tilt-rotors is an elevator. Since this is our initial attempt at modelling this aircraft, we chose to fix the hover rotor RPMs at 5000 and only vary the other control inputs listed in Table <ref>. Table <ref>: Aircraft control parameter bounds. Speed (v): [0 m/s, 45 m/s]; Angle of Attack (α): [-15^∘, 15^∘]; Starboard Propeller RPM (ω_star): [4000, 10000]; Port Propeller RPM (ω_port): [4000, 10000]; Starboard Propeller Angle (θ_star): [0^∘, 110^∘]; Port Propeller Angle (θ_port): [0^∘, 110^∘]; Elevator Deflection (θ_elev): [-15^∘, 15^∘]. [Figure <ref>: Aerobeacon aircraft design configuration from Flighthouse Engineering.] §.§ Modelling Aircraft Flight Dynamics Table <ref> displays the inputs and outputs of all the models as well as the transfer parameters corresponding to the PIML-A model. The transfer parameters are specifically selected such that they have an impact on the output accuracy of the model, and such that they give the embedded ANN layers greater flexibility to affect the final outputs. In both proposed architectures, we try to improve the low-fidelity model by either augmenting or replacing the prop solver. The PIML-A model in particular also tries to alter the outputs of the VLM model to improve accuracy. This model is an extension of the OPTMA architecture discussed in the Introduction. §.§ Low Fidelity Model The lower-fidelity flight dynamics model uses BLOFI, Bechamo LLC's low-fidelity VLM-plus-propeller-effects tool. This tool was initially developed to model the NASA Tiltwing aircraft and was validated against the X-57 data <cit.>.
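A compact way to realize the chain-rule factorization above, in which the partial-physics Jacobian ∂Φ/∂U is composed with the backpropagated transfer-network term ∂U/∂W, is to let an auto-differentiation framework compose the two automatically. The sketch below does this in JAX; the tiny linear transfer map and the quadratic "physics" function are placeholders chosen only to keep the example self-contained and are not part of BLOFI.

import jax
import jax.numpy as jnp

def transfer_net(W, x):
    # Placeholder transfer network: maps inputs x to intermediate
    # physics parameters U using trainable weights W.
    return jnp.tanh(W @ x)

def partial_physics(u):
    # Placeholder auto-differentiable partial-physics model Phi(U).
    return jnp.array([jnp.sum(u ** 2), jnp.prod(jnp.cos(u))])

def piml_prediction(W, x):
    return partial_physics(transfer_net(W, x))

def mse_loss(W, x, y_true):
    return jnp.mean((y_true - piml_prediction(W, x)) ** 2)

# dL/dW is obtained in a single call; JAX composes the partial-physics
# Jacobian dPhi/dU with the backpropagated dU/dW internally.
W = jnp.ones((3, 4)) * 0.1
x = jnp.arange(4.0)
y_true = jnp.array([1.0, 0.5])
grad_W = jax.grad(mse_loss)(W, x, y_true)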
The validation plots can be seen in Fig. <ref>. This was done in order to show that we are able to capture the blown-wing effects. BLOFI was created by using a modified VLM that accounts for some blown-wing effects caused by the propellers upstream of the lifting surfaces. A base VLM <cit.> is implemented and is modified by accounting for induced velocities caused by the propellers' wash upstream of the control points on the lifting surfaces. The properties of each propeller are calculated using Blade Element Momentum Theory <cit.>. It should also be noted that propeller slipstream contraction <cit.> is accounted for when including the induced velocities from the propellers in the VLM setup. A set of flight conditions is created by making a grid of every combination of freestream velocity, sideslip, and angle of attack. At each of these flight conditions the aerodynamic forces and moments, including the propeller thrusts and torques, are calculated using the aforementioned VLM implementation. This is done for every expected RPM setting, propeller tilt angle, and tail deflection. The combination of aerodynamic forces and moments and propeller thrusts and torques is used to determine the overall forces and moments acting on the aircraft. CHARM <cit.> is a commercial tool developed by Continuum Dynamics, Inc. for simulating the aerodynamics and dynamics of rotorcraft using particle methods. Flighthouse Engineering LLC provided their Aerobeacon aircraft design and parameters, and constructed the CHARM model. They ran CHARM test cases for a collection of randomly generated input samples. The data set derived from the CHARM model serves as the high-fidelity data for the research presented in this paper. The VLM configuration is shown in Fig. <ref> and the CHARM higher-fidelity output visualization is shown in Fig. <ref>. §.§ Sparse High-Fidelity Dataset In order to generate the high-fidelity dataset, we used Latin Hypercube Sampling to create a dataset of 100 points across the parameters and bounds listed in Table <ref>. These 100 samples were then input to the CHARM high-fidelity model and the dataset was generated. Out of the 100 initial samples, only 87 were usable, as 11 cases failed to converge in the CHARM model and the remaining 2 cases showed abnormally high values of the force and moment coefficients and were removed from the dataset. While generating the dataset, we fixed the RPMs of the hover quad rotors at 5000 RPM. This final dataset of 87 samples was then randomly divided into a training set containing 70 samples and a validation set containing 17 samples. All the performance results shown in this paper are obtained on this validation dataset, which consists of samples completely unseen by the model. §.§ Prior Analysis of BLOFI Performance on Other Data-Sets BLOFI was developed at Bechamo LLC, and its usefulness relative to other available VLM solvers that do not account for blown-wing effects has been demonstrated previously: it has been validated using CFD data <cit.> generated by NASA for the X-57. Just as a flat-plate model with the appropriate propellers was created for the Aerobeacon in this paper, a flat-plate model with the corresponding propellers was earlier created for the X-57 as part of the validation study on BLOFI. In Figure <ref> we are able to see that BLOFI can capture significant blown-wing effects, by comparing the power-on and power-off plots. The increase in C_L and C_D at every angle of attack can be observed.
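The Latin hypercube design over the bounds of the control-parameter table can be reproduced with standard tooling, for example as in the sketch below. The use of scipy's qmc module and the fixed seed are our own choices for illustration; the paper does not specify which sampling implementation was used.

import numpy as np
from scipy.stats import qmc

# Lower and upper bounds of the seven control/flight parameters:
# speed, angle of attack, starboard RPM, port RPM,
# starboard tilt angle, port tilt angle, elevator deflection.
l_bounds = [0.0, -15.0, 4000.0, 4000.0, 0.0, 0.0, -15.0]
u_bounds = [45.0, 15.0, 10000.0, 10000.0, 110.0, 110.0, 15.0]

sampler = qmc.LatinHypercube(d=7, seed=0)
unit_samples = sampler.random(n=100)                # 100 points in [0, 1]^7
samples = qmc.scale(unit_samples, l_bounds, u_bounds)

# Each row of `samples` is one candidate flight condition to be
# evaluated by the high-fidelity CHARM model.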
More importantly, the inclusion of induced velocities in the VLM solve also lets us observe the increased flap effectiveness. Errors are between 5% and 10% for the range of angles of attack for which this validation study was conducted. It is expected that with a PIML approach, not only can we close down this gap (error), but more readily extend the low/medium fidelity solver such as BLOFI to work on a wider variety of aircraft configurations with little to no manual tuning and manual model development. To support his premise, the next section presents the results of our PIML architectures with the example of Aerobeacon configuration, where we had more control over the high-fidelity sampling – enabling ease of PIML training for this paper. § RESULTS AND DISCUSSION §.§ Baselines For Comparison For testing the performance of the two proposed PIML models, we compared them against the vanilla low-fidelity model(ℒ) shown in fig.<ref> and a pure data-driven ANN (𝒜). Table <ref> lists the parameters of the stand-alone ANN and the ANNs used as part of the PIML frameworks. All the data-driven and PIML models were trained on the same training dataset of 70 high-fidelity samples and validated on a validation dataset consisting of 17 samples. As the training data was sparse, a pure ANN with similar size of the ANNs used for the PIML models tended to overfit very quickly. Therefore, the size of the pure data-driven ANN is lower as compared to the ANNs used in the PIML models. §.§ Modelling Performance Figure.<ref> shows the convergence history for both the PIML models. From the figure we see that PIML-A has a much smoother training history as compared to the PIML-B model. The PIML-B model seems to be sensitive to corrections in induced velocity. Further training with a lower learning rate or decaying the learning rate may resolve this issue and improve performance of the PIML-B model. It may also be the case that the VLM is very sensitive to the induced velocity inputs or that there is an unknown bias in the validation dataset. This could've happened due to the random splitting of data into train and test samples. Further work with varying test and train datasets similar to K-fold cross-validation are necessary to understand if there's any data bias. Figure.<ref> shows the prediction error of the models in all of the 4 outputs. From the figure, we see that the PIML-A model performs the best. The pure data-driven ANN and the PIML-B model exhibit similar performance. Relative to the amount of data available, the ANN performs very well and can prove to be a possible alternative in cases with a larger sample set and no additional requirements on interpretability. he fact that the high-fidelity samples were generated using Latin hypercube sampling, ensuring an even domain coverage, could also be a contributing factor to the ANN's strong performance. Among all the modeled outputs, PIML-A appears to excel in predicting C_D. This could be attributed to the transfer parameters having a greater impact on C_D as compared to the other outputs. Where things begin to get interesting is when we see the effect of the learnt output correction layers in PIML-A as shown in fig.<ref>. From the figure, it is easily seen that the output layers of the VLM model have a very strong negative correction on the C_L that the VLM outputs. As we also see from fig.<ref>, we see that the VLM model has the highest prediction error in C_L. 
Combined, these two plots convey that the transfer parameters do not influence C_L as much as they do the other outputs or that just shifting the inputs is not sufficient to improve the C_L predictions of the VLM model. The correction for other outputs predicted by the PIML model is evenly distributed between positive and negative and this seems to be reasonable given the performance of the vanilla low-fidelity models on all these outputs is about the same. The PIML-B model on the other hand performs on par with the low-fidelity VLM model. This outcome might be explained by an earlier hypothesis suggesting that if the induced velocities indeed have a minor impact on C_L, then regardless of what the correction network learns, it won't be able to enhance the predictions for C_L. However, since the training loss is calculated across all outputs, the overall loss remains substantial, causing the network to struggle in learning the transfer mapping necessary to improve predictions on the other inputs. It may be the case that when different models are trained to learn the various outputs of this modelling problem we might see improved performance by the PIML-B architecture on outputs other than C_L, but as that is not feasible especially once the number of outputs gets large enough to support design / control frameworks, PIML-A model might be better suited for this problem. § CONCLUSION The paper presents two distinct architectures of physics-informed models, referred to as PIML-A and PIML-B. These models integrate Artificial Neural Networks with a low-fidelity BLOFI model with the aim of enhancing the prediction performance of the aerodynamic coefficients of a six-rotor eVTOL aircraft. To facilitate the training of both models, an end-to-end auto-differentiable physics framework was established using Google JAX. The models were then trained using a sparse dataset derived from a high-fidelity CHARM model. When comparing prediction performance, it was found that the PIML-A model achieved a higher degree of accuracy in comparison to the PIML-B model, the basic low-fidelity model, and a baseline purely data-driven model. A comprehensive analysis of the results demonstrated that the PIML-A model's method of simultaneous input-shifting and output-correction yielded superior results, given that one of the inputs might not be highly sensitive to the chosen transfer parameters. To conclusively determine the optimal architecture for this modeling problem, additional testing using cross-validation and hyperparameter optimization is needed. However, the initial results indicate that the PIML-A model holds significant promise. Looking forward, the BLOFI low-fidelity model, when augmented with PIML, could potentially be employed to generalize across diverse aircraft designs and configurations, thereby proving to be a valuable tool in the domains of aircraft design and control. § ACKNOWLEDGEMENTS This material is based upon work funded by Bechamo LLC's (with sub-contract to University at Buffalo) NASA Phase II Small Business Innovation Research (SBIR) Award No. 80NSSC22CA046. The authors would also like to thank Flighthouse Engineering LLC for providing the aircraft design and the high-fidelity data used in this paper. § BLOFI VALIDATION
http://arxiv.org/abs/2307.02525v1
20230705180000
Emergent Global Symmetry from IR N-ality
[ "Anindya Dey" ]
hep-th
[ "hep-th", "math-ph", "math.MP" ]
http://arxiv.org/abs/2307.01838v1
20230704173019
EdgeFace: Efficient Face Recognition Model for Edge Devices
[ "Anjith George", "Christophe Ecabert", "Hatef Otroshi Shahreza", "Ketan Kotwal", "Sebastien Marcel" ]
cs.CV
[ "cs.CV", "cs.CR" ]
EdgeFace: Efficient Face Recognition Model for Edge Devices Anjith George, Christophe Ecabert, Hatef Otroshi Shahreza, Ketan Kotwal, Sébastien Marcel Idiap Research Institute Rue Marconi 19, CH - 1920, Martigny, Switzerland {anjith.george, chrishophe.ecabert, hatef.otroshi, ketan.kotwal, sebastien.marcel}@idiap.ch August 1, 2023 ============================================================================================================================================================================================================================================================================ In this paper, we present EdgeFace, a lightweight and efficient face recognition network inspired by the hybrid architecture of EdgeNeXt. By effectively combining the strengths of both CNN and Transformer models with a low-rank linear layer, EdgeFace achieves excellent face recognition performance optimized for edge devices. The proposed network not only maintains low computational costs and compact storage, but also achieves high face recognition accuracy, making it suitable for deployment on edge devices. Extensive experiments on challenging benchmark face datasets demonstrate the effectiveness and efficiency of EdgeFace in comparison to state-of-the-art lightweight models and deep face recognition models. Our model with 1.77M parameters achieves state-of-the-art results on LFW (99.73%), IJB-B (92.67%), and IJB-C (94.85%), outperforming other efficient models with larger computational complexities. The code to replicate the experiments will be made available publicly [<available-upon-acceptance>]. § INTRODUCTION Face recognition has become an increasingly active research field, achieving significant recognition accuracy by leveraging breakthroughs in various computer vision tasks through the development of deep neural networks <cit.> and margin-based loss functions <cit.>. In spite of remarkable improvements in recognition accuracy, state-of-the-art face recognition models typically involve a deep neural network with a high number of parameters (which requires a large memory) and considerable computational complexity. Considering memory and computational requirements, it is challenging to deploy state-of-the-art face recognition models on resource-constrained devices, such as mobile platforms, robots, embedded systems, etc. To address the issue of memory and computational complexity of state-of-the-art deep neural networks, researchers have been focusing on designing lightweight and efficient neural networks for computer vision tasks that can achieve a better trade-off between recognition accuracy, on one side, and required memory and computational resources, on the other side <cit.>. Recently, some works have attempted to utilize lightweight convolutional neural network (CNN) architectures, such as MobileNets <cit.>, ShuffleNet <cit.>, VarGNet <cit.>, and MixNets <cit.>, for face recognition tasks <cit.>, reducing model parameters as well as computational complexity while maintaining high levels of accuracy. However, with the recent emergence of vision transformers (ViTs) <cit.> and their ability to model global interactions between pixels, there is an opportunity to further improve the efficiency and performance of face recognition models by leveraging the capabilities of both CNNs and ViTs. In this paper, we present EdgeFace, a novel lightweight face recognition model inspired by the hybrid architecture of EdgeNeXt <cit.>.
We adapt the EdgeNeXt architecture for face recognition and also introduce a Low Rank Linear (LoRaLin) module to further reduce the computation in linear layers while providing a minimal compromise to the performance of the network. LoRaLin replaces a high-rank matrix in a fully connected layer with two lower-rank matrices, and therefore reduces the number of parameters and required number of multiply adds (MAdds). effectively combines the advantages of both CNNs and ViTs, utilizing a split depth-wise transpose attention (STDA) encoder to process input tensors and encode multi-scale facial features, while maintaining low computational costs and compact storage requirements. Through extensive experimentation on challenging benchmark face datasets, including LFW, CA-LFW, CP-LFW, CFP-FP, AgeDB-30, IJB-B, and IJB-C, we demonstrate the effectiveness and efficiency of in comparison to state-of-the-art lightweight models and deep face recognition models, showing its potential for deployment on resource-constrained edge devices. The main contributions of our work can be summarized as follows: * We propose an efficient lightweight face recognition network, called , based on a hybrid network architecture that leverages CNN and ViT capabilities. We adapt the hybrid network architecture of EdgeNeXt for the face recognition task. To the best of our knowledge, this is the first work that uses a hybrid CNN-transformer for efficient face recognition. * We introduce Low Rank Linear (LoRaLin) module to further reduce the computation in linear layers while providing a minimal compromise to the performance of the network. LoRaLin module replaces a high-rank matrix in a fully connected layer with two lower-rank matrices, and therefore reduces the number of parameters and required computations. * We provide extensive experimental results on various challenging face recognition datasets, demonstrating the superior performance of in comparison to existing lightweight models. Our experiments also highlight the model's robustness under different conditions, such as pose variations, illumination changes, and occlusions. The remainder of this paper is organized as follows. Section <ref> provides a brief overview of related works, discussing the limitations of existing lightweight face recognition models and the potential advantages of hybrid architectures. Section <ref> presents a detailed description of the proposed model and the overall hybrid architecture. Section <ref> outlines the experimental setup, datasets, and evaluation metrics used to assess the performance of , followed by a comprehensive analysis of the results in Section <ref>. Finally, Section <ref> concludes the paper and outlines potential future directions for this research. § RELATED WORK Over the past decade, face recognition has been regarded as one of the most prominent and widely deployed applications of deep learning. However, as the handheld mobile devices and edge computing became prevalent, the researchers directed efforts towards developing lightweight face recognition models without compromising their accuracy. During initial phase of developing efficient models, Wu et al. proposed LightCNN <cit.>, a light-weight architecture with three different configurations: 4-layer, 9-layer, and 29-layer- as the number of parameters varied from 4M to 12M. The 29-layer configuration of LightCNN achieved 99.33% accuracy on the (unrestricted protocol of) LFW dataset <cit.>. 
With the introduction of MobileNets <cit.>, the use of depth-wise separable convolutions became a major factor for further improving the aspects of model parameters and FLOPs. MobileFaceNets are a family of efficient CNN models, based on MobileNet architecture, designed for real-time face verification tasks <cit.>. It achieved 99.55% accuracy on LFW while using less than 1M parameters. The Efficient Lightweight Attention Networks (ELANet) consist of inverted residual blocks (similar to MobileNetV2), and additionally, employ concurrent channel- and spatial-level attention mechanisms <cit.>. The ELANets have nearly 1M parameters, and achieve state-of-the-art performance across multiple datasets. The vanilla depth-wise convolutions are extended by incorporating multiple kernel sizes in a single convolution <cit.>. This concept, known as MixConv, was used to develop MixFaceNet networks for lightweight face recognition <cit.>. The XS configuration of MixFaceNet has been reported to exhibit high recognition performance with as low as 1M parameters. The ShiftNet, proposed in <cit.>, is a family of CNNs with a “shift” block that is a FLOP-free alternative to expensive convolution operation. Their face recognition model with 0.78M parameters, called ShiftFaceNet, achieves a comparable performance to that of FaceNet in terms of recognition accuracy. Duong et al. considered faster downsampling of spatial data/ feature maps and bottleneck residual blocks towards developing lightweight face recognition models <cit.>. Their MobiFace and Flipped-MobiFace models provide more than 99.70% accurate results on LFW dataset. Inspired from the ShuffleNetV2 <cit.>, the family of lightweight models, referred to as ShuffleFaceNet, for face recognition was proposed in <cit.>. The number of parameters in these models vary from 0.5M to 4.5M while verification accuracies of higher than 99.20% have been reported for LFW dataset. Another family of lightweight architectures, ConvFaceNeXt <cit.> uses enhanced version of ConvNeXt blocks and different downsampling strategies to reduce the number of parameters as well as FLOPs. With about 1M parameters and nearly 400M FLOPs, ConvFaceNeXt networks achieve a comparable performance in face recognition. In <cit.>, neural architecture search (NAS) was used to automatically design an efficient network – PocketNet, for face recognition. The PocketNet architecture was learnt using differential architecture search (DARTS) algorithm on CASIA-WebFace dataset <cit.>. The training of this network also comprises a multi-step knowledge distillation (KD). Another approach involving KD for training a face recognition network was employed in <cit.>. Their model uses variable group convolutions to handle the unbalance of computational intensity. The corresponding model, called VarGFaceNet, was the winner of Lightweight Face Recognition (LFR) challenge at ICCV 2019 <cit.>. Recently, Alansari et al. proposed GhostFaceNets (multiple configurations) that exploit redundancy in convolutional layers to create compact networks <cit.>. In these modules, a certain fixed percentage of the convolutional feature maps are generated using depthwise convolutions that are computationally inexpensive. With configurable hyperparameters, GhostFaceNets can be designed to contain as low as 61M FLOPs with nominal reduction in their recognition performance. 
Vision Transformer (ViT) architectures <cit.> have achieved excellent results for various recognition tasks, but their high computational costs have restricted the usage of vision transformers in a low resource environment. To address this shortcoming, Chen et al. combined local processing in CNNs (such as MobileNet) and global interaction in transformers to design a new architecture, Mobile-Former <cit.>. Mehta and Rastegari introduced MobileViT architecture, based on local-global image context fusion, to build a lightweight and low latency network for general vision tasks <cit.>. While both aforementioned architectures attempt to leverage benefits of CNNs and transformers for vision-classification tasks, the computational complexity of their MHA (multi-head attention) blocks still remains a bottleneck for the inference time on edge devices. § PROPOSED ARCHITECTURE In this section, we describe the detailed architecture of the FR model. While most of the works on efficient face recognition networks focus on variants of CNNs, they have two primary constraints due to their convolution operations. Firstly, they possess a local receptive field, making it challenging for them to represent global context. Secondly, the weights learned by CNNs remain static during inference, limiting their adaptability to different input content. Transformers and CNN-Transformer hybrids attempt to address these limitations, despite their higher computational cost. The is inspired from the CNN-Transformer hybrid architecture of the EdgeNeXt model introduced in <cit.>. We adapt this model to make it suitable for the face recognition task, by adding a representation head for embeddings, and introducing low rank linear layers for reducing the parameters and FLOPs. First, we detail the architecture of the EdgeNeXt model designed for image classification, followed by our new additions to make it an efficient face recognition network. §.§ EdgeNeXt Architecture The EdgeNeXt Architecture <cit.> is a lightweight hybrid design that combines the merits of Transformers <cit.> and Convolutional Neural Networks (CNNs) for low-powered edge devices. EdgeNeXt models with a smaller number of parameters, model size and multiply-adds (MAdds) and outperforms models such as MobileViT <cit.> and EdgeFormer <cit.> in image recognition performance. The EdgeNeXt model builds on ConvNeXt <cit.> and introduces a new component known as the Split Depth-wise Transpose Attention (STDA) encoder. This encoder works by dividing input tensors into several channel groups. It then uses depth-wise convolution in conjunction with self-attention mechanisms across the channel dimensions. By doing so, the STDA encoder naturally enlarges the receptive field and effectively encodes features at multiple scales. The extensive requirements of the transformer self-attention layer make it impractical for vision tasks on edge devices, primarily due to its high MAdds and latency. To address this issue in SDTA encoder, they utilize transposed query and key attention feature maps <cit.> . This approach enables linear complexity by performing the dot-product operation of the Multi-Head Self-Attention (MSA) across channel dimensions, instead of spatial dimensions. As a result, cross-covariance across channels can be computed and create attention feature maps that inherently contain global representations. 
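The linear-complexity attention described above computes the attention map across channel rather than spatial dimensions. The following PyTorch sketch shows a simplified version of such transposed (cross-covariance) attention; it omits details of the original EdgeNeXt implementation, such as the learnable temperature and the surrounding convolutional mixing, and should be read as an illustration of the idea rather than the exact module.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Simplified transposed (cross-covariance) self-attention.

    The N x N spatial attention map of standard MSA is replaced by a
    (C/h) x (C/h) map per head, so the cost grows linearly with the
    number of tokens N instead of quadratically.
    """
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (B, N, C)
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 4, 1)    # each: (B, heads, C/h, N)
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)   # (B, heads, C/h, C/h)
        out = (attn @ v).reshape(B, C, N).transpose(1, 2)  # back to (B, N, C)
        return self.proj(out)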
They also introduce adaptive kernel sizes to capture more global information, using smaller kernel sizes in the initial layers followed by larger kernels in the later stages of the convolutional encoder. These models come in various sizes, offering flexibility based on specific requirements; they include the extra-extra-small, extra-small, and small variants. More details about the architecture can be found in <cit.>. §.§.§ Low Rank Linear Module (LoRaLin) Despite the considerable optimization offered by the EdgeNeXt architecture, it is observed that a significant portion of both the computational and parameter overhead originates from the linear layers. To attenuate these parameter demands, we propose the incorporation of a Low Rank Linear Module (LoRaLin). This module effectively reduces computational requirements with minimal compromise to overall performance. Hu et al. <cit.> proposed an approach termed Low-Rank Adaptation (LoRA) for reducing the number of trainable parameters in large language models during fine-tuning. The LoRA tuning method keeps the weights of the pretrained model unchanged while introducing trainable rank-decomposition matrices into every layer of the Transformer architecture. This technique draws inspiration from the concept of `low intrinsic dimension' observed when adapting a pretrained model to a specific task <cit.>. The number of newly introduced parameters is considerably smaller, even though the original full-rank matrices still need to be used at inference time. However, our aim is to reduce the parameter count of the model itself, while accepting a trade-off in terms of model capacity. To accomplish this, we adopt a strategy of factorizing each fully connected layer into two low-rank matrices. Consider a fully connected layer in the network that maps an input of size M to an output of size N: Y = W_N × M X + b, where the weight matrix W has dimensions N × M. This matrix can be represented as the product of two low-rank matrices as follows: W_N × M = W_N × r W_r × M, where W_N × r and W_r × M are low-rank matrices with rank r. The original linear layer can then be implemented as Y = W_N × r ( W_r × M X ) + b. Implemented as two linear layers of lower rank, this reduces both the number of parameters and the number of multiply-adds (MAdds), as shown in Fig. <ref>. In this context, the rank of each module is determined by a hyperparameter known as the rank-ratio (γ), which sets the rank as a fraction of the full rank. A minimum value of two is employed as the lower limit for the rank in our implementation: rank = max(2, γ·min(M, N)). By varying the value of γ, both the number of parameters and the FLOPs change. For instance, in the case of the “edgenext-extra-small (XS)” network, Fig. <ref> illustrates the reduction in the number of parameters and FLOPs with lower values of γ. The dotted line represents the values associated with the original linear layer. Notably, for γ≤ 0.8, both parameter count and computational efficiency improve compared to the base model. §.§.§ Face Recognition Model The primary focus of this work is to design an efficient network tailored for face recognition on edge devices. Towards this goal, we extend the EdgeNeXt <cit.> architecture for face recognition.
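Returning briefly to the LoRaLin module defined above, a minimal PyTorch sketch following the rank rule is given below. It is our own illustrative implementation (including the module and argument names), not the released code of the paper.

import torch.nn as nn

class LoRaLin(nn.Module):
    """Low-rank replacement for nn.Linear(in_features, out_features).

    The full-rank weight is factorized into two matrices of rank
    r = max(2, gamma * min(in_features, out_features)), reducing both
    parameters and multiply-adds whenever r is small enough.
    """
    def __init__(self, in_features, out_features, gamma=0.6, bias=True):
        super().__init__()
        rank = max(2, int(gamma * min(in_features, out_features)))
        self.fc1 = nn.Linear(in_features, rank, bias=False)
        self.fc2 = nn.Linear(rank, out_features, bias=bias)

    def forward(self, x):
        return self.fc2(self.fc1(x))

As a rough design note, many linear layers in EdgeNeXt-style blocks are expansion layers (e.g., M → 4M); for such a layer the factorized form has about γM(M + 4M) = 5γM² weights versus 4M² for the full layer, which is consistent with the observation above that savings appear for γ below roughly 0.8.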
First, we reduce the parameters and FLOPs of the model further by replacing the linear layers in the EdgeNeXt network with the newly introduced low-rank LoRaLin layers. In addition, we add a classification head composed of adaptive average pooling and layer norms, followed by a LoRaLin layer outputting a 512-dimensional representation. The input resolution required for the model is adjusted to 112 × 112. To optimally train this adapted model for face recognition, we employ end-to-end training in conjunction with a CosFace <cit.> classification head. Figure <ref> provides a schematic representation of the updated face recognition model. §.§ Training details The dataset used for training the face recognition models consists of selected subsets of the WebFace260M dataset <cit.>, specifically the WebFace 12M and WebFace 4M subsets. These subsets are characterized by an abundance of pre-aligned face images, each with a resolution of 112 × 112. The initial preprocessing step entails the conversion of these images into tensors, followed by normalization to the -1 to 1 range. We further enhance the data variability through a series of augmentations, including random grayscale conversion, resizing, and blurring. These augmentations are implemented using the DALI <cit.> library. The models were trained on 4 or 8 Nvidia RTX 3090 (24 GB) GPUs using a distributed training strategy. We trained our models in PyTorch with the AdamW optimizer <cit.> and the CosFace <cit.> loss function, using a polynomial-decay learning rate schedule with restarts to achieve the best performance. The batch size on a single GPU varied from 256 to 512 depending on the size of the model. The embedding size during training is kept at 512. We used the distributed PartialFC algorithm <cit.> for faster training and to handle memory issues when dealing with a large number of identities. During inference, the classification head is removed and the resulting 512-D embedding is used for the comparisons. The training settings and hyperparameters for the different models were selected for optimal performance. § EXPERIMENTS §.§ Databases We evaluated the proposed model using seven distinct benchmarking datasets. The datasets selected for assessment include Labeled Faces in the Wild (LFW) <cit.>, Cross-Age LFW (CA-LFW) <cit.>, Cross-Pose LFW (CP-LFW) <cit.>, Celebrities in Frontal-Profile in the Wild (CFP-FP) <cit.>, AgeDB-30 <cit.>, IARPA Janus Benchmark-B (IJB-B) <cit.>, and IARPA Janus Benchmark-C (IJB-C) <cit.>. To maintain consistency with prior works, we report accuracy values for the high-resolution datasets LFW, CA-LFW, CP-LFW, CFP-FP, and AgeDB-30. For the IJB-B and IJB-C datasets, we report the True Accept Rate (TAR) at a False Accept Rate (FAR) of 1e-4. §.§ Comparison with SOTA Table <ref> compares our method with SOTA lightweight face recognition models from the literature on different benchmarking datasets. We categorize the models in the literature based on the number of parameters into those with 2-5M parameters and those with <2M parameters. In each category, we also include a representative version of EdgeFace. In the category of models with 2-5M parameters, our representative model is EdgeFace-S (γ = 0.5), and in the second category (<2M parameters) we consider EdgeFace-XS (γ = 0.6) as our representative model. As the results in this table show, our models achieve performance competitive with SOTA lightweight models in the literature.
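The verification metric reported above for IJB-B and IJB-C (TAR at FAR = 1e-4) can be computed from genuine and impostor cosine-similarity scores between the 512-D embeddings, as in the sketch below. This is a generic illustration of the evaluation and not the official IJB protocol implementation; the function and variable names are our own.

import numpy as np

def cosine_scores(emb1, emb2):
    # Cosine similarity between pairs of 512-D embeddings (row-wise).
    emb1 = emb1 / np.linalg.norm(emb1, axis=1, keepdims=True)
    emb2 = emb2 / np.linalg.norm(emb2, axis=1, keepdims=True)
    return np.sum(emb1 * emb2, axis=1)

def tar_at_far(genuine_scores, impostor_scores, far=1e-4):
    """True Accept Rate at a fixed False Accept Rate.

    The decision threshold is set to the (1 - far) quantile of the
    impostor scores; TAR is then the fraction of genuine scores
    at or above that threshold.
    """
    threshold = np.quantile(impostor_scores, 1.0 - far)
    return float(np.mean(genuine_scores >= threshold))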
For CA-LFW, CP-LFW, IJB-B, and IJB-C datasets, our EdgeFace-S (γ = 0.5) model achieves the best recognition accuracy compared to SOTA models 2-5 M parameters. It is noteworthy our EdgeFace-S (γ = 0.5) model is also the most efficient model in terms of FLOPs among the SOTA lightweight models with 2-5 M parameters. For the second category, our EdgeFace-XS (γ = 0.6) model achieves the best recognition performance for LFW, CP-LFW, IJB-B, and IJB-C datasets. Compared to other models in the same category, our model is the second most efficient model in terms of FLOPs. In this category, we observe that ShuffleFaceNet 0.5x has fewer FLOPs, but it also has the poorest recognition performance in all datasets. The superior performance of our models in terms of FLOPS to performance can be observed in Fig. <ref>, Fig. <ref>, and Fig. <ref>. §.§ Ablation study In order to evaluate the effectiveness of the LoRaLin layers, we conducted a series of experiments using the EdgeFace-XS model. These experiments involved varying the value of γ from 0.2 to 1, with increments of 0.2. All models were trained using the same configuration for 50 epochs. As a point of reference, we also compared these models with the default EdgeFace-XS model, which does not include the LoRaLin layer. Figure <ref> illustrates the changes in model parameters and FLOPs as the value of γ varies. It is observed that the parameters and FLOPs remain consistent with the EdgeFace-XS model when γ is approximately 0.8. For values of γ below 0.8, there is a reduction in model parameters, FLOPs, and size. To assess the performance of these models, we benchmarked them using standard benchmarks. The results are presented in Table <ref>, which displays the performance across these benchmarks. The performance deteriorates as the value of γ decreases (Fig. <ref>). However, it is notable that the performance remains satisfactory up to γ = 0.6, beyond which it starts to decline more sharply. Figure <ref> demonstrates the performance changes of the models on the IJB-B and IJB-C datasets. In both cases, the proposed method achieves good performance up to γ = 0.6. Additionally, Table <ref> provides the percentage points of performance degradation corresponding to the changes in model parameters and FLOPs for the IJB-C and IJB-B datasets. It can be seen that we can obtain around 20% savings in parameters and FLOPS with less than 0.5% drop in accuracy. The results presented in Table <ref> highlight that our approach achieves a significant improvement in parameter and FLOP efficiency while maintaining a minimal reduction in performance. This demonstrates the effectiveness of our approach in achieving a favorable trade-off between efficiency and performance. § DISCUSSIONS Our experiments in Section <ref> show that our model is very efficient and also achieves competitive recognition accuracy compared to SOTA lightweight models. Among seven benchmarking datasets used in our evaluation, achieves the best recognition performance for four different datasets in each of the categories of models with 2-5 M parameters and <2M parameters. Achieving such a high recognition accuracy is more particularly impressive considering the computation of different models in terms of FLOPs in Table <ref>, where we observe that is the most efficient model in the first category (2-5 M parameters) and the second most efficient model in the second category (<2M parameters). 
Among our different benchmarking datasets, five datasets (i.e., LFW, CA-LFW, CP-LFW, CFP-LFW, and AgeDB-30) have higher-quality face images. The results in Table <ref> show that our model achieves competitive performance with SOTA models on these benchmarking datasets. In contrast, IARPA Janus Benchmark datasets (i.e., IJB-B and IJB-C) include images with different qualities (including low-quality images) and are among the most challenging face recognition benchmarking datasets. According to the results in Table <ref>, outperforms all previous lightweight models in both categories of models with 2-5 M parameters and <2M parameters on these two datasets, which shows the superiority of our model for different quality of images. § CONCLUSIONS In this paper, we introduced , a highly efficient face recognition model that combines the strengths of CNN and Transformers. By leveraging efficient hybrid architecture and LoRaLin layers, the model achieves remarkable performance while maintaining low computational complexity. Our extensive experimental evaluations on various face recognition benchmarks, including LFW, AgeDB-30, CFP-FP, IJB-B, and IJB-C, demonstrate the effectiveness of . Our hybrid design strategy incorporates convolution and efficient self-attention-based encoders, providing an ideal balance between local and global information processing. This enables to achieve superior performance compared to state-of-the-art methods while maintaining low parameters and MAdds. In summary, offers an efficient and highly accurate face recognition model tailored for edge devices. Knowledge distillation strategies can further enhance the model's performance, while exploring different quantization methods holds potential for improving storage and inference, which can be pursued in future research. § ACKNOWLEDGEMENTS This research is partly based upon work supported by the H2020 TReSPAsS-ETN Marie Skłodowska-Curie early training network (grant agreement 860813). This research is also based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via [2022-21102100007]. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. ieee
http://arxiv.org/abs/2307.03041v1
20230706150604
Superfluid dark matter flow around cosmic strings
[ "Heliudson Bernardo", "Robert Brandenberger", "Aline Favero" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc", "hep-ph", "hep-th" ]
[ [ August 1, 2023 ================== We consider a cosmic string moving through a gas of superfluid dark matter (SFDM) particles and analyze how it affects the dark matter distribution. We look at two different cases: first, a cosmic string passing through an already condensed region, and second, through a region that is not yet condensed. In the former, the string induces a weak shock in the superfluid, and the Bose-Einstein condensate (BEC) survives. In the latter, a wake of larger density is formed behind the string, and we study under which conditions a BEC can be formed in the virialized region of the wake. By requiring the thermalization of the DM particles and the overlap of their de Broglie wavelengths inside the wake, we obtain an upper bound on the mass of the dark matter particles on the order of 10 eV, which is compatible with typical SFDM models. § INTRODUCTION The fundamental nature of dark matter (DM) remains a significant open problem in cosmology. Although the ΛCDM model successfully describes it, on large scales, as a fluid of collisionless particles and vanishing sound speed, small DM interactions with itself or other species would affect its small-scale distribution and behavior. In fact, there is a striking correlation between the acceleration in galaxies and the total baryonic mass within them <cit.>. These and other correlations <cit.> challenge the cold DM picture on galactic scales <cit.> but can be accounted for by the superfluid dark matter approach <cit.>. Superfluid dark matter models are essentially based on the formation of a Bose-Einstein condensate (BEC) of DM particles on galactic scales and its associated phonon excitations <cit.> (see also <cit.> for a more complete list of references). The phonons might mediate a long-range interaction between the baryons, which reproduce the modified Newtonian dynamics (MOND) of the baryonic acceleration on such scales <cit.>. Given their typical speeds in galaxies, the (supposedly bosonic) DM particles should have a mass of the order of eV or less for the BEC to be formed <cit.>. Moreover, thermal effects should be included so that the condensed region is a genuine superfluid with inviscid and normal flows, and the density profile of finite temperature superfluid cores within galaxies can be computed for different superfluid equations of state <cit.>. Finite temperature superfluids are described by the two-fluid model historically initiated by London and Tisza <cit.> but independently established by Landau <cit.> (see also <cit.> for historical notes). In this model, a superfluid has two associated flows, an inviscid and a normal one, in which entropy and temperature can only be transported by the latter. One commonly talks about a “two-component” fluid where the superfluid component is a BEC, and the normal component is composed of quasiparticles in thermal equilibrium <cit.>. The relative energy density in these components depends on the temperature, and only the normal component is present for temperatures greater than the BEC critical temperature. The Landau two-fluid model predicted two kinds of sounds in superfluids: the usual adiabatic one, where perturbations in the energy density and pressure are propagated with sound speed c_1, and another one associated with the propagation of entropy and temperature fluctuations with sound speed c_2. Explaining these sound speeds for Helium II is one of the great achievements of the Landau two-fluid model. 
Superfluid DM would also exhibit wave-like interference patterns on large scales <cit.> and has also been proposed as an explanation for the origin of cosmic filament spin <cit.>. The fundamental differences in the physics of cold and superfluid DM translate to new prospects for observations. In the present work, we take a step toward understanding how superfluid DM modifies cosmic string signatures <cit.>. It is well-known how the motion of a long cosmic string through a gas of DM particles affects its distribution. For collisionless DM, a wake is formed downstream of the flow, with the cosmic string at its apex, within which the density is twice the initial DM density <cit.>. These cosmic string wakes affect the accretion of baryonic matter in a statistically different way compared to the usual DM accretion, leading to distinct signatures <cit.>. In this paper, we investigate how this picture changes if the DM particles are light bosons that can undergo a phase transition and generate a BEC. We study two situations where the superfluid phase of DM particles can generate differences in the flow around moving cosmic strings compared to cold DM: firstly, the moving cosmic string can pass by an already condensed region; and secondly, the wake formation can happen at redshifts such that the DM density inside the wake is greater than a critical density necessary for a BEC. In the latter case, the DM particles might condense inside the virialized region in the wake; in the former, the cosmic string wake will generically contain a shock since cosmic string speeds are relativistic and, as such, supersonic relative to the superfluid. After some reasonable physical assumptions, this paper provides first-principle analytic computations of superfluid DM flows around moving cosmic strings. For generality, and since cosmic strings move at relativistic speeds, we consider the relativistic effective field theory approach to superfluids. Relevant aspects of this description are reviewed in the next section. In section <ref>, we solve the Taub-Rankine-Hugoniot junction equations for linearized and strong shocks in the superfluid. In section <ref>, we estimate the redshift where BEC can be formed in the cosmic string wake as a function of the DM particle mass. We discuss applications and prospects for future directions in section <ref>. § FIELD THEORY APPROACH TO SUPERFLUIDS There are different but equivalent formalisms generalizing Landau's two-fluid model to the relativistic case <cit.> (see also <cit.> and references therein). To set some notation and make the discussion self-contained, in this section, we review some of the results on the effective field theory (EFT) description of superfluids as presented in <cit.>, which we will follow closely. For simplification, we will assume that the normal component is dissipationless. From the EFT perspective, below the superfluid's critical temperature, T_c, the superfluid component is a BEC described by a state that spontaneously breaks a global U(1) symmetry. This U(1) symmetry is associated with the conservation of particle number in the fluid. By the Goldstone theorem, there is a gapless excitation ψ which non-linearly realizes the U(1) symmetry as ψ→ψ +a, a = const. The most general action that is Poincaré invariant and compatible with this symmetry has the form S = ∫ d^4x F(X), with X = ∂_μψ∂^μψ, and the U(1) current is given by j^μ (x) = 2F'(X)∂^μψ. 
A homogeneous and isotropic BEC state (or superfluid phase at T=0) can be described by ψ= μ t, since this implies a state of uniform charge density and vanishing spatial current, j^μ = -2μ F'(-μ^2) (1,0,0,0). The equation of motion for ψ is equivalent to the conservation of particle number, ∂_μ j^μ = 0. Note that if we define a four-velocity satisfying u^μ∝∂^μψ, the system is equivalent to a fluid with an irrotational flow. Now, at finite temperature, there will be excitations in the fluid/gas, and not all particles will be condensed in the ground state: some particles will, instead, occupy excited states. At finite T, these perturbations will reach thermal equilibrium, and if they are in a regime where the mean free time and path of a phonon are much smaller than the spacetime volume occupied by the fluid, this thermal bath of phonons will be described by usual hydrodynamics. However, as T approaches T_c, the symmetry is restored, the BEC is gone and only the normal component remains. To describe the normal fluid component in field theory, we use embedding coordinates ϕ^i of the fluid. They map spacetime points to positions of the fluid elements, x^μ→ϕ^i(x,t), i=1,2, 3 <cit.>. At a fixed time, these maps should be invertible (∂_i ϕ^i≠ 0), and, if the fluid is incompressible, also volume preserving (∂_j ϕ^i =1). For dissipationless fluids, given Poincaré invariance and the homogeneity and isotropy of the fluid's internal space, the ϕ^i should enter the action through the combination <cit.> J^μ = 1/6ϵ^μαβγϵ_ijk∂_αϕ^i ∂_βϕ^j ∂_γϕ^k. With ∂^μψ and J^μ, we can construct three scalar quantities <cit.>, X = ∂_μψ∂^μψ, b = √(-J^μ J_μ), and y = -1/b J^μ∂_μψ. The vector u^μ =- J^μ/b is actually the four-velocity of the normal fluid component, since it is normalized to -1 and the fluid's comoving coordinates do not change along its integral curves, u^μ∂_μϕ^i = 0. In summary, the low-energy Lagrangian density describing superfluidity will have the form ℒ = F(b, X, y). All the infrared dynamics of finite-temperature relativistic superfluids are encoded in that Lagrangian. It can be shown that one can recover two sound speeds for perturbations around an equilibrium solution for ψ and ϕ^i, which are the relativistic versions of Landau's two sounds <cit.>. To recover the superfluid's hydrodynamics, we first write the superfluid action for an arbitrary metric S = ∫ d^4x √(-g) F(b,X,y) and then compute its associated energy-momentum tensor T_μν = 2XF_X ũ_μũ_ν + (yF_y -bF_b)u_μ u_ν + (F-bF_b)g_μν, where ũ_μ = -∂_μψ /√(-X). Note that X, b, and y now depend on the metric. Moreover, the current associated with the U(1) symmetry ψ→ψ + const. is j^μ = -2√(-X)F_Xũ^μ + F_y u^μ, and its conservation physically means the conservation of particle number. Both T_μν and j^μ have contributions from the superfluid and normal components. Now, we can start identifying the fluid variables. In the frame comoving with the normal fluid, we have T_μν u^μ u^ν = yF_y -F -2y^2F_X = ρ and -j^μ u_μ = F_y -2yF_X = n, where, by definition, ρ and n are the energy and number densities that can be measured in that frame. To identify the fluid stresses, we contract the energy-momentum tensor with the projector associated to the spacelike directions perpendicular to u^μ , T^μν(η_μν +u_μ u_ν) = 3(F-bF_b) - 2F_X (X+y^2). So, we identify p = F- bF_b as the pressure, and the term ∝ (X+y^2) as a contribution from the anisotropic stress-tensor (this could also be seen directly from T_μν). 
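These identifications are simple to verify symbolically. The following sketch is only a check of the algebra (not part of the original analysis); it assumes nothing beyond the inner products ũ·ũ = u·u = -1 and ũ·u = -y/√(-X), which follow from the definitions of ũ^μ, u^μ and y above, and it reproduces ρ = yF_y - F - 2y²F_X and n = F_y - 2yF_X.

```python
import sympy as sp

b, X, y = sp.symbols('b X y', real=True)
F = sp.Function('F')(b, X, y)
Fb, FX, Fy = F.diff(b), F.diff(X), F.diff(y)

# Inner products in the frame comoving with the normal component:
# u.u = utilde.utilde = -1 and utilde.u = -y/sqrt(-X) (from the definitions above)
uu, utu = -1, -y / sp.sqrt(-X)

# rho = T_{mu nu} u^mu u^nu, with T = 2X F_X ut ut + (y F_y - b F_b) u u + (F - b F_b) g
rho = 2*X*FX*utu**2 + (y*Fy - b*Fb)*uu**2 + (F - b*Fb)*uu
# n = -j_mu u^mu, with j = -2 sqrt(-X) F_X ut + F_y u
n = -(-2*sp.sqrt(-X)*FX*utu + Fy*uu)

print(sp.simplify(rho - (y*Fy - F - 2*y**2*FX)))   # expected output: 0
print(sp.simplify(n - (Fy - 2*y*FX)))              # expected output: 0
```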
The chemical potential μ, entropy density s and temperature T are identified after assuming the first law of thermodynamics <cit.> (see also <cit.>): μ = y, s = b, and T= -F_b. The entropy current J^μ = s u^μ = b u^μ, is identically conserved (as expected for a non-dissipative fluid). Computing the differential of p, we obtain <cit.> dp = s dT + n dμ + 2F_X ξ dξ, where ξ is the modulus of the spacelike four-vector ξ^μ = (η^μν +u^μ u^ν)∂_νψ. So, the pressure is a function of T, μ, and ξ. This defines the equation of state of the superfluid. Once this function is given, we can construct the Lagrangian as ℒ = F = p + b F_b = p- sT = p- T dp/dT, while expressing the result as a function of b = s = dp/dT, y = μ, and X = ξ^2 -y^2. This maps the thermodynamic and field theory descriptions, and vice-versa. To better understand the physical meaning of ξ^μ, recall first that ũ^μ = - ∂^μψ/ √(-X), and so ũ^μ = y/√(-X)u^μ - 1/√(-X)ξ^μ . Contracting with u^μ gives γ = - ũ^μ u_μ = y/√(-X), where γ is the Lorentz factor for the velocity of one component as measured in the other component's frame. On the other hand, contracting with ũ^μ gives -1 = ũ^μũ_μ = -γ^2 - 1/√(-X)ξ^μũ_μ. In the normal component frame, u^μ = (1,0), ξ^μ = (0, ξ^i), and ũ^μ = γ(1, v^i), where v^i is the superfluid component velocity. Thus, γ^2 -1 = -1/√(-X)γ ξ^i v_iγ^2 v^2 = -γ/√(-X)ξ^i v^i, and we conclude that ξ^i = -y v^i. When there is no relative velocity between the normal and superfluid parts, ξ^μ = 0 and y = √(-X). § SHOCKS IN THE SUPERFLUID FLOW An interesting property of having a fluid described by two velocity fields is that it will generically have anisotropies. This can be seen, for instance, from the spatial components of T_μν above: as long as u^μ≠ũ^μ there will be anisotropic stresses. Moreover, there is a non-vanishing momentum flux density in one of the frames comoving with one of the fields. For instance, in the frame where u^μ = (1,0,0,0), we have T^i0 = 2X F_X γ^2 v^i, where v^i is the relative velocity between the superfluid and normal components as measured in the latter's frame, and γ is its associated Lorentz factor. In the following, we shall redefine the two timelike vector fields ũ^μ and u^μ in order to make the intrinsic superfluid anisotropy manifest. This will also select the frame in which there is no momentum flux, and so in some sense, it might be thought of as the “center of mass” frame for the superfluid. A similar approach was considered in <cit.>, but in the context of a non-interacting two-perfect fluid system, which is physically distinct from a superfluid. We define four-vectors V^μ and W^μ as V^μ = cosα ũ^μ + R sinα u^μ, W^μ = -1/Rsinα ũ^μ + cosα u^μ, where R = √(yF_y - bF_b/2X F_X). These definitions are such that T^μν = 2X F_X V^μ V^ν + (yF_y -bF_b)W^μ W^ν + (F-bF_b)g^μν, for any choice of α. We shall fix α by demanding that V^μ W_μ = 0, V^μ W_μ = 0 tan 2α = 2R/R^2-1 u^μũ_μ = 2R/1-R^2γ. In this way, V^μ is timelike while W^μ is spacelike[Actually, V^μ W_μ=0 alone doesn't fix which vector is timelike. We also need to assume R<1. If this is not so, one needs to redefine R→ 1/R, otherwise W^μ will be timelike. We have assumed, without loss of generality, that 2XF_X > yF_y-bF_b, which is most reasonable at very low-temperatures.], and there is no momentum flux in the frame where V^i =0 because in this case, we should have W^0 = 0, and so T_i0 = 0. Note that the Lorentz factor after the last equality is associated with the relative velocity between ũ^μ and u^μ. 
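The orthogonality condition fixing α can likewise be checked explicitly. The short symbolic sketch below is only a verification of the algebra (it again assumes only ũ·ũ = u·u = -1 and ũ·u = -γ): it confirms that V^μ W_μ vanishes once tan 2α = 2Rγ/(1-R²).

```python
import sympy as sp

alpha, R, gamma = sp.symbols('alpha R gamma', positive=True)

# V = cos(a)*ut + R*sin(a)*u ,  W = -(1/R)*sin(a)*ut + cos(a)*u ,
# expanded with ut.ut = u.u = -1 and ut.u = -gamma:
VW = (sp.cos(alpha)*(-sp.sin(alpha)/R)*(-1)            # (ut.ut) term
      + sp.cos(alpha)**2*(-gamma)                      # (ut.u) term
      + R*sp.sin(alpha)*(-sp.sin(alpha)/R)*(-gamma)    # (u.ut) term
      + R*sp.sin(alpha)*sp.cos(alpha)*(-1))            # (u.u) term

# Condition quoted in the text: tan(2*alpha) = 2*R*gamma/(1 - R^2)
alpha_star = sp.atan2(2*R*gamma, 1 - R**2) / 2

# Spot check with arbitrary admissible values (R < 1, gamma >= 1)
residual = VW.subs(alpha, alpha_star).subs({R: sp.Rational(1, 3), gamma: sp.Rational(3, 2)})
print(residual.evalf())   # ~ 0 to numerical precision
```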
In terms of the normalized vectors U^μ = V^μ/√(-V^ρ V_ρ) and A^μ = W^μ/√(W^ρ W_ρ), we have T_μν = 2X F_X(-V^ρ V_ρ)U_μ U_ν+(yF_y - b F_b)(W^ρ W_ρ) A_μ A_ν + (F-bF_b)g_μν. Computing V^2 and W^2, we find V^μ V_μ = -1/2(1+R^2) - 1/2(1-R^2)/cos 2α, W^μ W_μ = -1/2(1+ 1/R^2)-1/2(1-1/R^2)1/cos 2α, and the (cos 2α)^-1 in these expression is given by 1/cos2α = [(2XF_X + yF_y -bF_b)^2 +4(2XF_X)(yF_y - bF_b)((ũ^μ u_μ)^2-1)]^1/2/|2X F_X - (yF_y - bF_b)|. So, we can write T_μν =(ρ_U+p_U) U_μ U_ν + p_U g_μν + (p_A - p_U) A_μ A_ν, where we have defined ρ_U = -F +1/2(2XF_X + yF_y +bF_b)+ +1/2[(2XF_X + yF_y -bF_b)^2 +4(2XF_X)(yF_y - bF_b)((ũ^μ u_μ)^2-1)]^1/2, p_A = F -1/2(2XF_X +yF_y +bF_b) + + 1/2[(2XF_X + yF_y -bF_b)^2 +4(2XF_X)(yF_y - bF_b)((ũ^μ u_μ)^2-1)]^1/2, p_U = F- bF_b. Note that, in the absence of relative velocity, ũ^μ u_μ = -1, we have p_A= p_U, and the anisotropic term vanishes. Also in this case, R= tanα and V^μ V_μ|_γ=1 = -1/cos^2 α = -(1+R^2), W^μ|_γ=1 = 0. Moreover, the energy density ρ_U and pressure p_U coincide with the ones discussed in section <ref> after setting y = √(-X). Using the same definitions in the expression for the number current, we have j^μ = √(-V^2)(-2√(-X)F_X cosα+ F_y/Rsinα) U^μ + √(W^2)(2R√(-X)F_X sinα + F_y cosα) A^μ = n_U U^μ + j_A A^μ, where n_U = - j_μ U^μ and j_A = j_μ A^μ are the particle number density and flux in the frame comoving with U^μ. We see that, although there is no momentum flux in such a frame, there is still a flux of particles, which vanishes if there is no relative velocity between the components. In this section, we study how a straight cosmic string extended along the z-axis and moving with constant velocity -v_+ ∂_x affects a cylindrical symmetric superfluid configuration, with symmetry axis along the string. Equivalently, we shall analyze the flow in the cosmic string rest frame, where the superfluid as a whole is moving with speed v_+ in the x-axis. In such symmetric configurations, chosen to simplify the analysis, the only anisotropy is parallel to the string, and we take A^μ = (0,0,0,1) in the string comoving frame. Such symmetric configurations also include the homogeneous case ψ(x) = y_0 t, ϕ^i(x) = b_0^1/3x^i y(x) = y_0 =√(-X), b(x) = b_0, ũ^μ = (1,0,0,0) = u^μ, where there is no relative velocity between the components, and there are no anisotropies at all. The resulting flow solution is also a good approximation for the one when the relative velocity is very small. Note that, in the non-relativistic regime, the relative speed between the superfluid and normal velocity fields is much smaller than the string speed. In the cosmic string rest frame, the metric is ds^2 = -dt^2 + dr^2 + dz^2 + (1- 4 G μ)^2 r^2 dϕ^2, where μ is the string tension. Firstly, we shall use coordinates where the cosmic string metric is still Minkowskian, but the new axial angle ϕ̃ = (1-4 G μ) ϕ has a deficit proportional to the string tension: x = r cos[(1-4 G μ)ϕ + 4 π G μ], y = rsin[(1-4G μ)ϕ+ 4π G μ], 0≤ϕ̃<(1-4G μ)2π. In these coordinates, we have ds^2 = -dt^2 + dx^2 + dy^2 + dz^2, and the wedge -ϵ x≤ y ≤ϵ x, ϵ = tan(4π G μ), is left uncovered. The line segments y_± = ±ϵ x, for x>0, correspond to ϕ =0 and ϕ =2π, and so should be identified. In other words, a total wedge angle of 8π G μ is missing in the conical spacetime transverse to a cosmic string. Note that the shift 4π G μ inside the argument of the trigonometric functions in the definition of the Cartesian coordinates only sets the position of the wedge, which in this case is along the positive x axis. 
When the string passes by a gas of collisionless DM particles with a spatial axial distribution along the string, individual particles in the y>0 (y<0) plane receive an impulse towards the negative (positive) y axis. This produces a wake with a total aperture angle 8π G μγ_+ (for small G μ and order-one Lorentz gamma factors), within which the mass density is initially twice the density outside the wake <cit.>. However, for a fluid with finite sound speed c_s, the flow around the cosmic string is more involved and strongly depends on the ratio c_s/v between the sound and string speeds. In fact, for supersonic string speeds, the flow exhibits a shock, i.e. a surface of discontinuity in the fluid variables downstream of the flow past the string. This is similar to the shocks in the flow of baryonic matter, as studied in <cit.>, where shells of baryonic matter collide due to the relative velocity induced by the string. Note, however, that such shocks differ from the later ones that originate from gravitational instabilities inside the wake <cit.>. For superfluids at finite temperatures, there are always two intrinsic sound speeds, but we shall see that only the adiabatic sound speed is relevant for the presence of shocks. Since cosmic string speeds are relativistic, we shall use the formalism of relativistic shocks to describe the jump in energy density, velocity, and pressure across the shock <cit.> (see also <cit.>). Cosmic string induced shocks were first studied in <cit.>, although in the non-relativistic setting. A relativistic analysis for perfect fluids with a polytropic equation of state was carried out in <cit.>, for the strong shock case. In the following, we shall adopt the more general results of <cit.>, where shocks in relativistic perfect fluids were studied and, for linearized shocks, no equations of state were assumed. The fundamental equations for relativistic shocks are obtained from integrating the local conservation equations ∇^μ T_μν = 0, ∇_μ j^μ =0 across the shock front. This results into [j^μ] n_μ =0, [T^μν] n_ν =0, where [A] = A_+ - A_- denotes the difference between the value of a variable A in front of and behind the shock, and n^μ is the unit vector normal to the discontinuity surface. These are the relativistic (and covariant) versions of the Rankine-Hugoniot junction condition equations <cit.>. The stationary physical configuration considered is depicted in Figure <ref>. For the cosmic string solution above, the shock front is perpendicular to the xy-plane and has a normal vector of the form n^μ = (0, -sinΦ, cosΦ, 0), where the half-angle of the shock Φ is to be determined. For the symmetric superfluid configurations discussed before we have X^μn_μ = 0, and the Taub-Rankine-Hugoniot equations give [(ρ_U+p_U) U^μ U^ν + p_U η^μν] n_ν = 0, [n_U U^μ] n_μ =0. We shall drop the sub-indices U in the fluid variables for the rest of this section. In the stationary case, the velocity field U^i (in Cartesian coordinates) is parallel to the x-axis before the shock front and to the wedge after the shock front, U^μ_+ = (γ_+, U_+, 0, 0), U^μ_- = (γ_-, U_-, ϵ U_-,0). Note that, in the notation above, U = γ v, where v is the three-speed and γ = √(1+U^2). Hence, 0≤ U<∞. In the fluid's rest frame, the cosmic string speed is v_+. Equation (<ref>) thus gives n_- U_-(α-ϵ)= n_+ U_+ α, where α = tanΦ. 
Meanwhile, the μ =0, 1, 2 components of (<ref>) give, respectively, (ρ_- + p_-)(α-ϵ)γ_- U_- =(ρ_+ + p_+)αγ_+ U_+, (ρ_- + p_-)(α-ϵ)U^2_- + α p_- =α(ρ_+ + p_+)U^2_+ + α p_+, p_- + (ρ_- +p_-)ϵ (ϵ -α)U^2_- = p_+ . The last three equations have four unknowns, α, U_-, ρ_-, and p_-, and so we need an equation of state behind the shock to solve for them exactly. Having found U_- and α, we might solve (<ref>) for n_-. However, for weak shocks, in which the change in the fluid variables across the shock is small, we can relate energy density and pressure perturbations using the fluid's adiabatic sound speed. In that case, we can find the variation in the fluid variables for an arbitrary equation of state. Such solutions should exist, for a fixed U_+, provided the deficit angle is small enough and the change in the fluid variables are of order G μ. To find the weak shock solution, we write U_- = U_+ + δ U, ρ_- = ρ_+ +δρ, p_- = p_+ +δ p, and make the approximation δ p ≈ c_s^2 δρ, where c_s^2 = (∂ p/∂ρ)_s is the fluid's adiabatic sound speed. Now, we need to solve equations (<ref>) to first order in G μ. Firstly, we use equations (<ref>) and (<ref>) to write p_- in terms of known quantities and α, p_- =p_+ + ϵα/1+ϵα(ρ_+ + p_+)U^2_+. Then we plug this result into (<ref>) to find U_-^2(ρ_- +p_-)(α-ϵ) = α/1+ϵα(ρ_+ + p_+)U^2_+, and inserting this into (<ref>) yields, after some algebraic manipulations, U_-^2 = U_+^2/(1+ϵα)^2(1+U_+^2)-(1+ϵ^2)U_+^2. Perturbing (<ref>) and (<ref>) we find, to first order in ϵ, δ U ≈ - ϵα U_+(1+U_+^2), δ p ≈ϵα (ρ_+ + p_+)U_+^2. Using these results and one of the original equations, we can find α in terms of c_s and known quantities. For instance, perturbing (<ref>) gives α[(δρ + δ p)U^2_+ + 2(ρ_+ +p_+)U_+ δ U + δ p] - ϵ (ρ_+ + p_+)U_+^2 =0, to first order in ϵ. Inserting the expressions for δ U and δ p into this equation yields tanΦ = α≈U_s/√(U^2_+ - U_s^2), where U_s = γ_s c_s = c_s/√(1-c_s^2). Note that for U_+ ∼ U_s, Φ≃π/2, which matches the high-speed limit of subsonic flow <cit.>. However, for string speeds much higher than the sound speed, α decreases and the shock becomes a thin wedge in the wake of the string through the fluid. In summary, to first order in the deficit angle, the linearized shock solution to the relativistic Taub-Rankine-Hugoniot equations is <cit.> δ U ≈ - ϵα U_+(1+U_+^2), δ p ≈ϵα (ρ_+ + p_+)U_+^2, δρ ≈ϵα (ρ_+ + p_+)(1+U_s^2)U_+^2/U_s^2, α ≈U_s/√(U^2_+ - U_s^2). The linearized solution above cannot be trusted for string speeds very close to the speed of light, or for very non-relativistic sound speeds: in those cases, the perturbations are not so small compared with the values outside the shock. For instance, the condition δρ < (ρ_+ + p_+) gives ϵU_s/√(U_+^2 -U_s^2)(1+U_s^2)U_+^2/U_s^2<1, which can be violated as U_+/U_s →∞ for a fixed ϵ. In this limit, the shock is very strong and α becomes very small, of the order of ϵ. Let us assume that this is the case and estimate the change in the fluid variables after taking α to be larger but of the same order of the deficit angle: α∼ 4π G μ, α - 4π G μ∼ 4π G μ. So, expanding (<ref>) for large U_+ and α∼ϵ≈ 4π G μ≪1, we get U_-^2 ∼1/3ϵ^2∼1/48 π^2 G^2μ^2, while (<ref>) gives p_- ∼ 2ϵ^2 (ρ_+ +p_+)U_+^2 ∼ 32π^2 G^2μ^2(ρ_+ +p_+)U_+^2. Using these results into (<ref>), we also find ρ_- ∼ 4 ϵ^2 (ρ_+ +p_+)U_+^2 ∼ 64 π^2 G^2μ^2 (ρ_++p_+)U_+^2. Hence, we conclude that, for U_+ ≫(G μ)^-1, the jump in energy density and pressure is very large. 
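To get a feel for the numbers on both sides of this transition, the linearized solution can be evaluated directly. The sketch below is an illustration only: its inputs are the fiducial values Gμ ∼ 10⁻⁷ and c_s ∼ 10⁻⁵ quoted in the next paragraph, and a string speed of 0.5 (for which v_+γ_+ = 1/√3, the fiducial value used in the next section); the pressure and density jumps are reported as fractions of (ρ_+ + p_+).

```python
import numpy as np

G_mu  = 1e-7     # string tension (fiducial value quoted below)
c_s   = 1e-5     # adiabatic sound speed in units of c (fiducial value quoted below)
v_plus = 0.5     # string speed relative to the fluid, in units of c (assumed)

eps = np.tan(4 * np.pi * G_mu)            # deficit-angle parameter of the conical metric
U_s = c_s / np.sqrt(1 - c_s**2)           # U_s = gamma_s * c_s
U_p = v_plus / np.sqrt(1 - v_plus**2)     # U_+ = gamma_+ * v_+

alpha = U_s / np.sqrt(U_p**2 - U_s**2)    # tan(Phi), weak-shock half-angle
dU    = -eps * alpha * U_p * (1 + U_p**2)               # delta U
dp    =  eps * alpha * U_p**2                           # delta p  / (rho_+ + p_+)
drho  =  eps * alpha * (1 + U_s**2) * U_p**2 / U_s**2   # delta rho/ (rho_+ + p_+)

print(f"shock half-angle Phi : {np.degrees(np.arctan(alpha)):.3e} deg")
print(f"delta U              : {dU:.3e}")
print(f"delta p  /(rho+p)    : {dp:.3e}")
print(f"delta rho/(rho+p)    : {drho:.3e}")
print("weak-shock consistency (delta rho < rho_+ + p_+):", drho < 1.0)
```

For these values the fractional density jump is of order 10⁻¹, so the linearized solution is self-consistent, in line with the condition derived in the next paragraph.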
Presumably, the jump in temperature is also very big, and the BEC cannot be maintained inside the shock. In fact, at very low temperatures, the equation of state for a gas of weakly interacting bosonic particles has a weak dependence on the number density of particles. In the non-interacting case, p ∝ T^5/2, and so a large increase in pressure is accompanied by a large increase in temperature. More realistically, we should also take into account the change in number density, since it might be large enough to imply a higher value of T_c inside the strong shock. However, from (<ref>), we get n_- = n_+ U_+/U_-α/α-ϵ∼ 2√(3) n_+ U_+ ϵ, and so, although significant, the fractional change in the number density is comparatively smaller than in the ones associated with energy density and pressure. Hence, we conclude that, generically, the temperature in the strong shock will increase, such that T_-> T_c and the BEC is destroyed in the wake of the moving string. Fortunately, for DM condensed in the superfluids phase, the sound speed is non-relativistic and, moreover, the typical speed of large sections of long cosmic strings is not so close to the speed of light. So, realistically, the linearized solution is a good approximation for the fluid variables inside the shock. In fact, the typical sound speed in the core of spherical DM superfluid condensates is c_s ∼ 10^-5c (as derived from the solution in <cit.>). In this case, the condition δρ< (ρ_+ + p_+) for the linearized solution to be consistent gives U_+ < c_s/4π G μ, and so U_+ < 10^-6(G μ)^-1. For G μ∼ 10^-7, we get U_+ < 1, and so weak shocks require a Lorentz factor associated to v_+ at most of order unity, which is the case for cosmic string speeds. § SUPERFLUID PHASE INSIDE COSMIC STRING WAKES In this section, we want to analyse under which conditions a BEC can be formed inside the wake of a cosmic string. For the condensate to be formed, two conditions need to be satisfied <cit.>: first, the thermal de Broglie wavelength λ_th of the DM particles has to be larger than the characteristic inter-particle separation l = n^-1/3, and second, the DM particles have to thermalize. The first condition, λ_th = √(2 πħ^2/m k_B T)≳ l = (1/n)^1/3, is equivalent to requiring that the de Broglie wavelength of DM particles overlap in the region where we want them to condense. Equivalently, at a fixed temperature T, the BEC forms when the number density of particles is larger than a critical density n_c<cit.>, n > n_c≡ζ(3/2)/λ_th^3, where ζ(3/2) is the Riemmann zeta function. Assuming that the DM particles follow the velocity dispersion k_B T ∼ m v^2/2 and using ρ = nm, the previous inequality reduces to the following bound on the mass: m^4≲(4 πħ^2)^3/2/ζ(3/2)ρ/v^3. Now, let us apply this condition to the wake of a long string. We consider a wake formed at a time t_i > t_eq (wakes formed ∼ t_eq have the largest surface density). Due to the passing of the string, the comoving coordinates x^i of the DM particles are perturbed relative to the Hubble flow, such that their physical position is a(t) [x^i + ψ^i (x^j,t)]. DM accretion into wakes can be described by the time evolution of ψ^i. Assuming the Zel'dovich approximation and the wake along the x-axis, such that only ψ = ψ^y is non-trivial, we have <cit.> ψ(t,y) = -3/5u_i t_i[(t/t_i)^2/3- (t/t_i)^-1] [θ(y)- θ(-y)]/2, where θ(y) is the Heaviside step function and u_i = a(t_i)^-1 4 π G μ v_s γ_s is an initial velocity boost given by the string to nearby particles. 
Here, v_s is the velocity of the string, and γ_s = (1-v_s^2)^-1/2 the corresponding Lorentz factor. For t≫ t_i, only the first term in the brackets significantly contributes to the solution. The physical distance of a DM particle to the wake, which initially increases because of the Hubble flow, becomes maximal at a time t̅ and eventually starts decreasing due to the gravitational pull of shells of matter. So, we have DM shells that turn around at t. The physical height of the turnaround surface above the wake's center is h(t̅) = a(t̅) |ψ(t̅)| = 3/5u_i t_i (t̅/t_i)^2/3(t̅/t_0)^2/3. The velocity of the particles inside the virialized region in the wake is v_vir≡ v(t_v) = a|ψ(t_v)| =2/5 u_i(t_i/t_0)^1/3(t_v/t_0)^1/3 = 2^3/4/5 u_i(t_i/t_0)^1/3(t̅/t_0)^1/3, where t is the time when a shell of particles turns around and falls into the wake, and t_v = 2^-3/4t̅ is the time when the shell enters the virialized region, which we assume to have a height of h(t)/2 above the wake's center. The density of DM particles inside this region, ρ_vir, is four times the background density. Assuming matter domination and H_0 = h × 2 × 10^-33eV, we obtain an upper bound on the mass of the DM particles, m ≲ 31 (h/0.67)^1/2(G μ/10^-7)^-3/4(v_sγ_s/1/√(3))^-3/4(1 + z_i/1 + z_eq)^-3/8(1 + z̅)^9/8eV. Figure <ref> shows a plot of the upper bound on the mass m in eV as a function of the redshift z; the shaded region is the range of allowed masses. Hence, at around the epoch of reionization, z ∼ 10, the DM particles have to be lighter than ∼ 455 eV for the BEC to form. The second condition, the requirement that the DM particles thermalize, can be written as Γ t_dyn≳ 1, where Γ is the DM self-interaction rate, and t_dyn the time associated to the wake dynamics. For scalar DM, the former is given by <cit.> Γ = 𝒩 v ρσ/m, 𝒩 = ρ/m(2 π)^3/4 π/3(m v)^3, where σ is the DM self-interaction cross-section and the Bose enhancement factor 𝒩 takes into account the fact that the DM particles interact over an excited background state. The dynamical time can be estimated as the time it takes for a DM particle to cross the virialized region, t_dyn≈h(t̅)/v_vir = 3/2^3/4t = 3/2^3/4 t_0 (1+z)^-3/2. Combining those expressions, we obtain a bound on σ/m of σ/m ≳ 4 × 10^-2(m/eV)^4(G μ/10^-7)^2(v_sγ_s/1/√(3))^2(h/0.67)^-3(1 + z_i/1 + z_eq) (1 + z̅)^-11/2cm^2/g. Equivalently, σ/m ≳ 2 × 10^2 GeV^-3, for the same fiducial value of the parameters. We can estimate the critical temperature of the condensate by computing the critical velocity v_c that saturates the first condition on the mass, v_c = (4πħ^2)^1/2/ζ^1/3(3/2)(ρ/m^4)^1/3, and then use T_c ∼ m v_c^2/(2k_B): T_c ∼m/2k_B(4πħ^2)/ζ^2/3(3/2)(ρ/m^4)^2/3 = 11 (m/eV)^-5/3(h/0.67)^4/3(1+z̅)^2 mK. For T<T_c, we have the superfluid phase. We can estimate the fraction of DM particles in the BEC component after neglecting interactions. In this case, the fraction is just the one for an ideal quantum gas of bosonic particles, N_0/N≈ 1- (T/T_c)^3/2. The gas temperature in units of T_c is given by T/T_c∼ 1 × 10^-4(h/0.67)^-4/3(m/eV)^8/3(G μ/10^-7)^2 (v_sγ_s/1/√(3))^2(1+ z_i/1+z_eq)(1+z)^-3, and so N_0/N≈ 1- 1 × 10^-6(h/0.67)^-2(m/eV)^4(G μ/10^-7)^3 (v_sγ_s/1/√(3))^3(1+ z_i/1+z_eq)^3/2(1+z)^-9/2. Thus, most of the particles are found in the BEC, regardless of the turnaround redshift. § DISCUSSIONS AND CONCLUSION In this paper we studied the motion of a cosmic string through a gas of superfluid dark matter particles. 
We first reviewed the necessary formalism to approach the subject, an effective field theory approach to superfluids. We then studied two distinct cases: a cosmic string passing through a BEC, and a string moving through a region where the DM is not condensed. In the first case, we looked at the shock induced in the fluid by the string and solved the Taub-Rankine-Hugoniot junction equations. For usual cosmic string speeds, we concluded that the shock is weak and therefore the DM remains in the superfluid phase after the passage of the string. For extreme cases in which the cosmic string travels at velocities very close to c, we found that the large jump in energy and pressure across the shock leads to an increase in the temperature and the subsequent destruction of the condensate. In the second case, we studied under which conditions the DM might condense into a superfluid phase. A string moving through a fluid leads to the formation of an overdensity in its wake. Similarly to what happens in galaxies, this increase in the DM density can cause it to condense into a superfluid, provided that two conditions are satisfied: first, the de Broglie wavelengths of the DM particles have to overlap inside the wake, and second, the particles have to thermalize. The former condition was translated into an upper bound on the mass of the DM particles, m ≲ 31 eV for a wake formed at z_eq, Gμ∼ 10^-7 and string speeds ∼ 0.5. The latter condition led to a lower bound on the ratio between the interaction cross-section and mass, σ/m ≳ 4 × 10^-2 cm^2/g for the same parameters, in agreement with constraints on the cross-section of self-interacting DM <cit.>. As can be seen from <cit.>, these bounds are compatible with the ones in models of superfluid DM in galactic scales. Finally, we computed the critical temperature below which de DM condenses, and the result is in the mK range for the same parameters used to estimate the previous bounds. As future directions, one could study how the presence of a BEC inside the cosmic string wake leads to new observational signatures. A new baryonic interaction that arises from the coupling to the phonons would modify the equations describing baryonic accretion. This should lead to changes in the thickness and/or shape of the wake, and in consequence, to modifications in the wake signatures. Such new features would affect wake signals in 21-cm surveys, CMB polarization, and large-scale structure maps <cit.>. As mentioned, cosmic strings moving through a condensate at ultra-relativistic speeds are expected to destroy the condensate. Since the typical speeds of long cosmic strings are much lower, a more relevant scenario might be oscillating loops in a condensate: in the context of halo accretion, loops oscillating at speeds close to c will heat up the fluid above T_c and destroy the condensate. This affects the usual scenario of DM accretion into loops. On a more speculative note, the general dynamics of the superfluid around moving cosmic strings might be such that vortices are formed. The general motion of vortices in the conical geometry transverse to cosmic strings has been studied in <cit.>, although in the non-relativistic limit. Since cosmic strings are relativistic, it would be interesting to generalize these results. Superfluid vortices generated by cosmic string motion would contribute to the spin of cosmic filaments, as studied in <cit.>, and leave strong-lensing observable imprints on the dark matter halo substructure <cit.>. 
Moreover, it would be worth exploring how a change in the nature of the superfluid could modify our results, such as in the dark-charged superfluid model of <cit.>. § ACKNOWLEDGMENTS We would like to thank Stephon Alexander for comments on an early version of this work. H.B. was supported by the Fonds de recherche du Québec (PBEEE/303549). Research at McGill is partially supported by funds from NSERC and the Canada Research Chair program. bibstyle
http://arxiv.org/abs/2307.00372v1
20230701160522
Launcher Attitude Control based on Incremental Nonlinear Dynamic Inversion: A Feasibility Study Towards Fast and Robust Design Approaches
[ "Pedro Simplício", "Paul Acquatella", "Samir Bennani" ]
eess.SY
[ "eess.SY", "cs.SY" ]
(1)]Pedro Simplício[Corresponding author, email: ] (2)]Paul Acquatella (3)]Samir Bennani [(1)] Aurora Technology for the European Space Agency, Noordwijk, The Netherlands [(2)] DLR, German Aerospace Center, Oberpfaffenhofen, Germany [(3)] European Space Agency, Noordwijk, The Netherlands LAUNCHER ATTITUDE CONTROL BASED ON INCREMENTAL NONLINEAR DYNAMIC INVERSION: A FEASIBILITY STUDY TOWARDS FAST AND ROBUST DESIGN APPROACHES [ ========================================================================================================================================= The so-called "New Space era" has seen a disruptive change in the business models and manufacturing technologies of launch vehicle companies. However, limited consideration has been given to the benefits that innovation in control theory can bring; not only in terms of increasing the limits of performance but also reducing mission preparation or “missionisation” efforts. Moreover, there is a gap between the current state-of-practice that still relies on linear controls and other modern control techniques that could bring relevant improvements in launcher attitude control; this is the case for nonlinear control algorithms, especially those based on Nonlinear Dynamic Inversion (NDI). NDI is a technique that basically `cancels' the nonlinearities of a class of nonlinear systems, allowing for a single linear control law to be applied without the need for gain-scheduling across different operational points. Incremental NDI (INDI) is a variation of NDI that generates incremental commands and employs acceleration feedback to reduce model dependency, making it easier to design, and results in being more robust in closed-loop. While INDI has been applied successfully to several aerospace applications, its applicability to launch vehicles has not yet been adequately investigated. The objective of this paper is therefore to introduce and raise awareness of the INDI method among the launcher guidance, navigation, and control (GNC) community, showcasing its implementation on a representative launch ascent application scenario which highlights INDI's strengths and challenges. We present a new, practical approach for stability analysis of INDI for attitude control, and compare INDI with scheduled PD controllers with- and without angular acceleration estimates. Results show that, while INDI controllers are generally more sensitive to sensor noise and actuator delay than linear controllers, their potential benefits outweigh these limitations in terms of robustness and performance. fancy § INTRODUCTION §.§ Background and Motivation The space industry has undergone significant changes in recent years with the advent of the “New Space era” marked by disruptive changes in the business models, manufacturing technologies, and agile practices of launch vehicle companies; all aimed at minimising their production and operating costs in an ever more competitive market. However, limited attention has been given to the benefits of control theory innovation in this context despite the potential for such innovations to increase performance limits and reduce mission preparation (or “missionisation”) efforts. Moreover, government-led developments of recent launchers such as Ares I and VEGA still use the same design approach of the Saturn V, i.e. linear controllers <cit.>. 
This approach relies on single channel-at-a-time tuning and ad–hoc gain-scheduling followed by extensive validation and verification (V&V); these are in fact quite time- and cost-consuming processes. In contrast to the approach presented above, the past few years have seen a growing interest in the application of artificial intelligence and machine learning methods for launcher GNC, but the industrial use of such data-driven/model-free methods remains limited by well-known issues related to training and certification of the algorithms on the full flight envelope of intended operation. In that sense, there is a clear gap between these strategies and the current state-of-practice, in which other techniques could bring relevant improvements; this is the case for nonlinear control algorithms, especially those based on Nonlinear Dynamic Inversion (NDI). On one hand, agile practices of New Space companies provide the ideal opportunity to explore the benefits of this type of design approach. On the other hand, a successful adoption of nonlinear launcher control will likely facilitate the augmentation with and transition to data-driven methods in the future. This is therefore our motivation and aim for this paper, to start bridging the gap between these two approaches while presenting a potential alternative based on incremental nonlinear control. §.§ Related Work In this paper we introduce briefly and focus on (Incremental) Nonlinear Dynamic Inversion (NDI) which is a control design method based on feedback linearisation <cit.>; it basically consists on a nonlinear (state feedback) transformation that linearises the nominal system dynamics, and a linear part that imposes the desired closed-loop dynamics. Actually, NDI is a very well known and applied (nonlinear) control technique in the aerospace field, especially in aeronautics for various flight control applications <cit.>. Successful implementation of NDI requires a match between the onboard model and the system model, and accurate knowledge of all nonlinearities, which is often not the case in reality; this results in poor robustness properties because they rely on exact availability of the system dynamics. This highlights the need for robustness in these methodologies, as the inner-loop of the control system is critical and can be compromised by model and sensor uncertainties, potentially affecting stability and performance. In this regard, alternative methods involving robustness and improvements of the method for NDI-based flight control applications were considered, among many others, in <cit.>. A successful technique that became popular in the recent years for aerospace applications is Incremental Nonlinear Dynamics Inversion (INDI). The concept using incremental nonlinear control was first developed in the late nineties and was initially focused on the `implicit' dynamic inversion for DI-based flight control. The works of Smith, Bacon, and others laid the foundation for these developments <cit.>, for which the term `incremental' is now more commonly used to describe this methodology as it better reflects the nature of these control laws <cit.>. 
Those early studies further developed the incremental approach and, since then, it has been further elaborated theoretically and successfully applied to various high-performance systems, including fault-tolerant control of aircraft subjected to sensor and actuator faults <cit.>, quadrotors using adaptive control in practice <cit.>, real flight tests of small (unmanned) and business jet (Cessna Citation II, PH-LAB) aircraft <cit.>, and also spacecraft attitude control <cit.>. However, its applicability to launch and re-entry vehicles has not been fully investigated: it has only been considered in <cit.>, and is planned to be flight-tested in the upcoming `Reusability Flight Experiment (ReFEx)' by DLR <cit.>. These related works have demonstrated INDI's performance, its robustness against aerodynamic model uncertainties, and its disturbance rejection for several aerospace vehicles; hence, the potential benefits of INDI are quite relevant for reusable launchers, which have much tighter dynamical couplings between online-generated trajectory and attitude control during descent flight. Moreover, due to the nonlinear nature of INDI, attaining an analytical proof of stability has proven difficult; such a proof has been derived in <cit.>. With this paper we aim to further close this gap towards the application of INDI to launchers, with special focus on the ascent of a TVC-controlled launcher, and also to present a new, practical approach for the stability analysis of such INDI control laws applied to attitude control. §.§ Objectives and Outline It is therefore the objective of this study to introduce and raise awareness of the INDI technique among the launcher GNC community, to showcase its implementation on a representative application scenario, and to highlight its strengths and challenges in the face of the industrial state-of-practice. To achieve this, the paper provides a concise description of the NDI and INDI approach, followed by the detailed design and comparison of different control laws: linear, linear with angular acceleration feedback and INDI-based. Furthermore, the paper also aims to address the two main well-known challenges associated with the practical implementation of INDI-based control: * Sensitivity to sensor noise and actuator delay. By relying on angular acceleration and control input measurements/estimates, INDI controllers are generally more sensitive to sensor noise and actuator delay than classical controllers. To assess the severity of this challenge, the paper shows a comprehensive nonlinear simulation campaign with wind disturbances, uncertainties, as well as different levels of sensor noise and actuator delay. These simulations serve as a basis to analyse the sensitivity to sensor noise and actuator delay in comparison to more classical approaches, and we showcase how to remedy or tackle these issues properly. * Nonlinear stability analysis. The second challenge of INDI is that, due to its nonlinear nature, attaining an analytical proof of stability is not trivial <cit.>. For this second challenge, the paper proposes a simple yet insightful linearisation-based approach to evaluate stability degradation related to an inexact feedback linearisation and to deviations from the control tuning conditions.
This method provides a new way to analyse and evaluate the stability of the nonlinear controller using linear control techniques; since INDI is designed from the theory of feedback linearisation, this approach is very intuitive in the sense that it provides a measure of degradation with respect to the feedback-linearised plant, on which linear stability analysis can be performed. To demonstrate the benefits and challenges of the INDI approach, we showcase the method within an application scenario consisting of a launcher model during ascent flight, featuring attitude and lateral drift degrees-of-freedom, actuator dynamics, and moving-mass effects. All the controllers and filters are implemented at a sampling frequency that is compatible with current onboard capabilities (25 Hz). The outline of this paper is as follows. A brief introduction to Nonlinear Dynamic Inversion (NDI) and Incremental NDI is presented in Sec. 2. Section 3 presents the modelling aspects of the launcher application in consideration and describes the simulator used for the attitude control design and testing. Launcher attitude control designs including angular acceleration feedback are presented in Sec. 4. Time-domain robust performance results and analysis of the obtained simulations comparing the controllers studied are presented in Sec. 5, while Sec. 6 presents the frequency-domain stability results and analysis. Conclusions are finally presented in Sec. 7. § BASIC PRINCIPLES OF (INCREMENTAL) NONLINEAR DYNAMIC INVERSION §.§ Nonlinear Dynamic Inversion (NDI) Without loss of generality, we consider a multiple-input and multiple-output (MIMO) system whose number of inputs is equal to the number of outputs in order to avoid control allocation and internal dynamics problems. Let us also assume momentarily that the nonlinear system can be described affine in the inputs as: ẋ = f(x) + g(x)u, y = h(x), where x ∈ℝ^n is the state vector, u ∈ℝ^m is the control input vector, and y ∈ℝ^m is the system output vector; the functions f(x) and h(x) are assumed to be smooth vector fields on ℝ^n, and g(x) ∈ℝ^n× m is a matrix whose columns are also assumed to be smooth vector fields g_j. For these systems, the vector of relative degrees r = (r_1,…,r_m) represents the number of differentiations of each output y_i, i = 1,…,m, that are needed for the input to appear <cit.>. In this brief introduction to NDI we consider r = 1, i.e. the relative degree of each of the outputs y_i is one; for a detailed explanation of NDI for higher relative degrees, including the transformation of the nonlinear system into a normal form decomposed into an external (input–output) part and an internal (unobservable) part, the reader is referred to <cit.>. Nonlinear Dynamic Inversion (NDI) is a technique that aims to eliminate the nonlinearities present in a given nonlinear system, resulting in closed-loop dynamics that can be expressed in a linear form. To achieve this, the nonlinear system is inverted into a linear structure using state feedback, making it possible to apply conventional linear controllers. However, NDI has a significant disadvantage in that it relies on the fundamental assumption that the system model is known exactly, making it vulnerable to uncertainties. Additionally, NDI assumes that the system state is fully and accurately known, which can be challenging to achieve in practice.
NDI involves applying the following input transformation <cit.>: u_cmd = g^-1(x)(ν - f(x)), which cancels all nonlinearities in closed-loop, and a simple linear input-output relationship between the new virtual control input ν and the output is obtained: ẏ = ν. In addition to being linear, an interesting feature of this relationship is that it is also decoupled, meaning that the input ν_i only affects the output y_i. This property gives rise to the so-called “decoupling control law” to describe the input transformation in (<ref>), and the resulting linear system in (<ref>) is referred to as a “single-integrator” form. By utilising appropriate (linear, robust) control techniques, the single-integrator form in (<ref>) can be rendered exponentially stable. For instance, the single-integrator can be made exponentially stable through the use of: ν = ẏ_des = ẏ_cmd + K_P e, where ẏ = ẏ_des defines the desired dynamics for the output vector or control variables. The feedforward term for tracking is given by ẏ_cmd, while e = y_cmd - y represents the error vector. Here, y_cmd denotes the (smooth) desired output vector, which is (in this case, since the relative degree is one) at least once differentiable. The gain matrix K_P∈ℝ^m× m is used to ensure that the polynomials given by s + K_P_i for i = 1,…,m, become Hurwitz. The diagonal elements K_P_i of K_P are then selected accordingly. As a result of using (<ref>), the desired error dynamics ė_i + K_P_i e_i = 0 become exponentially stable and decoupled, leading to e_i(t)→ 0 for i=1,…,m. §.§ Incremental Nonlinear Dynamic Inversion (INDI) Incremental nonlinear dynamic inversion (INDI) consists of the application of NDI to a system expressed in an incremental form <cit.>. To obtain a system in incremental form, we first introduce a sufficiently small time-delay λ and define the following deviation variables ẋ_0 := ẋ(t-λ), x_0 := x(t-λ), and u_0 := u(t-λ), which are the λ-time-delayed signals of the current state derivative ẋ(t), state x(t), and control u(t), respectively <cit.>. Moreover, we will denote Δẋ := ẋ - ẋ_0, Δx := x - x_0, and Δu := u - u_0 as the incremental state derivative, the incremental state, and the so-called incremental control input, respectively. Subsequently, we consider a first-order Taylor series expansion of ẋ, not in the geometric sense, but with respect to the newly introduced time-delay λ as <cit.>: ẋ = ẋ_0 + ∂/∂x[f(x) + g(x)u]|_x=x_0, u=u_0 Δx + g(x_0)Δu + H.O.T. = ẋ_0 + g(x_0)Δu + δ(x, λ), with: ẋ_0 = f(x_0) + g(x_0)u_0 and δ(x, λ) = ∂/∂x[f(x) + g(x)u]|_x=x_0, u=u_0 Δx + H.O.T., which represents a residual containing the Jacobian linearisation of the on-board model and the higher order terms (H.O.T.) of the series expansion. Notice that the model-based control effectiveness g(x_0) is sampled at the previous incremental time. This means an approximate linearisation about the λ-delayed signals is performed incrementally, and not with respect to a particular equilibrium or operational point of interest. Further, we consider the following time-scale separation assumption: For a sufficiently small time-delay λ and for any incremental control input, it is assumed that Δx does not vary significantly during λ.
In other words, the input rate of change is much faster than the state rate of change: ϵ_INDI_TSS(t) ≡ Δx := x - x_0 ≅ 0, ∀ Δu, which leads to: ẋ ≅ ẋ_0 + g(x_0)·(u - u_0) + δ(x, λ), with δ(x, λ) ≅ 0, or simply: Δẋ ≅ g(x_0)·Δu. This assumption, corroborated by the fact that the perturbation term δ(x, λ) satisfies <cit.>: lim_λ→ 0 ‖δ(x, λ)‖_2 → 0, ∀ x, implies that the nonlinear system dynamics in its incremental form is approximated at each time-step by the model-based control effectiveness g(x_0). Finally, applying NDI to the system based on the approximation (<ref>) results in a relation between the incremental control input and the output of the system: u = u_0 + g(x_0)^-1(ν - ẋ_0). Note that implementing this control law requires the availability of ẋ_0, and that the input u_0 is obtained from the output of the actuators or estimated from an actuator dynamical model; recall it has been assumed that a commanded control is achieved sufficiently fast with regard to the actuator dynamics. The total control command along with the obtained linearising control u_0 = u(t-λ) can be rewritten as: u(t) = u(t-λ) + g(x_0)^-1[ν - ẋ(t-λ)]. This improves the robustness of the closed-loop system as compared with conventional NDI, since the dependency on accurate knowledge of the plant dynamics is reduced; more specifically, the dependency on accurate knowledge of the dynamic model in f(x) is largely decreased. Therefore, the INDI control law design is more dependent on accurate measurements or accurate estimates of ẋ_0, the state derivative, and u_0, the previous control input, respectively. § LAUNCHER MODEL AND SIMULATOR DESCRIPTION The present study relies on a conventional 3 degrees-of-freedom launcher model in ascent flight featuring lateral drift z and pitch θ dynamics, as schematised in Fig. <ref>. These dynamics, representing the first/second time-derivatives of z as {w=ż, ẇ=z̈} and the first/second time-derivatives of θ as {q=θ̇, q̇=θ̈}, are governed by the well-known nonlinear Newton-Euler equations: mẇ = F_α+F_c+F_n-m g sinθ, Jq̇ = M_α+M_c+M_n, where m, J and g are the launcher's mass, lateral moment of inertia and gravity acceleration, {F_α, M_α} are the aerodynamic force/torque, {F_c, M_c} are the TVC-induced force/torque and {F_n, M_n} are the nozzle moving-mass effects, also known as tail-wags-dog (TWD). The aerodynamic force and torque are computed as: F_α = -S C_N_α Q α, M_α = -l_α F_α, where S, C_N_α and l_α are the reference aerodynamic area, lateral force gradient and aerodynamic arm (distance between the launcher's centres of pressure and gravity). Qα is the aerodynamic load indicator, defined as the product between aerodynamic pressure and angle of attack, which are respectively given by: Q = (1/2)ρ V^2, α = θ + arctan((w - l_α q - v_w)/V), where ρ is the air density, V is the total airspeed and v_w is the lateral wind turbulence speed. The term l_α q is often known as aerodynamic damping. The TVC-induced force and torque are computed as: F_c = -T sinβ, M_c = l_c F_c, where T is the thrust magnitude, l_c is the TVC arm (distance between the launcher's centre of gravity and nozzle's pivot point) and β is the TVC deflection angle. Finally, the nozzle TWD effects are computed as: F_n = -m_n l_n β̈, M_n = l_c F_n - J_n β̈, where m_n is the nozzle moving-mass, l_n is the moving-mass arm (distance between the nozzle's centre of gravity and pivot point), β̈ is the TVC deflection acceleration and J_n is the nozzle moment of inertia with respect to the pivot point (not to the centre of gravity).
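For reference, the model above can be condensed into a single right-hand-side routine, which is also how it enters the simulator described next. The sketch below is illustrative only: the parameter names mirror the symbols in the text, the function signature is our own choice, and the (time-varying) numerical values are not reproduced here.

```python
import numpy as np

def launcher_rhs(w, q, theta, beta, beta_ddot, v_w, p):
    """Planar drift/pitch accelerations from the Newton-Euler equations above.

    p is a dictionary of model data (mass, inertia, aero, TVC and nozzle
    parameters); its keys mirror the symbols in the text and its values are
    time-varying quantities to be interpolated from the trajectory database.
    """
    # Aerodynamic load: dynamic pressure and angle of attack
    # (including the aerodynamic damping term l_alpha*q and the wind speed v_w)
    Q = 0.5 * p["rho"] * p["V"]**2
    alpha = theta + np.arctan((w - p["l_alpha"]*q - v_w) / p["V"])
    F_a = -p["S"] * p["C_N_alpha"] * Q * alpha
    M_a = -p["l_alpha"] * F_a

    # Thrust vector control force and torque
    F_c = -p["T"] * np.sin(beta)
    M_c = p["l_c"] * F_c

    # Nozzle tail-wags-dog (moving-mass) contributions
    F_n = -p["m_n"] * p["l_n"] * beta_ddot
    M_n = p["l_c"] * F_n - p["J_n"] * beta_ddot

    w_dot = (F_a + F_c + F_n) / p["m"] - p["g"] * np.sin(theta)
    q_dot = (M_a + M_c + M_n) / p["J"]
    return w_dot, q_dot
```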
Most of the model's parameters vary along the launcher's trajectory (this dependence was not evidenced in the previous equations for the sake of readability) and are highly uncertain. These parameters were extracted as a function of time from the simulator presented in <cit.> for an 80-second trajectory. The uncertainty levels assumed in this study are summarised in Table <ref>. Note that while mass/propulsion parameters have an explicit dependency on time, related to the way the propellant burns, aerodynamics parameters have an implicit dependency through intermediate quantities such as altitude and Mach number. In addition to the launcher model described above, the present study considers the dynamical effects of TVC actuation and wind turbulence. Both effects are modelled as time-invariant transfer functions for the sake of simplicity without loss of generality. The TVC dynamics corresponds to a second-order system given by: G_TVC(s) = 67.8^2/(s^2+90.9 s+67.8^2), where s represents the Laplace variable. The wind turbulence speed v_w is modelled by colouring a white noise signal through a first-order Dryden filter <cit.> given by: G_w(s) = 3.54/(s+0.32). The launcher, TVC and wind models were put together in a simulator that allows one to quickly analyse and compare several control systems, which is illustrated in Fig. <ref>. In this figure, different simulation rates are highlighted using different colours: black for the continuous-time dynamics, red for GNC computations (f_GNC=25 Hz, which is well representative of current onboard capabilities) and green for wind noise generation (f_w=20 Hz in this case). For control design purposes, it is also convenient to define a linear model that fully captures the driving dynamics of Eq. (<ref>) and (<ref>). To do so <cit.>, consider the following coefficients relative to the rotational motion: μ_α = l_α Q S C_N_α/J, μ_c = l_c T/J, μ_n = (m_n l_n l_c + J_n)/J, and to the translational motion: n_α = Q S C_N_α/m, n_c = T/m, n_n = m_n l_n/m. Using these coefficients, the transfer functions β(s)→θ(s) and β(s)→ w(s) correspond to the solutions of the system: [ s^2+(l_αμ_α/V)s-μ_α    -μ_α/V; -(l_α n_α/V)s+n_α+g sinθ_0    s+n_α/V ][ θ(s)/β(s); w(s)/β(s) ] = -[ μ_n s^2+μ_c; n_n s^2+n_c ]. Furthermore, as a first approximation for attitude control design purposes, drift and TWD dynamics can be neglected and the transfer function β(s)→θ(s) simplifies into: θ(s)/β(s) ≈ -μ_c/(s^2+(l_αμ_α/V)s-μ_α). § LAUNCHER CONTROL DESIGN USING ANGULAR ACCELERATION FEEDBACK This section describes and justifies the four attitude control systems developed in this study. §.§ Scheduled PD controller The baseline controller for this study is a classic proportional-derivative (PD) controller with the following structure: β(s) = k_P(θ_cmd(s)-θ(s)) - k_D q(s) = k_P θ_cmd(s) - (k_P+s k_D)θ(s). Despite their simplicity, PD controllers represent the industrial state-of-practice for the vast majority of launch vehicles <cit.>. The gains k_P and k_D can be tuned using a multitude of methods. Here, they are selected based on pole placement of the closed-loop transfer function, which is obtained by substituting Eq. (<ref>) in (<ref>): θ(s)/θ_cmd(s) = -μ_c k_P/(s^2+(l_αμ_α/V-μ_c k_D)s-(μ_α+μ_c k_P)). It is clear from this equation that k_P and k_D can be chosen so as to enforce the desired natural frequency ω_θ and damping ratio ζ (here assumed constant throughout the flight for simplicity without loss of generality).
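A short numerical illustration of this pole-placement step is given below. It is a sketch only: the airframe coefficients are placeholder values (in practice they are evaluated from the trajectory data at a given flight instant), and the target poles are those used later in the design summary.

```python
import numpy as np
from scipy import signal

# Placeholder airframe coefficients at one flight instant (illustrative values only)
mu_alpha, mu_c, a = 2.0, 10.0, 0.04        # a stands for l_alpha*mu_alpha/V
omega_theta, zeta = 2.5, 0.8               # desired closed-loop natural frequency / damping

# Simplified rigid-body airframe, theta(s)/beta(s) = -mu_c/(s^2 + a s - mu_alpha)
airframe = signal.TransferFunction([-mu_c], [1.0, a, -mu_alpha])
print("open-loop poles  :", np.roots(airframe.den))   # one unstable (positive real) pole

# Pole placement: match s^2 + (a - mu_c*kD) s - (mu_alpha + mu_c*kP)
#                 to    s^2 + 2*zeta*omega_theta s + omega_theta^2
kP = -(mu_alpha + omega_theta**2) / mu_c
kD = (a - 2 * zeta * omega_theta) / mu_c

closed = signal.TransferFunction([-mu_c * kP],
                                 [1.0, a - mu_c * kD, -(mu_alpha + mu_c * kP)])
print("closed-loop poles:", np.roots(closed.den))     # ~ -2.0 +/- 1.5j for these targets
```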
It is also clear that this approach does not allow one to specify the steady-state gain (when s→ 0) independently of the natural frequency, as they both depend on k_P only. In order to handle the wide variation of the model's parameters during the flight, the two gains need to be scheduled throughout the trajectory. To do so, they are pre-computed for a grid of N=9 points (spaced every 10 seconds along the trajectory) as: k_P[i] = -(1/μ_c[i])(μ_α[i]+ω_θ^2), k_D[i] = (1/μ_c[i])(l_α[i]μ_α[i]/V[i]-2ζω_θ), i=1,...,N, and then linearly interpolated online during the simulation. The robustness of this approach can be increased by scheduling the controller with respect to online measurements/estimates of some of the model's parameters. This is the underlying idea of LPV control <cit.>, which is outside the scope of this paper. §.§ INDI controller In this section, an INDI-based control law is developed and applied to regulate the launcher's attitude channel, i.e. to: y = h(x) = q, where x represents the state vector. In order to apply the INDI technique, this equation has to be time-differentiated until an explicit dependency on the TVC input appears. The first-order derivative corresponds to Eq. (<ref>), which can be recast as: ẏ = q̇ = f(x)+g(x)u, where f(x) is the control-independent part of the model, g(x) expresses the influence of the controls in the system and u is the control input. For the launcher scenario, the latter two terms correspond to: g(x) ≈ -μ_c, u = β. A virtual control input can now be defined in order to transform the nonlinear system into a linear form as follows: ν = q̇ = θ̈ ⇒ θ(s)/ν(s) = 1/s^2. Following the procedure of Sec. <ref>, the command signal sent to the TVC actuator is given by: β = β_0 - (1/μ_c)(ν-q̇_0), where β_0 and q̇_0 are measurements/estimates of the TVC command and angular acceleration at the current computation step, respectively. The estimate of the TVC command β_0 is obtained with a low-pass filter and, because angular acceleration sensors are not common in launchers today, q̇_0 is estimated by passing the angular rate q through a derivative filter of the form: H_q̇(s) = s ω_q̇/(s+ω_q̇), where ω_q̇ represents the filter bandwidth. Note that, after the feedback linearisation of Eq. (<ref>), there are still some degrees of internal dynamics in the system related to the drift motion and TWD effect, but these dynamics are known to be stable and can be further handled by outer control loops. Using the virtual control and the linearised system of Eq. (<ref>), an outer PD control law is able to enforce the desired closed-loop response as follows: ν(s) = k_P(θ_cmd(s)-θ(s)) - k_D q(s) ⇒ θ(s)/θ_cmd(s) = k_P/(s^2+k_D s+k_P), with k_P = ω^2_θ, k_D = 2ζω_θ. Note that, in contrast with the PD controller of Sec. <ref>, k_P and k_D do not need to be scheduled as they depend on ω_θ and ζ only, but a pre-computed grid of μ_c[i] is still required to perform the feedback linearisation. This is highlighted in the blue area of Fig. <ref>, which illustrates the implementation of the INDI controller in the simulator. Alternatively, μ_c could be estimated based on online measurements. §.§ Scheduled PD controller with q̇ feedback As explained in Sec. <ref>, the INDI controller of Eq. (<ref>) relies on q̇ information to reduce the impact of the launcher's model on the achievable control performance. For a fair comparison of controllers, it is then pertinent to consider a linear controller where q̇ feedback is also employed. In this case, the control law takes the form: β(s) = k_P(θ_cmd(s)-θ(s)) - k_D q(s) - k_A q̇(s), where k_A is the acceleration feedback gain.
Similar to Sec. <ref>, the three gains can be tuned via pole placement of the closed-loop transfer function, which is obtained by substituting Eq. (<ref>) in (<ref>): θ(s)θ_cmd(s) = -k_P1-μ_ck_A μ_cs^2+l_αμ_α/V-μ_ck_D1-μ_ck_As-μ_α+μ_ck_P1-μ_ck_A In contrast with the pure PD controller, the q̇ feedback allows to minimise tracking errors because the desired steady-state gain G_0 be specified independently of ω_θ through the proportional gain as follows: k_P[i]=μ_α[i]μ_c[i]G_01-G_0, i=1,...,N which is scheduled along a grid of N=9 points along the launcher's trajectory. The other two gains are then derived as a function of ω_θ and ζ as: k_A[i]=1μ_c[i](1+μ_α[i]+μ_c[i]k_P[i]ω_θ^2), k_D[i]=1μ_c[i](l_α[i]μ_α[i]V[i]-2 ζω_θ( 1-μ_c[i]k_A[i] )) For the estimation of q̇ in Eq. (<ref>), the same approach of Sec. <ref>, i.e. passing the angular rate q through the first-order derivative filter of Eq. (<ref>), was followed. In practice, it was verified that the performance of this controller is fairly sensitive to the filter bandwidth ω_q̇. This impact is illustrated in Fig. <ref>, which shows root-mean-square (RMS) values of pitch error (θ_err=θ_cmd-θ) vs. TVC rate (β̇) for a step command in θ_cmd using different controllers and nominal conditions. The blue line in Fig. <ref> shows results using the scheduled PD controller with q̇ feedback (FB) and varying values of the derivative filter bandwidth ω_q̇. Based on the results, the selection of ω_q̇ provides a key (and intuitive) tuning trade-off: increasing the bandwidth leads to smaller errors at the expense of more demanding TVC actuation, and vice-versa. A more favourable trade-off would likely be achieved by using a higher-order derivative filter, which is outside the scope of this paper. §.§ INDI controller with low-pass filter When applied to the pure INDI controller developed in Sec. <ref>, the same tuning trade-off analysis showed a much smaller sensitivity to ω_q̇ but unacceptably high TVC rates. To address this issue, the INDI controller was augmented with a low-pass filter at the output of the feedback linearisation loop, as depicted on the right-hand side of Fig. <ref>. The feedback linearisation loop, outer linear gains and q̇ estimation filter remain unchanged. The low-pass filter has bandwidth ω_β and a first-order structure as follows: H_β(s)=ω_βs+ω_β The purple line in Fig. <ref> shows the tuning trade-off using the INDI controller with low-pass filter and varying values of its bandwidth ω_β. Comparing with the PD controller with q̇ feedback (blue line), the two controllers show a similar trend (i.e. smaller errors and larger TVC rates for higher bandwidths), yet the INDI controller leads to smaller TVC rates for the same level of error. As before, a more favourable trade-off would likely be achieved by using a higher-order low-pass filter, but this is outside the scope of the paper. §.§ Control design summary The four controllers in Sec. <ref> to <ref> have been designed so as to enforce the same closed-loop properties throughout the flight. These are: * Natural frequency ω_θ=2.5 rad/s; * Damping ratio ζ=0.8; * Steady-state error of 5%, i.e. G_0=1.05, only applicable to Sec. <ref>. Furthermore, the bandwidth of the filters in Sec. <ref> and <ref>, ω_q̇ and ω_β, has been tuned so as to provide the same pitch error in nominal conditions, as highlighted in Fig. <ref>. The robust performance of these controllers will then be analysed in Sec. <ref>. 
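The gain expressions above can be checked numerically: for any set of model coefficients, the resulting closed loop should exhibit the requested natural frequency, damping and steady-state gain. The short sketch below performs this verification with illustrative (hypothetical) coefficient values.

import numpy as np

def pd_qdot_gains(mu_a, mu_c, l_a, V, w_theta=2.5, zeta=0.8, G0=1.05):
    """Gains of the scheduled PD controller with angular-acceleration feedback,
    beta = kP*(theta_cmd - theta) - kD*q - kA*qdot, tuned by pole placement.
    Inputs are the model coefficients at one scheduling point."""
    kP = (mu_a / mu_c) * G0 / (1.0 - G0)                                          # fixes the steady-state gain G0
    kA = (1.0 + (mu_a + mu_c * kP) / w_theta**2) / mu_c                           # fixes the natural frequency
    kD = (l_a * mu_a / V - 2.0 * zeta * w_theta * (1.0 - mu_c * kA)) / mu_c       # fixes the damping
    return kP, kD, kA

# Consistency check on the closed-loop transfer function theta/theta_cmd given above
mu_a, mu_c, l_a, V = 3.0, 6.75, 10.0, 450.0          # illustrative values
kP, kD, kA = pd_qdot_gains(mu_a, mu_c, l_a, V)
den = np.array([1.0,
                (l_a * mu_a / V - mu_c * kD) / (1.0 - mu_c * kA),
                -(mu_a + mu_c * kP) / (1.0 - mu_c * kA)])
num0 = -kP * mu_c / (1.0 - mu_c * kA)
print("wn   =", np.sqrt(den[2]))                      # ~ 2.5 rad/s
print("zeta =", den[1] / (2.0 * np.sqrt(den[2])))     # ~ 0.8
print("G0   =", num0 / den[2])                        # ~ 1.05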
Table <ref> provides an overview of each controller's dependency on the model parameters and sensor measurements/estimates. As anticipated, from the scheduled PD controller to the INDI controller, there is a progressive reduction of model dependency and increased use of sensor information. More specifically, the INDI controller relies on measurements/estimates of q̇ and β to fully circumvent the knowledge of the aerodynamics model. § TIME-DOMAIN ROBUST PERFORMANCE ANALYSIS This section analyses and compares the nonlinear time-domain performance of the controllers developed in Sec. <ref>. Figure <ref> shows dispersed responses of the 2^8=256 corner-cases within the uncertainty level of Table <ref> when subjected to the same wind turbulence input v_w, modelled as described in Sec. <ref>. From the top to the bottom rows, the figure depicts the obtained pitch error θ_err, TVC deflection β and aerodynamic load indicator Qα along the trajectory. From left to right, the figure depicts results using the scheduled PD controller (Fig. <ref>a, in black), scheduled PD controller with q̇ feedback (Fig. <ref>b, in blue) and INDI controller with low-pass filter (Fig. <ref>c, in purple). The pure INDI controller (without low-pass filter) is not shown as it leads to unacceptably high TVC rates. From Fig. <ref>a to <ref>b, a reduction in the dispersion of all the indicators can be observed. This joint reduction clearly demonstrates the benefit of including q̇ feedback in the control design. The pitch error (and partially the Qα) is further reduced when using the INDI controller with low-pass filter, as depicted in Fig. <ref>c, at the expense of higher TVC deflections (although still comparable to the pure PD controller). Note that Qα minimisation was not a specific control design objective in this case, but comes as a direct consequence of smaller pitch and drift errors, as indicated in Eq. (<ref>). In order to more clearly visualise these trends, Fig. <ref>a shows the wind response results using the same RMS θ_err vs. β̇ plot of Fig. <ref>. Each point in Fig. <ref>a corresponds to a single simulation from Fig. <ref>. As anticipated, the pure PD controller (in black) provides the largest errors but the smallest TVC rates while, on the other hand, the pure INDI controller (in red) provides the smallest errors but the largest TVC rates. The PD controller with q̇ feedback (in blue) and the INDI controller with low-pass filter (in purple) lie in-between the two extremes, with the latter controller performing better than the former (i.e. with slightly smaller errors and TVC rates), but only marginally. In order to complement the analysis, Fig. <ref>b shows the same type of results for a step command in θ_cmd. As before, the pure PD controller (in black) leads by far to the largest errors and the pure INDI controller (in red) to the largest TVC rates. Performance in terms of error and TVC rate improves using either the PD controller with q̇ feedback or the INDI controller with low-pass filter, and the difference between these two controllers is now more evident than for the wind responses. In nominal conditions, it was known from Fig. <ref> that, for the same error, the INDI controller with low-pass filter (in purple) provides a smaller TVC rate than the PD controller with q̇ feedback (in blue). Nonetheless, Fig. <ref>a shows that the former controller performs better also in terms of error, having a range of dispersion that is approximately four times smaller. 
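A corner-case campaign such as the one of Figure <ref> can be scripted along the following lines. The nonlinear simulator itself is replaced here by a dummy simulate() stub returning synthetic time histories, purely so that the sketch is runnable; the nominal parameter values and the ±10% uncertainty level are placeholders, not the values of the uncertainty table.

import itertools
import numpy as np

# Eight uncertain parameters (cf. the uncertainty table); the +/-10 % level below is a placeholder.
nominal = {"C_N_alpha": 5.0, "l_alpha": 10.0, "T": 9.0e5, "l_c": 15.0,
           "J": 2.0e6, "m": 6.0e4, "m_n": 500.0, "l_n": 2.0}
rel_unc = 0.10

def simulate(params, seed=0):
    """Placeholder for the nonlinear launcher/TVC/wind simulator: returns synthetic
    pitch-error and TVC-deflection histories (same wind realisation for every case)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 80.0, 2001)
    scale = params["C_N_alpha"] / nominal["C_N_alpha"]
    theta_err = 0.2 * scale * rng.standard_normal(t.size)
    beta = 0.5 * scale * np.sin(0.5 * t) + 0.05 * rng.standard_normal(t.size)
    return theta_err, beta, t

def rms(x):
    return float(np.sqrt(np.mean(np.asarray(x) ** 2)))

results = []
for signs in itertools.product((-1.0, +1.0), repeat=len(nominal)):      # 2^8 = 256 corner cases
    params = {k: v * (1.0 + s * rel_unc) for (k, v), s in zip(nominal.items(), signs)}
    theta_err, beta, t = simulate(params)
    results.append((rms(theta_err), rms(np.gradient(beta, t))))

rms_err, rms_rate = np.array(results).T
print("pitch-error RMS dispersion: %.3f .. %.3f" % (rms_err.min(), rms_err.max()))
print("TVC-rate    RMS dispersion: %.3f .. %.3f" % (rms_rate.min(), rms_rate.max()))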
The smaller error dispersion of the INDI controller with low-pass filter comes at the expense of a larger TVC rate dispersion, but its maximum value remains significantly lower than that of the PD controller with q̇ feedback. INDI-based controllers, by relying on angular acceleration and control input measurements/estimates, are known to be more sensitive to sensor noise and actuator delays than classical linear controllers. In order to assess this sensitivity, Fig. <ref> extends Fig. <ref>a using the INDI controller with low-pass filter, showing wind simulation results with different combinations of: * Gaussian noise on the angular rate signal, with 3σ={0, 0.05, 0.1} deg/s, which affects the estimates of both q and q̇ through Eq. (<ref>); * Time delay of {0, 40, 80} ms on the signal commanded to the TVC actuator, corresponding to a delay of {0, 1, 2} control samples. From Fig. <ref>, it can be observed that, for the ranges considered, delays on the TVC signal have very little impact on the controller's performance. Noise on the angular rate signal, on the other hand, leads to a more noticeable degradation, with the resulting TVC rates increasing approximately linearly with the noise variance. This type of understanding is therefore critical when designing and sizing INDI-based GNC software and hardware. The impact of angular rate noise would likely be minimised by using a higher-order derivative filter H_q̇(s) or by including an angular acceleration sensor in the GNC system. § FREQUENCY-DOMAIN ROBUST STABILITY ANALYSIS Because of the nonlinear nature of INDI, attaining an analytical proof of stability of INDI-based controllers <cit.> is much less trivial than for classical linear controllers. In order to mitigate this shortcoming, this section introduces a simple yet insightful frequency-domain approach to quantify stability degradation related to an imperfect feedback linearisation and to deviations from the control tuning conditions. This section is therefore focused on the controller developed in Sec. <ref>, not on a full comparison of controllers. The proposed approach is based on linearised models of the nonlinear launcher simulator with the INDI control law in the loop at different flight conditions and on the fact that, for a perfect feedback linearisation, the channel ν(s)→θ(s) behaves as a double integrator (recall Eq. (<ref>)). The INDI controller design was carried out under this assumption. Linearised models of ν(s)→θ(s) can be obtained thanks to MATLAB® routine: where is the Simulink® file instantiated with a certain configuration and t is the flight time instant. The analysis in this section considers the 2^8=256 corner-cases (within the uncertainty level of Table <ref>) and 33 instants (spaced every 2.5 seconds along the trajectory). Figure <ref>a shows the frequency response of the aforementioned linearised models (in blue), together with the "perfect" double integrator assumption (in red). This figure shows two important features: * A mismatch between the linearised models and the double integrator assumption, which grows with the frequency and arises from the fact that drift motion, TWD effects, actuator dynamics and H_q̇(s) filter were neglected in the feedback linearisation; * A dispersion of the linearised models, which is caused by deviations from the control tuning conditions due to the uncertain and time-varying nature of the model's parameters. These models can be employed to assess the system's stability margins when the loop is closed using Eq. (<ref>). 
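As an illustration of this linearisation-based assessment, the sketch below evaluates gain and phase margins of the outer attitude loop when the ν→θ channel deviates from the ideal double integrator. The neglected dynamics are approximated here by the TVC second-order model and a first-order low-pass filter with an assumed 10 rad/s bandwidth; these values are illustrative, and the actual analysis relies on the linearised models extracted from the simulator as described above.

import numpy as np

w_theta, zeta = 2.5, 0.8
kP, kD = w_theta**2, 2.0 * zeta * w_theta            # outer linear law nu = kP*(theta_cmd - theta) - kD*q

def loop_freq_response(w, w_beta=10.0):
    """L(jw) = (kP + kD*jw) * G(jw) for the nu -> theta channel.
    Under a perfect feedback linearisation G(s) = 1/s^2; here the neglected TVC actuator
    and the low-pass filter H_beta are re-inserted to mimic the observed mismatch."""
    s = 1j * w
    G_tvc = 67.8**2 / (s**2 + 90.9 * s + 67.8**2)    # TVC actuator
    H_beta = w_beta / (s + w_beta)                    # low-pass filter of the INDI scheme
    return (kP + kD * s) * (1.0 / s**2) * G_tvc * H_beta

w = np.logspace(-2, 3, 20000)
L = loop_freq_response(w)
mag, phase = np.abs(L), np.unwrap(np.angle(L))

i_gc = np.where((mag[:-1] >= 1.0) & (mag[1:] < 1.0))[0][0]            # |L| crosses 1 (gain crossover)
pm_deg = 180.0 + np.degrees(phase[i_gc])
i_pc = np.where((phase[:-1] > -np.pi) & (phase[1:] <= -np.pi))[0][0]  # phase crosses -180 deg
gm_db = -20.0 * np.log10(mag[i_pc])

print("phase margin ~ %.1f deg at %.2f rad/s" % (pm_deg, w[i_gc]))
print("gain margin  ~ %.1f dB  at %.2f rad/s" % (gm_db, w[i_pc]))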
To do so, it is convenient to plot the responses in a Nichols chart, which is depicted in Fig. <ref>b. For a detailed explanation of the application of Nichols charts to launcher stability assessment, the reader is referred to <cit.>. The impact of the imperfect feedback linearisation on the system's stability becomes evident from Fig. <ref>b: the phase margin is reduced approximately by half and the system can be gain-destabilised, which is not the case under the double integrator assumption. Nonetheless, all phase and gain margins remain substantial. When this is not the case, the linearised models of ν(s)→θ(s) can be employed instead of Eq. (<ref>) to re-tune the INDI outer control law. The stability margins are naturally driven by the value of μ_c, which is the main dependency of the INDI controller (recall Table <ref>). Accordingly, the margins become smaller for smaller values of μ_c as the system's control effectiveness decreases, and vice-versa. In order to assess the degradation caused by uncertainties and time variations, the phase and gain margins are plotted as a function of time in Fig. <ref>a and b, respectively. These figures show the nominal margins (in continuous line), the worst (minimum) corner-case margins with the uncertainty level of Table <ref> (Δ=100%, in dash-dotted line) and the worst corner-case margins with twice the uncertainty level (Δ=200%, in dotted line). The N=9 control tuning points, i.e. the interpolation nodes of μ_c, are indicated in the figures using circular marks. The main results are then summarised in Table <ref>. From Fig. <ref>, it can be seen that, at the control tuning points, nominal phase and gain margins are constant throughout the flight. This is expected because the closed-loop of Eq. (<ref>) is time-invariant. Between tuning points there is naturally a variation in margins due to mismatches between actual and interpolated values of μ_c. Nonetheless, this variation is extremely limited and leads to a degradation of only 1 deg and 0.14 dB. Stability degradation due to uncertainties is about one order of magnitude higher, leading to margin losses of 8.3 deg and 1.3 dB. In practice, the resulting stability margins must provide enough room to accommodate the impact of dynamical effects that were not considered in this study, such as flexible modes and non-collocated sensing. Nonetheless, the worst-case margins are plentiful, which suggests the feasibility of INDI-based launcher attitude control. In fact, the worst-case values remain acceptable even when the assumed level of uncertainty is doubled (Δ=200%, shown only in Fig. <ref>, not in Table <ref> for the sake of conciseness). § CONCLUSIONS In conclusion, this paper presented a feasibility study of Incremental Nonlinear Dynamic Inversion (INDI) applied to a launcher ascent flight control scenario and highlighted its potential benefits over the traditional ad hoc linear control approach widely studied and implemented in practice. The paper introduced the INDI technique which mainly cancels the nonlinearities of a (nonlinear) system by means of state/output feedback and transforms it into a linear form, making it suitable to be controlled by a single linear control law without the need for gain-scheduling or other nonlinear approach (sliding mode, etc.). The paper also discussed the challenges associated with INDI-based control, such as sensitivity to sensor noise and actuator delay, and the difficulty of obtaining an analytical proof of stability. 
However, the potential benefits of INDI-based control outweigh these challenges, as shown in a comprehensive nonlinear simulation campaign that considered wind disturbances and parameter uncertainties. Finally, the paper proposed a simple, yet insightful, linearisation-based approach to evaluate stability degradation related to an imperfect feedback linearisation and to deviations from the (nominal) control tuning conditions. The results obtained in this study suggest that the INDI-based control approach could bring relevant improvements to launcher GNC, which may facilitate the transition to data-driven methods in the future. Future work will extend the analysis in terms of limits of performance (worst-case analysis) and address the impact of flexible modes and non-collocated sensing. § ACKNOWLEDGEMENTS The authors would like to thank Mr. Massimo Casasco, head of ESA's GNC section, for making this feasibility study possible.
http://arxiv.org/abs/2307.01727v1
20230704135404
Mutual Information Analysis for Factor Graph-based MIMO Iterative Detections through Error Functions
[ "Huan Li", "Jingxuan Huang", "Zesong Fei" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Mutual Information Analysis for Factor Graph-based MIMO Iterative Detections through Error Functions
Huan Li, Jingxuan Huang, and Zesong Fei
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. H. Li, J. Huang and Z. Fei are with the School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China (e-mail: [email protected] and [email protected], [email protected], [email protected].).
The factor graph (FG) based iterative detection is considered an effective and practical method for multiple-input and multiple-output (MIMO), particularly massive MIMO (m-MIMO), systems. However, the convergence analysis of FG-based iterative MIMO detection is still insufficient, even though such analysis is of great significance to the performance evaluation and algorithm design of detection methods. This paper investigates the mutual information update flow for the FG-based iterative MIMO detection and proposes a precise mutual information computation mechanism with the aid of Gaussian approximation and error functions, i.e., the error functions-aided analysis (EF-AA) mechanism. Numerical results indicate that the theoretical result calculated by the EF-AA mechanism is completely consistent with the bit error rate performance of the FG-based iterative MIMO detection. Furthermore, the proposed EF-AA mechanism can reveal the exact convergent iteration number and convergent signal-to-noise ratio value of the FG-based iterative MIMO detection, representing the performance bound of the MIMO detection. Mutual information, convergence, error functions, Gaussian approximation, MIMO detection, factor graph § INTRODUCTION Ultra-high speed and ultra-reliable wireless transmissions are required in the sixth-generation (6G) technology to provide ubiquitous high-performance connections <cit.>. Therefore, multiple-input and multiple-output (MIMO) technology <cit.> is significant to the 6G implementation, which can guarantee increased data throughput with accurate detection methods. Specifically, massive MIMO (m-MIMO) technology <cit.> can provide an extremely high data transmission rate due to more transceiver antennas and spatial diversity. However, most detection algorithms are not feasible for m-MIMO technology because of the inevitable high complexity <cit.>. To solve this problem, the authors in <cit.> proposed to utilize the factor graph (FG) model to transfer probability information between observation nodes (ONs) and variable nodes (VNs) to estimate accurate symbol probabilities. S. Wu et al. in <cit.> showed that FG-based iterative MIMO detection could achieve near-optimal performance compared to the optimal maximum likelihood detection <cit.>, with a complexity that is acceptable and quadratic in the number of transceiver antennas <cit.>. 
Consequently, the FG-based iterative MIMO detection can be considered a practical method for m-MIMO technology. Specifically, the performance of MIMO detections can be influenced by many factors, for example, the property of MIMO channels. Many prior arts have analyzed the MIMO channel through mutual information <cit.>. In <cit.>, O. Oyman et al. derived a tight lower-bounded analytical expression for a Gaussian MIMO frequency-selective spatially correlated fading channel with unknown channel state information (CSI), which approximated the variance of mutual information to a closed-form function. For the space-time independent and identically distributed (i.i.d.) MIMO channel, the authors in <cit.> proposed analytical expressions to present the distribution characters of mutual information between transmitted and received signal vectors. L. Musavian et al. proved in <cit.> that a tight gap between the upper and lower bounds of mutual information exists when uncorrelated transmit antennas with uniform power distribution and correlated receive antennas are considered. For the m-MIMO systems, P. Yang et al. have developed a message-passing-based algorithm to compute the mutual information where arranged finite-alphabet inputs <cit.>. Recently, random matrix theory has been utilized to attain the ergodic mutual information between the transmit signals and outputs of the Rayleigh channel <cit.>, which is quantized by a mixed analog-to-digital converters architecture. Based on the mutual information investigations of different MIMO channels, much literature has studied the performance of varieties of MIMO detections from the perspective of mutual information. For the serial detection schemes in vertical Bell labs layered space-time (V-BLAST) MIMO architecture <cit.>, S. Stiglmayr et al. calculated the mutual information of a single antenna stream without considering the correlation between the Gaussian noise and hard decision error <cit.>. In <cit.>, the authors proposed a numerical calculation of the mutual information to show the convergence of an iterative method, which utilized the stair matrix to achieve similar performance to linear minimum mean-square error (MMSE) detection. Particularly for the belief propagation-based iterative detections, <cit.> employed the extrinsic information transfer (EXIT) chart to present the validity and convergence of the iterative process. Then, we initially derived the closed-form mutual information update flow for the FG-based iterative MIMO detection according to the EXIT analysis in <cit.>. Nevertheless, the proposed mutual information calculations in <cit.> ignored the specific probability when computing the mutual information between VN and the transmitted symbol. The existing approximation of mutual information causes inaccuracy in evaluating the convergence and bit error rate (BER) performance. Therefore, we derive the closed-form expressions of assessing the convergence and BER performance of the FG-based iterative MIMO detection, which is more feasible for the MIMO, especially the m-MIMO system. In this paper, we proposed a more precise calculation method for the mutual information update flow of the FG-based iterative MIMO detection under both binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK) modulations, where the error function 𝑒𝑟𝑓(·) and the complementary error function 𝑒𝑟𝑓𝑐(·) are utilized. 
Our proposed error functions-aided mechanism can provide exact mutual information curves at different SNRs in both MIMO and m-MIMO systems, which are of great significance to theoretical bound evaluation and the detection algorithm designs. Noted that this work is distinct from the aforementioned studies <cit.>, which focuses on precisely investigating the performance of FG-based iterative MIMO detections, instead of MIMO channel properties. The rest of this paper is organized as follows. In Section II, the system model and fundamental knowledge of FG-based iterative MIMO detections are introduced. In Section III, we derived the proposed error functions-aided mutual information calculation mechanism for FG-based iterative MIMO detections. The numerical results of mutual information analysis and the BER performance are presented in Section IV. Finally, we conclude this paper in Section V. § PRELIMINARIES §.§ Channel Model In this paper, we consider a MIMO system equipped with numerous antennas at both the transmit and receive sides, represented by N_T and N_R, respectively. Note that the number of antennas can range from 2 to hundreds. Specifically, the received signal vector y∈ℂ^N_R× 1 can be given as y = Hx + n, where x=[x_1,x_2,⋯,x_N_T] ∈ℂ^N_T× 1 denotes the power-normalized transmitted signal vector, n = [n_1,n_2,⋯,n_N_R] ∈ℂ^N_R× 1 denotes the additive complex-valued Gaussian white noise, which can be represented as n_i∼CN(0,σ_n^2). The channel matrix H∈ℂ^N_R×N_T reflects the Rayleigh fading effects and can be expressed as H = [ [ h_1,1 ⋯ h_1,N_T; ⋮ ⋱ ⋮; h_N_R,1 ⋯ h_N_R,N_T ]] , where entries follow the complex-valued Gaussian distribution with zero mean and unit variance. Specifically, the V-BLAST MIMO structure <cit.> is adopted at the transmit and receive sides, shown as Fig. <ref>. To evaluate the performance of FG-based iterative MIMO detections more precisely, the averaged received SNR ρ_r is given as ρ_r =𝔼{∑_i = 1,j = 1^i = N_R,j = N_T|h_i,j|^2/N_Tσ _n^2}. §.§ An Overview of the FG-based Iterative MIMO Detection The FG-based MIMO detection is a kind of message-passing algorithm, which transmits a posteriori probability information between ONs o_i (i=1,2,⋯,N_R) and VNs v_l (l=1,2,⋯,N_T). The specific information transfer flow in FG is given in Fig. <ref>. Shown as in Fig. <ref>, the FG-based iterative MIMO detection firstly transmit the probability information p_o→ v from ON to VN, which is given as p_o→ v = ∏_v' ∈ V(o)\ vp_v' → o, where V(o)\ v denotes the collection of VN connected to ON except VN v. Then the probability information p_v → o transferred from VN to ON can be expressed as p_v→ o = ∏_o' ∈ O(v)\ op_o' → v, where O(v)\ o represents the collection of ON linked to VN except ON o. To conclude, the FG-based iterative MIMO detection follows the bidirectional-transmission mechanism until the a posteriori probability information stays unchanged, which is defined as convergence <cit.>. § ERROR FUNCTIONS-AIDED ANALYSIS MECHANISM FOR FG-BASED ITERATIVE MIMO DETECTIONS In our previous work <cit.>, we introduced an innovative EXIT analysis method with the ability to evaluate mutual information through low-complexity calculations. Although the earlier EXIT analysis was able to present the convergence of iterative MIMO detections, it was not accurate enough. In this paper, we propose a new method that can generate mutual information curves of iterative MIMO detections more precisely. The information transfer flow of FG-based iterative MIMO detections can be demonstrated in Fig. 
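To make the message-passing flow concrete, the following Python sketch runs a BPSK, Gaussian-approximated version of the FG detector on one Rayleigh channel draw. The antenna dimensions, noise variance and iteration count are arbitrary illustrative choices, and the interference-as-Gaussian update anticipates the derivation of the next section; it is a toy sketch rather than the exact algorithm evaluated in the paper.

import numpy as np

rng = np.random.default_rng(0)
N_T = N_R = 8
sigma_n2 = 0.05

# Rayleigh channel (entries ~ CN(0,1)), BPSK symbols and received vector y = Hx + n
H = (rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=N_T)
n = np.sqrt(sigma_n2 / 2) * (rng.standard_normal(N_R) + 1j * rng.standard_normal(N_R))
y = H @ x + n
print("averaged received SNR rho_r = %.1f dB"
      % (10 * np.log10(np.sum(np.abs(H)**2) / (N_T * sigma_n2))))

# Message passing between ONs and VNs: omega[i, l] is the extrinsic LLR sent from ON o_i
# to VN v_l, and psi[i, l] the LLR sent back (product rules p_{o->v} and p_{v->o} in LLR form).
omega = np.zeros((N_R, N_T))
for _ in range(10):
    psi = omega.sum(axis=0, keepdims=True) - omega          # psi[i, l] = sum_{i' != i} omega[i', l]
    mean = np.tanh(psi / 2.0)                                # soft symbol estimate E{x_l} seen by ON o_i
    var = 1.0 - mean**2                                      # and its variance
    for i in range(N_R):                                     # ON update with Gaussian interference
        for l in range(N_T):
            mu_g = sum(H[i, k] * mean[i, k] for k in range(N_T) if k != l)
            sig_g2 = sum(np.abs(H[i, k])**2 * var[i, k] for k in range(N_T) if k != l) + sigma_n2
            omega[i, l] = 4.0 * np.real((y[i] - mu_g) * np.conj(H[i, l])) / sig_g2

x_hat = np.sign(omega.sum(axis=0))                           # decision from the total LLR
print("bit errors:", int(np.sum(x_hat != x)))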
<ref>, where the iterative MIMO detector is abstracted to be composed of the ON sub-detector and VN sub-detector. Specifically, the ON sub-detector concludes two components, i.e., the extrinsic information calculator (EIC) and the apriori information calculator (AIC). As shown in Fig. <ref>, the AIC firstly updates the information transferred to it, utilizing channel information h_i,l, n_i (i=1,2,⋯,N_R, l=1,2,⋯,N_T) transformed by VN sub-detector. Hence, the output information 𝑣𝑎𝑟_ω_𝑖^𝑙 of AIC can be expressed as 𝑣𝑎𝑟_ω_𝑖^𝑙 = 𝐹_𝐴𝐼𝐶(c_i,l,ψ_i^l), for i =1,2,⋯,N_R, and j = 1,2,⋯,N_T, where 𝐹_𝐴𝐼𝐶(·) denotes the transfer function of AIC. Another component of the ON sub-detector is EIC, which calculates the mutual information I_ω_i^l between transmit symbol x_l and LLR ω_i^l, and then transfer the mutual information I_ω_i^l to VN sub-detector. The mutual information I_ω_i^l can be represented by (<ref>), where p_ω(ω _i^l|x_l) denotes the conditional probability density function (CPDF) of the LLR ω_i^l at ON o_i. In addition, 𝐹_ω(·) denotes the transfer function of VN-detector, which also indicates that the CPDF p_ω(ω _i^l|x_l) is relevant to the output information of AIC 𝑣𝑎𝑟_ω_𝑖^𝑙. For the VN sub-detector, mutual information I_ψ_i^l between transmit symbol x_l and LLR ψ_i^l is related to the output information of EIC ω_i^l, and can be expressed by <ref>, which is then transferred to ON sub-detector. In (<ref>), p_ψ(ψ _i^l|x_l) represents the CPDF of the LLR ψ_i^l at VN v_l. Similarly, 𝐹_ψ( ·) denotes the transfer function of VN sub-detector, and the CPDF p_ψ(ψ _i^l|x_l) is related to the output information of EIC ω_i^l. Furthermore, the output information L_l=∑_i'=1^N_Rω_i'^l of the iterative MIMO detection is also relevant to the output information of EIC ω_i^l, then the mutual information of which can be given as I_L_l = 𝐹_𝐿(𝑣𝑎𝑟_ω_𝑖^𝑙), where 𝐹_𝐿( ·) represents the transfer function of iterative MIMO detector. §.§ The Error Functions-Aided Analysis under BPSK Modulation This paper first derives the analysis mechanism for FG-based iterative MIMO detections under BPSK modulation. The previous work <cit.> has derived the approximate mutual information of the ON sub-detector and VN sub-detector. Although the analysis in <cit.> is imperfect, it still provides some significant derivation for our current work, which are concluded as Lemma <ref> to Lemma <ref> as follows. I_ω_i^lΔ = 𝐹_ω(𝑣𝑎𝑟_ω_𝑖^𝑙)Δ = J(√(𝑣𝑎𝑟_ω_𝑖^𝑙)), 𝑣𝑎𝑟_ω_𝑖^𝑙 = 4h_i,l^2/σ _g_il^2, . p_ψ(ψ _i^l|x_l) = p_ψ(ψ _i^l|h_i,l,x_l) = 1/√(2π𝑣𝑎𝑟_ψ_𝑖^𝑙)exp( - ( ψ _i^l - μ_ψ_i^l)^2/2𝑣𝑎𝑟_ψ_𝑖^𝑙), 𝑣𝑎𝑟_ψ_𝑖^𝑙= ∑_i'=1,i' i^N_R𝑣𝑎𝑟_ω_𝑖'^𝑙 = ∑_i'=1,i' i^N_RJ^-1(I_ω_i^l'), μ_ψ_i^l = ±𝑣𝑎𝑟_ψ_𝑖^𝑙/2 = ∑_i'=1,i' i^N_R±𝑣𝑎𝑟_ω_𝑖'^𝑙/2. I_ψ _i^lΔ = 𝐹_ψ(𝑣𝑎𝑟_ψ_𝑖^𝑙)= J( √(𝑣𝑎𝑟_ψ_𝑖^𝑙)). In what follows, we derive the mutual information I_ω_i^l, I_ψ _i^l and I_L_l according to Fig. <ref> and the aforementioned Lemmas, respectively. It is noted that the mutual information I_ω_i^l calculated by EIC at ON sub-detector is given in Lemma <ref> as (<ref>), with parameter 𝑣𝑎𝑟_ω_𝑖^𝑙 as the input. Furthermore, the mutual information I_ψ _i^l computed at VN sub-detector is defined in Lemma <ref> as (<ref>), and the input parameter 𝑣𝑎𝑟_ψ_𝑖^𝑙 can be obtained through the summation of 𝑣𝑎𝑟_ω_𝑖^𝑙. Therefore, in this section, we mainly focus on the computation of the variance 𝑣𝑎𝑟_ω_𝑖^𝑙 at the AIC, i.e., the function 𝐹_𝐴𝐼𝐶(·) in (<ref>). 
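Since the Lemmas above track mutual information exclusively through the function J(·) and its inverse, it is convenient to have both available as code. The sketch below implements the piecewise curve fits whose coefficients are listed in the appendix on the J(·) function, and illustrates the variance/mutual-information bookkeeping of the Lemmas on arbitrary ECV values.

import numpy as np

# Piecewise polynomial/exponential fit of J(.) and its inverse
# (coefficients as listed in the appendix on curve fitting).
def J(sigma):
    s = np.asarray(sigma, dtype=float)
    out = np.where(
        s <= 1.6363,
        -0.0421061 * s**3 + 0.209252 * s**2 - 0.00640081 * s,
        1.0 - np.exp(0.00181491 * s**3 - 0.142675 * s**2 - 0.0822054 * s + 0.0549608))
    return np.where(s >= 10.0, 1.0, out)

def J_inv(I):
    I = np.asarray(I, dtype=float)
    return np.where(
        I <= 0.3646,
        1.09542 * I**2 + 0.214217 * I + 2.33727 * np.sqrt(I),
        -0.706692 * np.log(0.386013 * (1.0 - I)) + 1.75017 * I)

# Sanity check of the fit: J_inv(J(sigma)) should be close to sigma
sig = np.linspace(0.5, 5.0, 4)
print(np.round(J_inv(J(sig)), 2))                    # approximately [0.5, 2.0, 3.5, 5.0]

# Bookkeeping of the Lemmas: var_psi[i, l] sums the ECVs of all other ONs,
# and the mutual information values follow as J(sqrt(.)).
var_omega = np.full((4, 4), 1.5)                     # illustrative ECVs for N_R = N_T = 4
var_psi = var_omega.sum(axis=0, keepdims=True) - var_omega
I_psi = J(np.sqrt(var_psi))
I_L = J(np.sqrt(var_omega.sum(axis=0))).mean()       # averaged mutual information (AMI)
print("I_psi (one entry) =", round(float(I_psi[0, 0]), 3), " AMI =", round(float(I_L), 3))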
At AIC, the received symbol y_i can be separated into two parts, namely the symbol from the corresponding transmit antenna and interference, which is given as y_i = ∑_l = 1^N_Th_i,lx_l + n_i = h_i,lx_l + ∑_l' = 1,l' l^N_Th_i,l'x_l' + n_i_g_il, where the signals from other antennas plus channel noise are approximated to Gaussian random variables as g_il∼ N(μ _g_il,σ _g_il^2). Specifically, the mean μ _g_il and variance σ _g_il^2 can be computed as μ _g_il = ∑_l' = 1,l' l^N_Th_i,l'𝔼(x_l') , σ _g_il^2 = ∑_l' = 1,l' l^N_T| h_i,l'| ^2𝕍(x_l')+σ_n^2, in which 𝔼(x_l')=∑_j = 1^2 P_i^l'(θ_j)·θ_j, and 𝕍(x_l')=∑_j = 1^2 P_i^l'(θ_j)·θ_j^2-| 𝔼(x_l')|^2, are the mean and variance of x_l' respectively, in which P_i^l'(θ_j) represents the probability of x_l'=θ_j∈{± 1} estimated at VN v_l'. From Lemma <ref>, we get to know that the ECV 𝑣𝑎𝑟_ω_𝑖^𝑙 is essential to the calculation of mutual information I_ω. Formula (<ref>) also indicates that the critical part of ECV 𝑣𝑎𝑟_ω_𝑖^𝑙 is the variance σ_g_il. Furthermore, according to (<ref>) and (<ref>), the calculation of probability P_i^l'(θ_j) at VN v_l is indiepensable for the computation of variance σ_g_il, as well as the ECV 𝑣𝑎𝑟_ω_𝑖^𝑙. Specifically, the P_i^l(θ_j) at VN v_l can be initialized as an equal probability in the first iteration during the iterative MIMO detections. Therefore, in the first iteration, we have 𝕍(x_l)= 1/2( 1 ^2+( -1) ^2)-| 1/2( 1-1) |^2 = 1. However, the variance 𝕍(x_l) remains uncertain in the subsequent iterations, for the reason that P_i^l'(θ_j) is changeable with the LLR ω_i^l transferred from the ON sub-detector. Consequently, it is crucial to derive the probability P_i^l'(θ_j) at VN v_l in the subsequent iterations. Before that, we propose the following Theorem <ref> as the basis of our further analysis. P(α = 1) = ∫_0^∞p_L( L_α)dL_α. P(α = 0) = ∫_-∞^0 p_L( L_α)dL_α. α = {[ 1, if L_α > 0; 0, if L_α < 0 ], . P(L_α > 0) = ∫_0^∞p_L( L_α)dL_α, P(L_α < 0) = ∫_-∞^0 p_L( L_α)dL_α. {[ P(α = 1)∝ P(L_α > 0); P(α = 0)∝ P(L_α < 0) ]. . In Lemma <ref>, the CPDF of extrinsic LLR ψ_i^l at VN v_l is given as a Gaussian PDF with known mean and variance. Then we can derive the probability P_i^l'(θ_j) at VN v_l in the subsequent iterations according to Theorem <ref> and the CPDF (<ref>) in Lemma <ref>. P_i^l( + 1 ) = P_i^l(θ_j = + 1 ) = {[ 1/2[1+𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))] ,if x_l = + 1 ,; 1/2𝑒𝑟𝑓𝑐( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)),if x_l = - 1, ]. P_i^l(- 1) = P_i^l(θ_j = - 1 ) = {[ 1/2[1+𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))],if x_l = - 1 ,; 1/2𝑒𝑟𝑓𝑐( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)),if x_l = + 1, ]. 𝑒𝑟𝑓(x) = 2/√(π)∫_0^x exp( - t^2) dt, 𝑒𝑟𝑓𝑐(x) = 1 - 𝑒𝑟𝑓(x) = 2/√(π)∫_x^∞exp( - t^2)dt. p_ψ(ψ _i^l|h_i,l,x_l) = p_ψ(ψ _i^l|x_l=∓ 1) = 1/√(2π𝑣𝑎𝑟_ψ_𝑖^𝑙)exp( - ( ψ _i^l ±𝑣𝑎𝑟_ψ_𝑖^𝑙/2)^2/2𝑣𝑎𝑟_ψ_𝑖^𝑙), Theorem <ref> presents the expression of probability P_i^l(θ_j), based on which we can further deduct the ECV 𝑣𝑎𝑟_ω_𝑖^𝑙 calculated at AIC. 𝑣𝑎𝑟_ω_𝑖^𝑙 = 2h_i,l^2/∑_l' = 1,l' l^N_T| h_i,l'|^2 𝕍(x_l')+σ _n^2, 𝕍(x_l) = {[ 1, t = 1,; [1+𝑒𝑟𝑓(√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))]𝑒𝑟𝑓𝑐(√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)), t>1, ]. C1: 𝔼(x_l^+ ) = +1/2 [1+𝑒𝑟𝑓(√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))] -1/2 𝑒𝑟𝑓𝑐(√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)) = 𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)) . 𝕍(x_l^+) = ( 1) ^2P_i^l(1)+( -1) ^2P_i^l(-1)-| 𝔼(x_l)|^2 = [P_i^l(1)+P_i^l(-1)] -[𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))]^2 = 1-𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))^2 = [ 1+𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))][ 1-𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))] = [ 1+𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))]𝑒𝑟𝑓𝑐( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)). C2: 𝔼(x_l^- ) = -1/2 [1+𝑒𝑟𝑓(√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))]+1/2 𝑒𝑟𝑓𝑐(√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)) = 1/2 [𝑒𝑟𝑓𝑐(√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))-𝑒𝑟𝑓(√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))-1] = -𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)) . 
𝕍(x_l^-) = 1 -[ 𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))]^2 = [ 1+𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))]𝑒𝑟𝑓𝑐( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)). Consequently, given the variance 𝑣𝑎𝑟_ω_𝑖^𝑙 at AIC as (<ref>) and (<ref>) in Theorem <ref>, the mutual information I_ω_i^l can be calculated according to (<ref>) at the EIC (i.e., ON sub-detector). Then the mutual information I_ψ_i^l at the VN sub-detector can be computed as (<ref>). In addition, the mutual information I_L_l between the LLR L_l output by the VN sub-detector and transmit symbol x_l is given as I_L_lΔ = 𝐹_𝐿(𝑣𝑎𝑟_ω_𝑖^𝑙)= J( √(∑_i'=1^N_R𝑣𝑎𝑟_ω_𝑖'^𝑙)), then the averaged mutual information (AMI) of the VNs I_L can be expressed as I_L = 1/N_T∑_l=1^N_TI_L_l. To sum up, the error functions-aided analysis (EF-AA) mechanism for FG-based iterative MIMO detections under the BPSK modulation can be summarized as Algorithm <ref>. §.§ The Error Functions-Aided Analysis under QPSK Modulation According to <cit.>, the complex-domain Rayleigh fading MIMO channel (<ref>) can be converted into an equivalent real-domain form, which extracts the real and imaginary part of each entry in (<ref>), respectively. The conversion can be presented by H_R = [ [ (H) - (H); (H) - (H) ]], where (·) and (·) denote the real part and imaginary part of elements, respectively. Then the corresponding vectors in (<ref>) can be transformed by y_R = [ [ (y); (y) ]], x_R = [ [ (x); (x) ]], and n_R = [ [ (n); (n) ]], where each element corresponds to the information bit before QPSK modulation. Therefore, through the aforementioned real-domain conversion, this paper extends the BPSK-modulated EF-AA mechanism to QPSK-modulated scenarios. In the following, the subscripts of matrix H_R and vectors x_R, y_R, n_R is omitted to better illustrate the derivation process. Specifically, the complex QPSK symbol x_l = θ∈1/√(2){1+1i,1-1i,-1+1i,-1-1i} can be transfomred into the real form x_l= θ_R ∈1/√(2){1,-1}. Then the Gaussian approximation of the received symbol y_i is revised as y_i = ∑_l = 1^2N_Th_i,lx_l + n_i = h_i,lx_l + ∑_l' = 1,l' l^2N_Th_i,l'x_l' + n_i_g_il, correspondingly, the variance 𝕍(x_l) in the first iteration can be given as 𝕍(x_l)= 1/2[( 1/√(2))^2+( -1/√(2))^2]-| 1/2( 1/√(2)-1/√(2)) |^2 = 1/2. Under the QPSK modulation, the ECV 𝑣𝑎𝑟_ω_𝑖^𝑙 defined in Lemma <ref> and variance 𝑣𝑎𝑟_ψ_𝑖^𝑙 in Lemma <ref> are computed as 𝑣𝑎𝑟_ω_𝑖^𝑙 = 2h_i,l^2/σ _g_il^2, and 𝑣𝑎𝑟_ψ_𝑖^𝑙=∑_i'=1,i' i^2N_RJ^-1(I_ω_i^l'), respectively. In addition, (<ref>) and (<ref>) in Theorem <ref> need to be modified as P_i^l( + 1/√(2) ) = P_i^l(θ_j = + 1/√(2) ) = {[ 1/2[ 1+𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))] ,if x_l = + 1/√(2) ,; 1/2𝑒𝑟𝑓𝑐( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)),if x_l = - 1/√(2), ]. P_i^l(- 1/√(2)) = P_i^l(θ_j = - 1/√(2) ) = {[ 1/2[ 1+𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))],if x_l = - 1/√(2) ,; 1/2𝑒𝑟𝑓𝑐( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)),if x_l = + 1/√(2), ]. Therefore, the ECV 𝑣𝑎𝑟_ω_𝑖^𝑙 under the QPSK-modulated scenario can be calculated as in Theorem <ref>. 𝑣𝑎𝑟_ω_𝑖^𝑙 = 4h_i,l^2/∑_l' = 1,l' l^2N_T| h_i,l'|^2 [1+𝑒𝑟𝑓(√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))]𝑒𝑟𝑓𝑐(√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))+2σ _n^2. 
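Putting these pieces together, the BPSK recursion summarised in Algorithm <ref> can be sketched as below. The ECV expression follows the Lemma form var_ω = 4h²/σ_g², with σ_g² built from the erf/erfc symbol variance derived above; the channel draw, dimensions and noise level are illustrative, and the J(·) helper from the earlier sketch is reused. The QPSK variant only changes the constants and runs on the real-domain expansion of the channel.

import numpy as np
from scipy.special import erf, erfc
# J(.) is the curve-fit helper from the previous sketch.

def ef_aa_bpsk(H, sigma_n2, n_iter=20):
    """EF-AA recursion for BPSK: tracks the ECVs var_omega[i, l] and returns the AMI per iteration."""
    N_R, N_T = H.shape
    abs2 = np.abs(H)**2
    V = np.ones((N_R, N_T))                                  # V(x_l) = 1 in the first iteration
    ami = []
    for _ in range(n_iter):
        # ECV at the AIC: 4|h_il|^2 / (sum_{l' != l} |h_il'|^2 V(x_l') + sigma_n^2)
        interference = (abs2 * V).sum(axis=1, keepdims=True) - abs2 * V
        var_omega = 4.0 * abs2 / (interference + sigma_n2)
        # VN side: var_psi sums the other ONs, then V(x_l) is updated via erf/erfc
        var_psi = var_omega.sum(axis=0, keepdims=True) - var_omega
        arg = np.sqrt(var_psi / 8.0)
        V = (1.0 + erf(arg)) * erfc(arg)
        ami.append(J(np.sqrt(var_omega.sum(axis=0))).mean())
    return np.array(ami)

# Example: one real-valued Rayleigh channel draw for BPSK, N_T = N_R = 16
rng = np.random.default_rng(1)
H = rng.standard_normal((16, 16)) / np.sqrt(2)
print(np.round(ef_aa_bpsk(H, sigma_n2=0.05, n_iter=6), 3))   # AMI should grow towards 1 at this SNR

# For QPSK, the same recursion runs on the real-domain expansion of a complex channel Hc:
# H_R = np.block([[Hc.real, -Hc.imag], [Hc.imag, Hc.real]])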
C1: 𝔼(x_l^+ ) = +1/2√(2)[1+𝑒𝑟𝑓(√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))] -1/2√(2)𝑒𝑟𝑓𝑐(√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)) = 1/√(2)𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)), 𝕍(x_l^+) = ( 1/√(2)) ^2P_i^l(1/√(2))+( -1/√(2)) ^2P_i^l(-1/√(2))-| 𝔼(x_l)|^2 = 1/2[P_i^l(-1/√(2))+P_i^l(1/√(2))] -1/ 2 [𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))]^2 = 1/2[ 1-𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))^2] = 1/2[ 1+𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))]𝑒𝑟𝑓𝑐( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)), C2: 𝔼(x_l^- ) = -1/2√(2)[1+𝑒𝑟𝑓(√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))]+1/2√(2)𝑒𝑟𝑓𝑐(√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)) = -1/√(2)𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)), 𝕍(x_l^-) = 1/2-1/2[ 𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))]^2 = 1/2[ 1+𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))]𝑒𝑟𝑓𝑐( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)), 𝕍(x_l) =1/2[ 1+𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))]𝑒𝑟𝑓𝑐( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)). Referring to (<ref>) and Theorem <ref>, the EF-AA mechanism under QPSK modulation is concluded in Algorithm <ref>. § NUMERICAL RESULTS In this section, we utilize the proposed EF-AA mechanism to analyze the performance of FG-based iterative MIMO detections. Specifically, this section assesses the precision of the EF-AA mechanism from two aspects, i.e., the convergence under different iterations (by plotting AMI versus I_ter in Fig. <ref>) and the convergence under different received SNRs (by plotting AMI versus ρ_r in Fig. <ref>), respectively. More specifically, to verify the effectiveness of the proposed mutual information analysis method, the BER performance of the FG-based iterative MIMO detection under different iteration times and different received SNRs are presented in Fig. <ref> and Fig. <ref>, respectively. In addition, the antennas at transmit and receive sides are equipped as N_T=N_R=4,16 for MIMO scenarios and N_T=N_R=128,256 for m-MIMO scenarios. In this section, the FG-BP iterative MIMO detection in <cit.> and QPSK modulation are adopted. Fig. <ref> and Fig. <ref> show that the proposed EF-AA mechanism can evaluate the convergence of FG-based iterative MIMO detections accurately. For example, Fig. <ref> reveals that when the iteration number reaches 5, the AMI for both N_T=N_R=4 and N_T=N_R=16 approaches 1 identically. Moreover, the convergence performance of N_T=N_R=4 and N_T=N_R=16 in Fig. <ref> are consistent with the BER performance in Fig. <ref>, which tend to be steady after 5th iteration. Likewise, Fig. <ref> and Fig. <ref> present identical convergence characteristics in AMI and BER performance. Additionally, Fig. <ref> is depicted to better demonstrate the evaluation accuracy of the proposed EF-AA mechanism on the convergence performance for the FG-based iterative MIMO detection. The left axis of Fig. <ref> exhibits the BER performance, and the right axis displays the characteristic of AMI. It can be seen from Fig. <ref> that with the increase of iteration number I_ter, the decline of I_L (corresponding to AMI performance) and the growth of P_e curve (corresponding to BER performance) are exactly homologous. Specifically, both the I_L curve and P_e curve tend to keep stable after the 16th iteration. Fig. <ref> and Fig. <ref> compare the AMI computed by the EF-AA mechanism and the BER performance of the FG-based iterative MIMO detection <cit.> under different antenna types of equipment. It can be seen that as the growth of antennas, the AMI curve (in Fig. <ref>) rises faster, and the BER curve (in Fig. <ref>) drops more dramatically. Additionally, Fig. <ref> and Fig. <ref> also exhibit the effectiveness of the proposed EF-AA mechanism. From Fig. <ref>, the convergence SNR under different antennas can be concluded as 4.5,7,14,17 dB (from left to right) respectively. Then Fig. 
<ref> shows that under antenna N_T=N_R=256,128, when the received SNR ρ_r reaches the convergence SNR in Fig. <ref>, the corresponding BERs tend to 0, which indicates the convergence of the FG-based iterative MIMO detection under m-MIMO scenarios. Then under antenna N_T=N_R=16,4, when the received SNR ρ_r reaches the convergence SNR in Fig. <ref>, the corresponding BERs tend to be steady, which indicates the convergence of the FG-based iterative MIMO detection. § CONCLUSIONS This paper investigated the mutual information update flow of the FG-based iterative MIMO detection, particularly focusing on the convergence performance under different iteration numbers and SNRs. In this paper, the EF-AA mechanism was proposed to provide exact mutual information curves through Gaussian approximation and closed-form calculation under BPSK and QPSK modulations. Numerical results of AMI and BER performance demonstrate that for the MIMO and m-MIMO scenarios, the proposed EF-AA mechanism can evaluate the convergence characteristic of FG-based iterative MIMO detections in both precise iteration times and received SNRs. It can be foreseen that the proposed EF-AA mechanism might have significant impacts both on performance evaluation and algorithm optimization. § CURVE FITTING FUNCTION J(·) As defined in <cit.>, the mutual information I_γ between symbol s and LLR γ is given as I_γ = I(s;γ) = ∑_j=1^q∫_ - ∞^∞p( s = θ _j)p( γ |s = θ _j)log_2p( γ |s = θ _j)/∑_j = 1^q p( γ ,s = θ _j)dγ, where p( s = θ _j) denotes the a prior probability of the symbol s, p( γ | s) denotes the conditional a posterior probability of LLR γ, and p( γ , s) denotes the joint a posterior probability of LLR γ and symbol s. Specifically, if the LLR γ follows the Gaussian distribution with mean of σ ^2/2 and variance of σ ^2, then (<ref>) can be expressed through the approximate curve fitting function J(·), which can be given as follows J(σ)≈{[ a_J,1σ^3+b_J,1σ^2+e_J,1σ^, 0≤σ≤σ^*,; 1-e^(a_J,2σ^3+b_J,2σ^2+e_J,2σ+d_J,2),σ^*<σ<10,; 1, σ≥10, ]. [ σ^*=1.6363, a_J,1=-0.0421061, b_J,1=0.209252,; e_J,1=-0.00640081, a_J,2=0.00181491, b_J,2=-0.142675,; e_J,2=-0.0822054, d_J,2=0.0549608. ] Furthermore, the inverse function of J( ·) is represented as J^-1(I_σ)≈{[ a_σ ,1I_σ^2+b_σ ,1I_σ+e_σ,1√(I_σ), 0≤I_σ≤I^*_σ,; -a_σ ,2ln[b_σ,2(1-I_σ)]-e_σ,2I_σ, I^*_σ<I_σ<1, ]. where [ I^*_σ=0.3646, a_σ ,1=1.09542, b_σ ,1=0.214217,; e_σ ,1=2.33727, a_σ ,2=0.706692, b_σ ,2=0.386013,; e_σ ,2=-1.75017. ] § DETAILED DERIVATION OF RIGHT AND WRONG DECISION PROBABILITIES We show the details of the derivation of the equation of (<ref>.b) and (<ref>.b) in this appendix. Firstly transform the first exponential term in equation (<ref>.a) as ( ψ _i^l - 𝑣𝑎𝑟_ψ_𝑖^𝑙/2)^2/2𝑣𝑎𝑟_ψ_𝑖^𝑙 = t^2, thus we have t = ( 𝑣𝑎𝑟_ψ_𝑖^𝑙/2-ψ _i^l )/√(2𝑣𝑎𝑟_ψ_𝑖^𝑙), then we have 1/2√(2π𝑣𝑎𝑟_ψ_𝑖^𝑙)∫_0^∞exp( - ( ψ _i^l - 𝑣𝑎𝑟_ψ_𝑖^𝑙/2)^2/2𝑣𝑎𝑟_ψ_𝑖^𝑙) dψ _i^l = 1/2√(2π𝑣𝑎𝑟_ψ_𝑖^𝑙)×√(2𝑣𝑎𝑟_ψ_𝑖^𝑙)∫_ - ∞^√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)exp( - t^2) dt = 1/4( 2/√(π)∫_0^√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)exp( - t^2) dt +2/√(π)∫_-∞^0exp( - t^2) dt) (c)=1/4( 2/√(π)∫_0^√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)exp( - t^2) dt +2/√(π)∫_0^∞exp( - t^2) dt) = 1/4[1 + 𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))], where dψ _i^l = -√(2𝑣𝑎𝑟_ψ_𝑖^𝑙) dt, and the sub-equation (<ref>.c) holds because exp(-t^2) is an even function. 
Similarly, the second term of equation (<ref>.a) can be given as 1/2√(2π𝑣𝑎𝑟_ψ_𝑖^𝑙)∫_ - ∞^0 exp( - ( ψ _i^l + 𝑣𝑎𝑟_ψ_𝑖^𝑙/2)^2/2𝑣𝑎𝑟_ψ_𝑖^𝑙) dψ _i^l = 2/4√(2π𝑣𝑎𝑟_ψ_𝑖^𝑙)×√(2𝑣𝑎𝑟_ψ_𝑖^𝑙)∫_ - ∞^√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)exp( - t^2) dt = 1/4[1 + 𝑒𝑟𝑓( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8))], where t = ( ψ _i^l+𝑣𝑎𝑟_ψ_𝑖^𝑙/2)/√(2𝑣𝑎𝑟_ψ_𝑖^𝑙), and dψ _i^l = √(2𝑣𝑎𝑟_ψ_𝑖^𝑙) dt. Follow the same principle of (<ref>) and (<ref>), we can deduct the terms of (<ref>.a) as 1/2√(2π𝑣𝑎𝑟_ψ_𝑖^𝑙)∫_0^∞exp( - ( ψ _i^l + 𝑣𝑎𝑟_ψ_𝑖^𝑙/2)^2/2𝑣𝑎𝑟_ψ_𝑖^𝑙) dψ _i^l = 2/4√(π)∫_√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)^∞exp( - t^2) dt = 1/4𝑒𝑟𝑓𝑐( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)), with t = ( ψ _i^l+𝑣𝑎𝑟_ψ_𝑖^𝑙/2)/√(2𝑣𝑎𝑟_ψ_𝑖^𝑙), and 1/2√(2π𝑣𝑎𝑟_ψ_𝑖^𝑙)∫_ - ∞^0 exp( - ( ψ _i^l - 𝑣𝑎𝑟_ψ_𝑖^𝑙/2)^2/2𝑣𝑎𝑟_ψ_𝑖^𝑙) dψ _i^l = 2/4√(π)∫_√(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)^∞exp( - t^2) dt = 1/4𝑒𝑟𝑓𝑐( √(𝑣𝑎𝑟_ψ_𝑖^𝑙/8)), with t = ( 𝑣𝑎𝑟_ψ_𝑖^𝑙/2-ψ _i^l)/√(2𝑣𝑎𝑟_ψ_𝑖^𝑙). 1 1:6G W. Tong and P. Zhu, Eds., 6G: The Next Horizon: From Connected People and Things to Connected Intelligence. Cambridge: Cambridge University Press, 2021. 2:MIMO D. Gesbert, M. Shafi, Da-shan Shiu, P. J. Smith and A. Naguib, “From theory to practice: an overview of MIMO space-time coded wireless systems," IEEE J. Select. Areas Commun., vol. 21, no. 3, pp. 281-302, April 2003. 3:m-MIMO E. G. Larsson, O. Edfors, F. Tufvesson and T. L. Marzetta, “Massive MIMO for next generation wireless systems," IEEE Commun. Mag., vol. 52, no. 2, pp. 186-195, February 2014. 4:ML Xu Zhu and R. D. Murch, “Performance analysis of maximum likelihood detection in a MIMO antenna system," IEEE Trans. Commun., vol. 50, no. 2, pp. 187-191, February 2002. 5:LR D. Wubben, R. Bohnke, V. Kuhn and K. -D. Kammeyer, “Near-maximum-likelihood detection of MIMO systems using MMSE-based lattice- reduction," 2004 IEEE International Conference on Communications (IEEE Cat. No.04CH37577), Paris, France, 2004, pp. 798-802 Vol.2. 6:LRa Huan Yao and G. W. Wornell, “Lattice-reduction-aided detectors for MIMO communication systems," Global Telecommunications Conference, 2002. GLOBECOM '02. IEEE, Taipei, Taiwan, 2002, pp. 424-428 vol.1. 7:FGMP F. R. Kschischang, B. J. Frey and H. -A. Loeliger, “Factor graphs and the sum-product algorithm," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 498-519, February 2001. 8:FGEP S. Wu, L. Kuang, Z. Ni, J. Lu, D. Huang and Q. Guo, “Low-complexity iterative detection for large-scale multiuser MIMO-OFDM systems using approximate message passing," IEEE J. Sel. Top. Signal Process., vol. 8, no. 5, pp. 902-915, October 2014. 9:FGEPCMP X. Tan, Y. -L. Ueng, Z. Zhang, X. You and C. Zhang, “A low-complexity massive MIMO detection based on approximate expectation propagation," IEEE Trans. Veh. Technol., vol. 68, no. 8, pp. 7260-7272, August 2019. 10:UNCSI O. Oyman, R. U. Nabar, H. Bolcskei and A. J. Paulraj, “Characterizing the statistical properties of mutual information in MIMO channels," IEEE Trans. Signal Process., vol. 51, no. 11, pp. 2784-2795, November 2003. 11:IIDMIMO Zhengdao Wang and G. B. Giannakis, “Outage mutual information of space-time MIMO channels," IEEE Trans. Inf. Theory, vol. 50, no. 4, pp. 657-662, April 2004. 12:CORREI L. Musavian, M. R. Nakhai, M. Dohler and A. H. Aghvami, “Effect of channel uncertainty on the mutual information of MIMO fading channels," IEEE Trans. Veh. Technol., vol. 56, no. 5, pp. 2798-2806, September 2007. 13:mMIMOAna P. Yang, Q. Zou and H. Yang, “Message passing based calculation of MI and MMSE matrix for massive MIMO systems with finite-alphabet inputs," IEEE Commun. Lett., vol. 25, no. 12, pp. 3824-3828, December 2021. 14:MATADC H. Gao, K. Xiao, B. Xia and Z. 
Chen, “Mutual information analysis of mixed-ADC MIMO systems over Rayleigh channels based on random matrix theory," IEEE Trans. Wirel. Commun., vol. 19, no. 7, pp. 4894-4906, July 2020. 15:BLAST P. W. Wolniansky, G. J. Foschini, G. D. Golden and R. A. Valenzuela, “V-BLAST: an architecture for realizing very high data rates over the rich-scattering wireless channel," 1998 URSI International Symposium on Signals, Systems, and Electronics. Conference Proceedings (Cat. No.98EX167), Pisa, Italy, 1998, pp. 295-300. 16:VBLAST S. Stiglmayr, J. Klotz and M. Bossert, “Mutual information of V-BLAST transmission," 2008 IEEE International Symposium on Wireless Communication Systems, Reykjavik, Iceland, 2008, pp. 468-472. 17:STAIR F. Jiang, C. Li, Z. Gong and R. Su, “Extrinsic information analysis of a new iterative method using the stair matrix for massive MIMO uplink signal detection," IEEE Wireless Commun. Lett., vol. 7, no. 6, pp. 1022-1025, December 2018. 18:EXITMIMO T. Abiko et al., “An EXIT chart analysis for belief-propagation based detection in a large-scale MIMO system," 2013 IEEE 77th Vehicular Technology Conference (VTC Spring), Dresden, Germany, 2013, pp. 1-5. 19:EXITMIMO2 H. Li, J. Guo, X. Wang, C. Cao and Z. Fei, “EXIT-aided scheduled iterative MIMO detection under non-homogeneous antenna propagation gain scenarios," IEEE Trans. Veh. Technol., vol. 71, no. 10, pp. 10600-10614, October 2022. 20:Conver Paul A. Samuelsos, “A convergent iterative process,” J. Math. Phys., vol. 24, no. 1-4, pp. 131-134, April 1945. 21:Jfunc Stephan ten Brink, “Convergence of Iterative Decoding,” Electronics Letters, vol. 35, no.10, May 1999. 22:ERF Andrews, Larry C, “Special functions of mathematics for engineers,” British, SPIE Optical Engineering Press, 1998. 23:MIMOR S. Yang and L. Hanzo, “Fifty years of MIMO detection: The road to large-scale MIMOs,” IEEE Commun. Surv. Tutor., vol. 17, no. 4, pp. 1941-1988, Fourthquarter 2015. 24:MICAL Thomas M. Cover and Joy A. Thomas, “Elements of information theory: Differential entropy,” John Wiley & Sons, Ltd, pp. 243-259, 2005.
http://arxiv.org/abs/2307.01046v1
20230703142347
A Fine-Grained Classification of the Complexity of Evaluating the Tutte Polynomial on Integer Points Parameterized by Treewidth and Cutwidth
[ "Isja Mannens", "Jesper Nederlof" ]
cs.CC
[ "cs.CC", "cs.DS" ]
We give a fine-grained classification of evaluating the Tutte polynomial T(G;x,y) on all integer points on graphs with small treewidth and cutwidth. Specifically, we show for any point (x,y) ∈ℤ^2 that either * T(G; x, y) can be computed in polynomial time, * T(G; x, y) can be computed in 2^O(tw)n^O(1) time, but not in 2^o(ctw)n^O(1) time assuming the Exponential Time Hypothesis (ETH), * T(G; x, y) can be computed in 2^O(tw log tw)n^O(1) time, but not in 2^o(ctw log ctw)n^O(1) time assuming the ETH, where we assume tree decompositions of treewidth tw and cutwidth decompositions of cutwidth ctw are given as input along with the input graph on n vertices and point (x,y). To obtain these results, we refine the existing reductions that were instrumental for the seminal dichotomy by Jaeger, Welsh and Vertigan [Math. Proc. Cambridge Philos. Soc'90]. One of our technical contributions is a new rank bound of a matrix that indicates whether the union of two forests is a forest itself, which we use to show that the number of forests of a graph can be counted in 2^O(tw)n^O(1) time. § INTRODUCTION We study the parameterized complexity of computing the Tutte Polynomial. The Tutte polynomial is a graph invariant that generalizes any graph invariant that satisfies a linear deletion-contraction recursion. Such invariants include the chromatic, flow and Jones polynomials, as well as invariants that count structures such as the number of forests or the number of spanning subgraphs. Due to its generality the Tutte polynomial is of great interest to a variety of fields, including knot theory, statistical physics and combinatorics. For a number of these fields it is important to understand how difficult it is to compute the Tutte polynomial. A series of papers, culminating in the work by Jaeger, Vertigan, and Welsh <cit.>, has given a complete dichotomy showing that the problem of evaluating the Tutte polynomial is #P-hard on all points except on the following special points, on which it is known to be computable in polynomial time: (1, 1), (-1, -1), (0, -1), (-1, 0), (i, -i), (-i, i), (j, j^2), (j^2, j), H_1 where j = e^2π i/3 and i=√(-1), and H_α denotes the hyperbola {(x,y): (x-1)(y-1) = α}. These hyperbolic curves turn out to be of great importance to understanding the complexity of the Tutte Polynomial, as the problem is generally equally hard on all points of the same curve, except for the special points listed in (<ref>). 
Further refinements of the result by <cit.> have since been made: Among others, a more fine-grained examination of the complexity was done by Brand et al. <cit.> (building on earlier work by Dell et. al. <cit.>): they showed that for almost all points the Tutte polynomial cannot be evaluated in 2^o(n) time on n-vertex graphs, assuming (a weaker counting version of) the Exponential Time Hypothesis. This is tight because, on the positive side, Björklund et al. <cit.> showed that the Tutte polynomial can be evaluated on any point in 2^nn^O(1) time. Another perspective worth examining is that of the parameterized complexity of the problem, when parameterized by width measures. This is a rapidly evolving field within parameterized complexity.[For example, the biennial Workshop on Graph Classes, Optimization, and Width Parameters (GROW) already had its 10'th edition recently <https://conferences.famnit.upr.si/event/22/>.] Intuitively, it is concerned with the effects of structural properties of the given input graph on its complexity. This often generates results that have greater practical value and give a deeper understanding of the problem, in comparison with classical worst-case analysis. It is therefore natural to ask what a complexity classification for the Tutte Polynomial would look like in this parameterized context. For the specific subject of evaluating the Tutte polynomial parameterized by width measures, research has already been done in this area over twenty years ago: Noble <cit.> has given a polynomial time algorithm for evaluation the Tutte Polynomial on bounded treewidth graphs. Noble mostly focused on the dependence on the number of vertices and edges, and showed each point of the Tutte polynomial can be evaluated in linear time, assuming the treewidth of the graph is constant. See also an independently discovered (but slower) algorithm by Andrzejak <cit.>. However, this glances over the exponential part of the runtime, i.e. the dependence on the treewidth. Since this is typically the bottleneck, recent work aims to refine our understanding of this exponential dependence with upper and lower bounds on complexity of the problem in terms of this parameter that match in a fine-grained sense. In this work, we extend this research line and determine the fine-grained complexity for each integer point (x,y) of the problem of evaluating the Tutte polynomial (x,y). As was done in previous works, we base our lower bounds on the Exponential Time Hypothesis (ETH) and the Strong Exponential Time Hypothesis (SETH) formulated by Impagliazzo and Paturi <cit.>. For a given width parameter k, the former will be used to exclude run times of the form k^o(k)n^O(1), while the latter will be used to exclude run times of the form (c-ϵ)^kn^O(1) for some constant c and any ϵ >0. Specifically we consider the treewidth, pathwidth and cutwidth of the graph. The first two, in some sense, measure how close the graph is to looking like a tree or path respectively. The cutwidth measures how many edges are layered on top of each other when the vertices are placed in any linear order. We will more precisely define these parameters in the preliminaries. Width measures in particular are interesting because instances where such structural parameters are small come up a lot in practice. For example, the curve H_2 corresponds to the partition function of the Ising model, which is widely studied in statistical physics, on graphs with particular topology such as lattice graphs or open/closed Cayley trees (<cit.>). 
In all such graphs with n vertices, even the cutwidth (the largest parameter we study) is at most O(√(n)). §.§ Our contributions Our classification handles points (x,y) differently based on whether (x-1)(y-1) is negative, zero or positive, and reads as follows: Let G be a graph with given tree, path and cut decompositions of width , and respectively. Let (x,y) ∈ℤ^2 be a non-special point, then up to some polynomial factor in |G|, the following holds. * If (x-1)(y-1) < 0 or x = 1, then T(G; x, y) can be computed in time ^O() and cannot be computed in time ^o() under ETH. * If y = 1, then T(G; x, y) can be computed in time O(4^) or O(64^) and cannot be computed in time 2^o() under ETH. * If (x-1)(y-1) = q > 1, then T(G; x, y) can be computed in time O(q^). Furthermore, * if x ≠ 0, then T(G; x, y) cannot be computed in time O((q-ϵ)^) under SETH. * if x=0, then T(G; x, y) cannot be computed in time O((q-ϵ)^) and O(q-ϵ)^/2) under SETH. This is a fine-grained classification for evaluating the Tutte polynomial at any given integer point, simultaneously for all the parameters treewidth, pathwidth and cutwidth. This is because if a graph has cutwidth , pathwidth and treewidth , then ≤≤. Our result implies that, for evaluating the Tutte polynomial at a given integer point, it does not give a substantial advantage to have small cutwidth instead of small treewidth. This is somewhat surprising since, for example, for computing the closely related chromatic number of a graph there exists a 2^ n^O(1) time algorithm, but any ^o() n^O(1) time algorithm would contradict the ETH <cit.>. Of particular interest are the upper bounds in Case for the points {(x,y): y = 1}, which are closely related to the problem of computing the number of forests in the input graph. One reason why this results stands out in particular is that it indicates an inherent asymmetry between the x- and y-axes, in this parameterized setting. In the general setting, problems related to the Tutte Polynomial often have a natural dual problem, which one can obtain by interchanging the x- and y-coordinates. For example the chromatic polynomial can be found (up to some computable term f) as χ_G(λ) = f(λ) T(1-λ, 0), while the flow polynomial can be found as C_G(λ) = g(λ) T(0, 1-λ). These two problems are equivalent on planar graphs, in the sense that the chromatic number of a planar graph is equal to the flow number of its dual graph. We note that for this curve we have an ETH bound, while for the other results of the form c^n^O(1) we have a stronger SETH bound. We suspect that a (4-ϵ)^n^O(1) lower bound for any ϵ >0, based on SETH, also holds for evaluating T(G; 2, 1), but that it will take significant additional technical effort. Techniques In order to get the classification, our first step follows the method of <cit.> to reduce the evaluation of T(G; x, y) for all points in hyperbola H_α = { (x,y) : (x-1)(y-1)=α} to the evaluation to a single point in H_α. This is achieved in <cit.> by some graph operations (stretch and thickening), but these may increase the involved width parameters. We refine these operations in Section <ref> to avoid this. With this step being made, several cases of Theorem <ref> then follow from a combination of new short separate and non-trivial arguments and previous work (including some very recent work such as <cit.>). However, for the upper bound in Case of Theorem <ref>, our proof is more involved. To get our upper bound, we introduced the forest compatibility matrix. 
Its rows and columns are indexed with forests (encoded as partitions indicating their connected components). An entry in this matrix indicates whether the union of the two forests forms a forest itself. This matrix is closely related to matrices playing a crucial role in the Cut and Count method <cit.> and the rank based method <cit.> to quickly solve connectivity problems on graphs of small treewidth. However, the previous rank upper bounds do not work for bounding the rank of the forest compatibility matrix over the reals, since we check for acyclicity instead of connectivity. We nevertheless show that the rank of this matrix is at most 4^n; in fact the rows corresponding to non-crossing partitions span its row space. We prove this via an inductive argument that is somewhat similar to the rank bound of 2^n/2-1 of the matchings connectivity matrix over GF(2) from <cit.>. Subsequently, we show how to use this insight to get a 2^O(tw) algorithm to evaluate T(G; 2,1) (i.e. counting the number of forests). §.§ Organization The remainder of this paper supports Theorem <ref>. In Section <ref> we describe some preliminaries. In Section <ref> we show how to reduce the task of computing all points along a hyperbola curve to a single point. We now describe where each part of Theorem <ref> can be found in the paper. The lower bound for the case (x-1)(y-1) < 0 or x = 1 is given in Theorems <ref> and <ref>. The upper bound for this case is given in Theorem <ref>. The lower bound for the case y = 1 is by Dell et al. <cit.>. The upper bound for this case is given in Section <ref> (specifically, Theorems <ref> and <ref>). The lower bound for the case (x-1)(y-1) = q > 1 is given in Theorem <ref> (for q=2) and Theorem <ref> (for q>2). The upper bound for this case is given in Theorem <ref> (for q=2) and Theorem <ref> (for q>2). § PRELIMINARIES Computational Model In this paper we frequently have real (and some intermediate lemmas are even stated for complex) numbers as intermediate results of computations. However, as is common in this area, we work in the word RAM model in which all basic arithmetic operations with such numbers can be done in constant time, and therefore this does not influence our running time bounds. Interpolation Throughout this paper we will use interpolation to derive a polynomial, given a finite set of evaluations of said polynomial. For our purposes it suffices to note that this can be done in polynomial time, for example by solving the system of linear equations given by the Vandermonde matrix and the evaluations (see e.g. <cit.>). Given pairs (x_0, y_0), …, (x_d, y_d), there exists an algorithm which computes the unique degree d polynomial p such that p(x_i) = y_i for i = 0, …, d and runs in time O(d^3). §.§ The Tutte polynomial There are multiple ways of defining the Tutte polynomial. In this paper we will only need the following definition: T(G; x,y) = ∑_A ⊆ E (x-1)^k(A) - k(E) (y-1)^k(A) + |A| - |V|, where k(A) denotes the number of connected components of the graph (V,A). We will often use the following notation H_α = {(x,y) : (x-1)(y-1) = α}. Note that these curves form hyperbolas and that for α = 0 the hyperbola collapses into two orthogonal, straight lines. We refer to these two lines as the separate curves H_0^x = {(x,y) : x = 1 }, H_0^y = {(x,y) : y = 1 }. Throughout the paper we will refer to the problem of finding the value of T(G; a,b) for an individual point as computing the Tutte polynomial on (a,b). We will often restrict the Tutte polynomial to a one-dimensional curve H_α.
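Before turning to these curves, the subset expansion above can be made concrete with a small brute-force sketch (exponential in |E| and intended only as a reference on small graphs; the edge-list representation and function names below are our own choices, not part of the paper):

```python
from itertools import combinations

def components(n, edges):
    # Number of connected components of the graph ([0..n-1], edges), via union-find.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    comps = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return comps

def tutte(n, edges, x, y):
    # Subset expansion: T(G; x, y) = sum over A of (x-1)^(k(A)-k(E)) * (y-1)^(k(A)+|A|-|V|).
    k_E = components(n, edges)
    total = 0
    for r in range(len(edges) + 1):
        for A in combinations(range(len(edges)), r):
            sub = [edges[i] for i in A]
            k_A = components(n, sub)
            total += (x - 1) ** (k_A - k_E) * (y - 1) ** (k_A + len(sub) - n)
    return total

triangle = [(0, 1), (1, 2), (0, 2)]
print(tutte(3, triangle, 1, 1))  # 3 spanning trees of the triangle
print(tutte(3, triangle, 2, 2))  # 2^|E| = 8 spanning subgraphs
```

We now return to the restriction of the Tutte polynomial to a single curve H_α.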
Note that in this case the polynomial can be expressed as a univariate polynomial T_α(G;t) := T(G; α/t + 1, t + 1). We will refer to the problem of finding the coefficients of T_α as computing the Tutte polynomial along H_α. trg:H1PolAs mentioned in the introduction, the Tutte polynomial is known to be computable in polynomial time on the points (1,1), (-1,-1), (0,-1), (-1,0), (i, -i), (-i, i), (j, j^2), (j^2, j) and along the curve H_1 and it is #P to evaluate it on any other point. We call the points listed in (<ref>), along with the points on the curve H_1 special points. See <cit.> for more details. §.§ Width measures We consider the width measures treewidth, pathwidth and cutwidth of a graph G (denoted respectively with tw(G), pw(G) and ctw(G)), defined as follows: Treewidth and pathwidth A tree decomposition of a graph is given by a tree 𝕋 and a bag B_x ⊆ V for each x ∈ V(𝕋), with the following properties. * For every v ∈ V(G) there is some x such that v ∈ B_x. * For every uv ∈ E(G) there is some x such that u,v ∈ B_x. * For every v ∈ V(G), the set {x ∈ V(𝕋) : v ∈ B_x} induces a subtree of 𝕋. The width of such a decomposition is defined as max_x(|B_x|) - 1 and the treewidth of a graph is defined as the minimum width among its tree decompositions. The pathwidth of a graph is defined in a similar way, except 𝕋 has to be a path instead of a tree. We will often think of these decompositions as being rooted at some node r and will refer to the neighbour x of y on the unique path from y to r as the parent of y and to y as the child of x. We will refer to set of nodes y whose unique y-r path visits x as the descendants[Note that under this definition, x is a descendant of itself.] of x. The union of the bags corresponding to descendants of x will be denoted G_x and we will refer to it as the part of the graph G that lies below x. Finally we note that we may assume that we are given a so called nice decomposition. A nice tree decomposition contains only the following types of bags. * Leaf bag: B_x = ∅ and x has no children. * Node-forget bag: B_x = B_y ∖{v}, where y is the unique child of x and v ∈ B_y. * Node-introduce bag: B_x = B_y ∪{v}, where y is the unique child of x and v ∈ V(G) ∖ B_y. * Join bag: B_x = B_y_1 = B_y_2, where y_1 and y_2 are the two children of x. We may also assume that the decomposition has so called edge-introduce bags. These have a unique child with the same bag, however they are labeled with an edge between two vertices in its bag. The idea behind this is that we can pretend like an edge doesn't exist, until it gets introduced by some bag. We will assume that edges are always introduced exactly once. Cutwidth A cut decomposition of a graph G is simply an ordering v_1, …, v_n of the vertex set. The width of such a decomposition is defined as the maximum number of edges 'crossing' any cut of the ordering. Formally for an integer i we say an edge v_jv_l crosses the i-th cut, if j ≤ i < l. Again, the cutwidth of G is the minimum width among all cut decompositions. §.§ Brylawski's tensor product formula In Section <ref> we will make use of Brylawski's tensor product formula <cit.> to reduce the computation of T(G;x,y) to that of T(G;x',y') for some other point (x', y'). The original formula is formulated in terms of pointed matroids, however we will only need the formulation for (multi)graphs. Before we can state the formula, we first need to introduce some notation. 
Given graphs G and H, where an edge e ∈ E(H) is labeled as a special edge, we define the pointed tensor product[Note that this is different from the standard tensor product for graphs.] G ⊗_e H of G and H as the graph given by the following procedure. For every edge f ∈ E(G) we first create a copy H_f of H, then identify f with the copy of the edge e in H_f and finally remove the edge f (and thus also the edge e) from the graph. Intuitively it might be easier to think of this product as replacing every edge of G with a copy H ∖ e, where two of the vertices in H are designated as gluing points. For example one could replace every edge with a path of length k by taking as H the cycle C_k+1 on k+1 vertices, as seen in figure <ref>. Note that this is not always well-defined, as one can choose which endpoint is identified with which. It turns out that this choice does not affect the graphic matroid of G ⊗_e H and thus it does not affect the resulting Tutte polynomial. In this paper we will only consider graphs H that are symmetric over e and thus the product is actually well-defined. We are now ready to state Brylawski's tensor product formula. Let T_C and T_L be the unique polynomials that satisfy the following system of equations (x-1)T_C(H; x, y) + T_L(H; x, y) = T(H∖ e; x, y) T_C(H; x, y) + (y-1)T_L(H; x, y) = T(H/ e; x, y). We define x' = T(H\ e; x, y)/T_L(H; x, y) y' = T(H/ e; x, y)/T_C(H; x, y). Let n = |V(H)|, m = |E(H)| and k = k(E(H)). Brylawski's tensor product formula states that T(G ⊗_e H; x, y) = T_C(H; x, y)^m - n + k T_L(H; x, y)^n - k T(G; x', y'). § REDUCING ALONG THE CURVE H_Α In this section we describe how we can lift hardness results from a single point (a, b) ∈ H_α to the whole curve H_α. We summarize the results from this section in the following theorem. Let (a,b) ∈ℂ^2. Also let T(G; x, y) be the Tutte polynomial of G and α := (a-1)(b-1). There exists a polynomial time reduction from computing T on (a,b) for graphs of given tree-, path- or cutwidth, to computing T along H_α for graphs with the following width parameters. * If |a| ∉{0,1} or if |b| ∉{0,1} and a ≠ 0, then the treewidth remains (G). The cutwidth and pathwidth become at most (G) + 2 and (G) + 2 respectively. * If |b| ∉{0,1} and a = 0, then the treewidth remains (G). The pathwidth becomes at most (G) + 2 and the cutwidth becomes at most 2(G). * If |a|, |b| ∈{0,1}, then the treewidth remains (G). The pathwidth becomes at most (G) + 2 and the cutwidth becomes at most 12(G). Theorem <ref> lets us lift both algorithms and lower bounds from a point (a,b) to the whole curve H_α. While our main theorem only requires Theorem <ref> to be stated for integer valued points, we will state it as the most general version we can prove. We note that for Case of Theorem <ref>, we do not care too much about constant multiplicative factors in the cutwidth, since we have an ETH bound of the form (G)^o((G)). For Case we only need the bounds on the treewidth and pathwidth. Thus the blowup in the cutwidth is only relevant for Case . In this case the only integer valued points that fall under the third item of Theorem <ref> are (-1, 0), (0,-1) and (-1,-1). These are all special points, which means that this item is not relevant for Case . In our proofs we will make use of the following transformations. Let G be a simple graph. We define the k-stretch ^kG of G as the graph obtained by replacing every edge by a path of length k. We define the k-thickening _kG of G as the graph obtained by replacing every edge by k parallel edges. 
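For concreteness, a minimal sketch of these two operations on a graph given as an edge list might look as follows (the representation and names are our own; the insulated variant defined next combines the two):

```python
def k_stretch(n, edges, k):
    # Replace every edge by a path with k edges, introducing k-1 fresh vertices per edge.
    new_edges, next_vertex = [], n
    for u, v in edges:
        path = [u] + [next_vertex + i for i in range(k - 1)] + [v]
        next_vertex += k - 1
        new_edges += list(zip(path, path[1:]))
    return next_vertex, new_edges

def k_thickening(edges, k):
    # Replace every edge by k parallel copies (the edge list is treated as a multiset).
    return [e for e in edges for _ in range(k)]

# A triangle stretched by 2 becomes a 6-cycle; thickened by 3 it has 9 parallel edges.
print(k_stretch(3, [(0, 1), (1, 2), (0, 2)], 2))
print(len(k_thickening([(0, 1), (1, 2), (0, 2)], 3)))  # -> 9
```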
A new variant we introduce to keep the cutwidth low is defined as follows: We define the insulated k-thickening _(k)G as the graph obtained by replacing every edge by a path of length 3 and then replacing the middle edge in each of these paths by k parallel edges. §.§ Effect on width parameters We now give three lemmas that show how these transformations effect the parameters we use. Let G be a graph. Then we have that (^kG) ≤(G), (_kG) ≤(G) and (_(k)G) ≤(G). First note that parallel edges do not affect the treewidth of a graph, since any bag covering one of these edges will necessarily cover all of them. This means that the original tree decomposition is also a tree decomposition for the k-thickening _kG. It also means that for the purposes of finding a tree decomposition, the insulated k-thickening is equivalent to a 3-stretch. It remains to show that the k-stretch does not increase the treewidth. Note that (G) = 1 if and only if G is a tree. Since the k-stretch of a tree is also a tree, we find (^kG) = 1. Now suppose that (G) ≥ 2. We will show that subdividing an edge does not affect the treewidth of the graph. By repeatedly subdividing edges we then find that the treewidth of the k-stretch ^kG is at most that of G. Let uv ∈ E(G) and let G' be the graph obtained by subdividing uv into uw and wv. Let x be some node in the tree decomposition of G such that u,v ∈ B_x. We create a tree decomposition of G' by adding a node x', with a corresponding bag B_x' = {u, v, w}, and connecting x' to x. It is easy to see that the resulting decomposition is still a tree decomposition. Let G be a graph. Then we have that (^kG) ≤(G) + 2, (_kG) ≤(G) and (_(k)G) ≤(G) + 2. Like in the previous proof, we note that parallel edges do not affect the pathwidth of a graph. This means that the original path decomposition is also a path decomposition for the k-thickening _kG. Again, this also means that for the purposes of finding a path decomposition, the insulated k-thickening is equivalent to a 3-stretch. It remains to show that the k-stretch does not increase the pathwidth by more than an additive factor of 2. Suppose we are given a path decomposition of G, of width (G). Whenever a new vertex v is introduced in a bag B_x, in the path decomposition of G, we add the following bags. For each edge uv ∈ E(G) such that u ∈ B_x let w_1^uv, …, w_k^uv = v be the path replacing uv in ^kG. We add the bags B_x ∪{w_i^uv, w_i+1^uv} for i = 1, k in order, one path at a time. Clearly the decomposition has width (G) + 2 and every vertex and edge of ^kG is covered. Since the intermediate vertices on the paths only appear in two consecutive bags and vertex from G is retained until all paths have been covered, no vertices are forgotten and then reintroduced. We find that it is a valid path decomposition and thus (^kG) ≤(G) + 2. Let G be a graph. Then we have that (^kG) ≤(G), (_kG) ≤ k(G) and (_(k)G) ≤(G) + k - 1. In each case we will show that, given a cut decomposition of width (G), we can construct a decomposition that respects the given bounds. Let π = (v_1, …, v_n) be a cut decomposition of G, of width (G). First note that the given decomposition already gives a decomposition of _kG, of width k(G). Next, we will examine the k-stretch ^kG. We will show that subdividing an edge does not increase the cutwidth. By repeatedly subdividing edges we then find that the k-stretch does not have larger cutwidth. Let uv ∈ E(G) such that u < v in a given cut decomposition π = (v_1, …, v_n) of G. 
We create a new graph G' from G, by adding a vertex w, edges uw and wv, and removing the edge uv. We construct a cut decomposition π' of G' as follows. If v_i, v_j ∈ V(G), such that i < j (in π), then we also set v_i < v_j in π'. For w, we set w < v_i if u < v_i in π and v_i < w otherwise. Note that for any cut of the decomposition, we have one of the following situations. (i) The cut appears before u or after v, in which case the cut contains the same edges in both decompositions. (ii) The cut appears[In this case we use the convention that the cut immediately before and the cut immediately after w get associated with the same cut π.] between u and v, in which case it contains uv in π and either uw or wv in π', but not both. In either case we find that the cut has not increased in width in π' and thus the decomposition has the same width. Finally we examine the insulated k-thickening. As seen before we can subdivide edges without increasing the cutwidth. We will first create the 3-stretch of G, by subdividing each edge twice. We will take special care to fully subdivide an edge before moving on to the next one, so that the two new vertices on the edge appear next to each other in the cut decomposition. We then replace the middle edge of each created 3-path with k parallel edges. Since the endpoints of any such bundle of edges are next to each other in the cut decomposition, each cut contains at most one bundle and thus we increase the cutwidth by at most k-1. We remark that the only significant blowup is that of the cutwidth, when applying the k-thickening. We will therefore limit our use of this transformation as much as possible. §.§ Reductions We can now prove Theorem <ref>. We will split the theorem into multiple separate cases and prove each case as a separate lemma. As such the proof of Theorem <ref> simply consists of Lemmas <ref>, <ref>, <ref> and <ref>. Note that the last point follows from first applying Lemma <ref> and then one of the other lemmas. Let (a,b) ∈ℂ^2 be a point with |a| ∉{0,1}. Also let T(G; x, y) be the Tutte polynomial of G and α := (a-1)(b-1). There exists a polynomial time reduction from computing T on (a,b) for graphs of given tree-, path- or cutwidth, to computing T along H_α for graphs with the following with parameters. The treewidth and cutwidth remain (G) and (G) respectively. The pathwidth becomes at most (G) + 2. We prove this lemma using essentially the same proof as given in <cit.>. Note that in our setting we use Lemmas <ref>, <ref> and <ref> to ensure that relevant parameters are not increased by the operations we perform. By Brylawski's tensor product formula <cit.>, we find the following expression for the k-stretch of the graph G (1 + a + … + a^k-1)^k(E)T(G; a^k, b + a + … + a^k-1/1 + a + … + a^k-1) = T(^kG; a, b). Note that a^k - 1 = (1 + a + ⋯ + a^k-1)(a-1) and b + a + … + a^k-1/1 + a + … + a^k-1 - 1 = b -1/1 + a + … + a^k-1. We find that the point on which we evaluate T(G) in (<ref>) also lies on H_α. By examining the formula for the Tutte polynomial, we find that for n = |V(G)| the degree of the Tutte polynomial is at most n^2 + n. By choosing k = 0, …, n^2 + n, since |a| ∉{0,1}, we can find T(G; x,y), for n^2 + n+1 different values of (x, y) ∈ H_α. By lemma <ref>, we can now interpolate the univariate restriction T_α(G; t) = T(G; α/t + 1, t + 1). of T(G) along H_α. Note that by Lemmas <ref> and <ref> the k-stretch preserves both the cutwidth and the treewidth of the graph and by Lemma <ref> the pathwidth increases by a constant factor. 
We find that any fine-grained parameterized lower bound for H_α extends to points (a,b). The next lemma is proven in a similar way, however it takes a bit more effort to make the numbers line up. Let (a,b) ∈ℂ^2 be a point with |b| ∉{0,1} and a ≠ 0. Also let T(G; x, y) be the Tutte polynomial of G and α := (a-1)(b-1). There exists a polynomial time reduction from computing T on (a,b) for graphs of given tree-, path- or cutwidth, to computing T along H_α for graphs with the following with parameters. The treewidth remains (G). The cutwidth and pathwidth become at most and (G) + 2 and (G) + 2 respectively. By Lemma <ref> we may assume that |a| = 1. In the case that a = 1 we can still use the k-stretch, since (1 + a + … + a^k-1)^-k(E)T(^kG; a, b) = T(G; a^k, b + a + … + a^k-1/1 + a + … + a^k-1) = T(G; 1, b + k - 1/k) = T(G; 1, b - 1/k + 1). Where the first equality is (<ref>). Since b ≠ 1 we can find arbitrarily many points on the curve H_0^x this way. By lemma <ref> we can interpolate to find the T(G) on the whole curve. In the remaining case, i.e. 0 ≠ a ≠ 1, we use the insulated k-thickening. This results in the following transformation. T(_(k)G; a, b) = ((a+1)(1 + b + … + b^k-1) + a^2)^k(E)(1 + b + … + b^k-1)^|V| - k(E)T(G; A, B ) where A = a^2(1 + a-1/1 + b + … + b^k-1) = a^2(a + b + … + b^k-1/1 + b + … + b^k-1) B = 1 + b^k - 1/(a + 1)(1 + b + … + b^k-1) + a^2. Which allows us to move to a point with |A| ∉{0,1}, assuming that |b| ∉{ 0,1} and 1 ≠ a ≠ 0. Note that by Lemma <ref> this transformation only increases the cutwidth by an additive factor of k-1. It is not too dificult to see that it suffices to take either k = 2 or k = 3. Suppose that k = 2 does not work, then we have |a + b/1 + b| = 1. From this we can deduce that b = c · a^1/2 for some c ∈ℝ. Now suppose that in this case k = 3 also does not work. We then find that |a + c· a^1/2 + c^2· a/1 + c· a^1/2 + c^2· a| = |a^1/2 + c + c^2· a^1/2/1 + c· a^1/2 + c^2· a| = 1. By squaring this term and simplifying the resulting equations we find a = 1 and note that this case was handled previously. In the case that a = 0 (and b ≠ 0), we first use the 2-thickening to compute (1 + b)^|V| - k(E)T(G; a + b/1 + b, b^2) = T(_2G; a, b). and then continue with other transformations. Note that this approach increases the cutwidth by a factor of 2 and thus for any lower bound f() we would get on the curve H_α, we find a lower bound of f(/2) for the point (0,1 - α). We summarize this in the following Lemma. Let (a,b) ∈ℂ^2 be a point with |b| ∉{0,1} and a = 0. Also let T(G; x, y) be the Tutte polynomial of G and α := (a-1)(b-1). There exists a polynomial time reduction from computing T on (a,b) for graphs of given tree-, path- or cutwidth, to computing T along H_α for graphs with the following with parameters. The treewidth remains (G). The pathwidth becomes at most (G) + 2 and the cutwidth becomes at most 2(G). The remaining case concerns points where |a|, |b| ∈{0,1}. We show that we can reduce non-special points of this type to a point that is covered by one of the previous lemmas. Let (a,b) ∈ℂ^2, such that |a|, |b| ∈{0,1}. If (a,b) is not one of the 8 special points or on H_1, then there exists some transformation f and a computable function g, such that g(a,b) ≠ 0 and T(f(G); a, b) = g(a,b)T(G; a', b') where either |a'| ∉{0,1} or |b'| ∉{0,1} and such that (f(G)) ≤(G), (f(G)) ≤(G) + 2 and (f(G)) ≤ 6(G). We adapt a proof due to <cit.>. Suppose that for every such transformation f we have either |a'|, |b'| ∈{0,1} or a' and b' are not well-defined. 
We will show that in this case (a,b) must be one of the 8 special points or on H_1. First assume that |a| = |b| = 1. Note that applying the 2-stretch gives (a', b') = (a^2, b+a/1+a). By assumption, we have either a = -1 or |b+a/1+a| ∈{0,1}, which implies b = -a, b = a^2 or b = 1. Using the 2-thickening we find b = -1, a = -b, a = b^2 or a = 1. This reduces the list of possible points to (a,b) ∈ { (1,1), (-1,-1), (j, j^2), (j^2, j), (-1, i), (-1, -i), (i, -1), (-i, -1)}∪{ (a, -a): |a| = 1 } This list can be further reduced by applying the 3-stretch and 3-thickening to find that only (a,b) ∈ { (1,1), (-1,-1), (j, j^2), (j^2, j), (-i, i), (i, -i) } remain. Now suppose that |b| = 0. Note that, again applying the 2-stretch gives (a', b') = (a^2, a/1+a). By assumption we have either a = -1, a = 0 or |1 + a| = 1. In the last case we must have a ∈{0, j, j^2}. If a ∈{0, j, j^2}, then (a', b') ∈{(j, -j), (j^2, -j^2} and we may apply a 3-stretch or 3-thickening by the previous case. We find that the only points left are (-1,0) and (0,0), which lies on H_1. Using the 2-thickening and a further 3-stretch or 3-thickening we find that the only points of the form (0,b) are (0,0), (0,-1). Note that the worst blowup in the cutwidth occurs when we apply a 2-thickening, followed by a 3-thickening, which effectively results in a 6-thickening and thus a multiplicative blowup in the cutwidth of 6. § COUNTING FORESTS In this section we consider the problem of counting the number of forests in a graph. This problem corresponds to the point (2,1) and thus by Theorem <ref> any bounds found for this problem can be lifted to the whole curve H_0^y. We trivially get the following lower bound from existing bounds on the non-parameterized version of the problem <cit.>. Computing the Tutte polynomial along the curve H_0^y cannot be done in time 2^o((G)) n^O(1), unless #ETH fails. To complement this lower bound, we give an algorithm to count the number of forests in a graph G in c^(G) time. The algorithm uses a rank based approach, the runtime of which depends on the rank of the so called forest compatibility matrix. We start by introducing this matrix and examining its rank. §.§ Notation We will use the notation [n] = {1, …, n}. Unless stated otherwise, we will assume the set [n] to be ordered. For sets A, B ⊆ [n], we will write A < B to indicate that a < b for all a ∈ A and b ∈ B. We write π⊢ S to indicate that π is a partition of S. We write π|_S for the partition given by restricting elements of π to the set S ⊆ [n]. Given two partitions π_1 ⊢ S and π_2 ⊢ S, we say that π_1 is coarser than π_2, written π_1 ≥π_2, if every element of π_2 is subset of on element of π_1. Given two partitions π⊢ S and ρ⊢ S' we define the join π⊔ρ⊢ S ∪ S' of the partitions as the finest partition of S ∪ S' such that both (π⊔ρ)|_S ≥π and (π⊔ρ)|_S'≥ρ. Intuitively put π and ρ together and merge and overlapping elements. We will consider matrices indexed by partitions. We will write M[π, ρ] for the element in the row corresponding to π and the column corresponding to ρ. We will write M[π] for the vector containing all elements in the row corresponding to π. §.§ Rank bound In this section we prove the following theorem, for the so called forest compatibility matrix F_n. The rank of F_n is at most C_n, the n^th Catalan number. In particular (F_n) = O(4^n n^-3/2) Before we can define the forest compatibility matrix, we first need the following definitions. 
We say that a boundaried graph G = ([n] ∪ V, E), with boundary [n], is a representative forest for a partition π⊢ [n], if for every S ∈π there is some connected component C ⊆ V(G) such that C ∩ [n] = S. Given two boundaried graphs G and H, both with boundary B, we define the glue G ⊕ H of G and H as follows. First take the disjoint union of G and H. Then identify each v ∈ B in G with its analogue in H. This definition shows how one can relate forests and partitions. Throughout the section we will mostly consider partitions as they capture only the information we need. The following definition elaborates on this by lifting the concept of cycles in a clue of two trees to a cycle inducet by two partitions. Let π, ρ⊢ [n] and let G_π and G_ρ be representative forests of π and ρ respectively. We say that π and ρ induce a cycle if G_π⊕ G_ρ contains a cycle. It is not hard to see that it does not matter which representatives G_π and G_ρ we choose, since one only needs to know the connected components on [n]. This means that this definition is indeed well-defined. For this same reason, in the following definition, we only need a row and column for each partition of the separator. We define the forest compatibility matrix F_S of a set S by F_S[π, ρ] := 0 if π and ρ induce a cycle 1 otherwise for any π, ρ⊢ S. We will write F_n := F_[n]. Finally we will need the following definition to bound the rank of the forest compatibility matrix. We say that two sets A, B ∈π are crossing on an ordering <, if there are a_1, a_2 ∈ A and b_1, b_2 ∈ B such that a_1 < b_1 < a_2 < b_2 or b_1 < a_1 < b_2 < a_2. If a partition contains two crossing sets, we refer to it as a crossing partition. Throughout this section it will sometimes be convenient to think of the ordering as a permutation. The general idea behind the proof of Theorem <ref> is to show that any partition can be `uncrossed', i.e. its row in F_n can be written as a linear combination of rows, corresponding to non-crossing partitions. §.§.§ Manipulating partitions For the proof of Theorem <ref> we will need the following operations, which will allow us to manipulate partitions by contracting an expanding intervals and projecting down to subsets of the ground set. An interval is a subset I ⊆ [n] of consecutive numbers, i.e. there is no b ∉ I such that a_1 < b < a_2 for some a_1, a_2 ∈ I. Given an interval I and a partition π of [n], we define the contraction π -_i I of π by I as the partition of the set [n] -_i I := ([n]∪{i})∖ I given by we merging all sets that intersect I and replacing I by a single element i, i.e. π -_i I := {S ∈π : S ∩ I = ∅}∪{( ⋃{ S ∈π : S ∩ I ≠∅}∪{i}) ∖ I }. If we have an ordering on [n], we place i in the same place in the ordering as I, that is for any a ∈ [n] ∖ I and b ∈ I, we have a < b if and only if a < i. We define the blowup π +_i I of π by I as the partition of the set [n] +_i I := ([n]∪ I)∖{i}, given by adding all elements of I to the set that contains i and then removing i, i.e. π +_i I := {S ∈π : i ∉ S }∪{(S ∖{i}) ∪ I : i ∈ S}. Again we place I in the same place in the ordering as i. We will sometimes abuse notation and refer to [n] -_i I as simply [n'] for n' = n - |I| + 1. We now turn our attention to a number of useful lemmas. The first lemma intuitively says that if we contract an interval contained in some partition, then any decomposition of the resulting smaller partition gives the same decomposition of the larger partition. Let π be a partition of [n] and let I be an interval such that I ⊆ S ∈π. 
We set n' = n - |I| + 1. Suppose that for some set of partitions ℛ of [n'], we have F_n'[π -_i I] = ∑_ρ∈ℛ a_ρ F_n'[ρ]. Then F_n[π] = ∑_ρ∈ℛ a_ρ F_n[ρ +_i I]. Let χ be some partition of [n]. Note that if |S' ∩ I| ≥ 2 for some S' ∈χ, we have that F_n[π, χ] = F_n[ρ +_i I, χ] = 0. Thus we may assume that χ contains no such sets. Also note that if there is some cycle that only requires I and not the rest of S, then again we have that F_n[π, χ] = F_n[ρ +_i I, χ] = 0. Thus we may assume that any cycle induced by χ and π that has a set that intersects I, also requires a set that intersects S ∖ I, but not I. We now claim that for χ with the above assumptions we have F_n[ρ +_i I, χ] = F_n[ρ, χ-_i I] for any ρ. This would immediately imply that for such χ F_n[π, χ] = F_n[π -_i I, χ -_i I] = ∑_ρ∈ R a_ρ F_n[ρ, χ -_i I] = ∑_ρ∈ R a_ρ F_n[ρ +_i I, χ], which proves the lemma. First note that if ρ and χ-_i I induce a cycle, that does not involve i, then ρ +_i I and χ also induce that same cycle and vice versa. Now suppose that ρ +_i I and χ induce a cycle involving I, then there is some S' in the cycle that intersects I. By assumption there is also some set S”∈χ in the cycle, that intersects S ∖ I, but not I. W.l.o.g. the cycle does not loop back on itself and thus these sets are the only two in the cycle that intersect S. Note that S' gets merged into the set containing i, but S” does not. Since the rest of the cycle does not involve I, it is unaffected and thus the cycle remains intact after contraction. In the reverse direction we assume that ρ and χ-_i I induce a cycle involving i, then it is clear to see that this cycle survives after blowing up i, using one of the sets in χ that intersect I. This proves the claim and thus the lemma. This next lemma intuitively says that if we project our partition to a subset of the ground set, then any decomposition of the resulting smaller partition gives the same decomposition of the larger partition. Let π be a partition of [n] and let n' < n. Suppose that for some set of partitions ℛ of [n'], we have F_n'[π|_[n']] = ∑_ρ∈ℛ a_ρ F_n'[ρ], then F_n[π] = ∑_ρ∈ℛ a_ρ F_n[ρ⊔π|_[n]∖[n']]. Let χ be some partition of [n]. If χ and π|_[n]∖[n'] induce a cycle, then the statement trivially holds. In the rest of the proof we will therefore assume that any cycle induced by χ and π requires the use of π|_[n']. We first define an equivalence relation ∼ on [n] by defining two elements to be equivalent if they are either in the same set of χ or in the same set of π|_[n]∖[n']. We then complete this to a full equivalence relation. We now define the partition χ' of [n'] as the set of equivalence classes of ∼, restricted to [n']. We claim that F_n[ρ⊔π|_[n]∖[n'], χ] = F_n'[ρ, χ'] for any ρ, which would immediately imply that F_n[π, χ] = F_n'[π|_[n'], χ'] = ∑_ρ∈ℛ a_ρ F_n'[ρ, χ'] = ∑_ρ∈ℛ a_ρ F_n[ρ⊔π|_[n]∖[n'], χ] which proves the lemma. Suppose that ρ⊔π|_[n]∖[n'] and χ induce some cycle. Since the cycle must pass through [n'], there must be some path from one element of [n'] to another, induced by ρ⊔π|_[n]∖[n'] and χ. Since all elements in this path are equivalent, this path must lie entirely inside of a set S' ∈χ' and thus replacing such a path with S' results in a cycle induced by ρ and χ'. Note that if a cycle only requires sets from π|_[n'], this operation results in a single set S' from χ' in the new cycle. However, since any set involved in the old cycle must contain at least two elements in the path, that set together with S' induces a cycle. 
Similarly, in the reverse direction we take a cycle induced by ρ and χ' and blow up any sets of χ' into a path in the corresponding connected component to find a cycle induced by ρ⊔π|_[n]∖[n'] and χ. The following two lemmas help ensure that our operations do not introduce new crossings. The first of the two lemmas shows us that we can safely contract an interval, so long as it is contained in a set of the partition. Let I ⊆ [n] be an interval of [n]. Let π be a non-crossing partition of [n] -_i I. Then π +_i I is also non-crossing. Suppose that there are C, D ∈π +_i I that are crossing. W.l.o.g. there are c_1, c_2 ∈ C and d_1, d_2 ∈ D such that c_1 < d_1 < c_2 < d_2. Since π is non-crossing, this crossing does not exist in π and thus at least one of these elements is in I. By definition of a blowup, we must have either I ⊆ C or I ⊆ D. Since I is an interval, it then follows that exactly one of the previously mentioned elements is in I. We still find a crossing in π by replacing this element by i. For example, if d_1 ∈ I, we find a crossing c_1 < i < c_2 < d_2 in π. This again contradicts the assumption that π is non-crossing. We conclude that π +_i I is also non-crossing. This next lemma shows us that, in our setting, projection is safe, as long as we do not forget any elements of sets that cross one another. Let π⊢ [n] be a partition such that only A,B ∈π cross each other and all other pairs of sets in π are non-crossing. Then for a non-crossing partition ρ of A ∪ B we have that ρ∪π|_[n] ∖ (A ∪ B) is non-crossing. Suppose there are sets C, D ∈ρ⊔π|_[n] ∖ (A ∪ B) that cross each other. By assumption π|_[n] ∖ (A ∪ B) is non-crossing and thus w.l.o.g. C ∈ρ. Also note that since ρ is non-crossing, this implies that D ∈π|_[n] ∖ (A ∪ B). Let I be the interval spanned by A ∪ B, then since D crosses C ⊆ I, we find that D ∩ I ≠∅ and D is not an interval itself. We claim that this implies that, in π, D crosses either A or B. This would contradict the assumption that the only crossings in π are between A and B, which would then imply the lemma. Note that D ∩ I cannot include either the rightmost or the leftmost element of the interval, since these must be elements of A ∪ B. Therefore if neither A nor B crosses D, we must have that w.l.o.g. A < D ∩ I < B. This is not possible, since A and B must cross at least once. §.§.§ Proof of the rank bound With Lemmas <ref>, <ref>, <ref> and <ref> in hand, we are now ready to describe the main uncrossing operation. Let π be a non-crossing partition on an ordering p. In time O(n) we can find constants c_ρ, such that F_n[π] = ∑_ρ∈𝒩 c_ρ F_n[ρ], where 𝒩 is the set of partitions that are non crossing on p ∘ (i, i+1). Throughout the proof, we will consider the partition π on the ordering p ∘ (i, i+1). We first note that since π is non-crossing on p, any crossing of π must involve both i and i+1. Let i ∈ A ∈π and i+1 ∈ B ∈π. If A = B, then π is non-crossing and thus we may assume that A ≠ B. Note that π|_A ∪ B, when viewed as a partition of A ∪ B, consists of either 4 or 5 intervals which alternate between A and B. Define π' as the partition given by contracting these intervals. We find that π' is a partition on n' elements, where either n'=4 or n'=5 elements, with intervals of size 1 (see Figure <ref>). We can explicitly construct the forest compatibility matrices for n'∈{4,5} and check that the non-crossing partitions give a basis; with this manuscript we provided a MATLAB script that checks this. 
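The small cases can indeed be verified mechanically; the following Python sketch (our own code, independent of the MATLAB script referred to above, with hypothetical helper names) builds F_n for small n from path representatives of the partitions and compares its rank to the n-th Catalan number and to the number of non-crossing partitions:

```python
from itertools import combinations
from math import comb
import numpy as np

def partitions(elements):
    # Enumerate all set partitions of a list (each partition is a list of blocks).
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def compatible(pi, rho, n):
    # Entry of F_n: 1 iff gluing path-representatives of the two partitions stays acyclic.
    parent = list(range(n + 1))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for part in (pi, rho):
        for block in part:
            for u, v in zip(block, block[1:]):
                ru, rv = find(u), find(v)
                if ru == rv:
                    return 0
                parent[ru] = rv
    return 1

def non_crossing(pi):
    # Crossing: some a1 < b < a2 with a1, a2 in one block and another element of b's block outside [a1, a2].
    for A, B in combinations(pi, 2):
        for a1, a2 in combinations(sorted(A), 2):
            if any(a1 < b < a2 for b in B) and any(b < a1 or b > a2 for b in B):
                return False
    return True

for n in range(1, 6):
    parts = [sorted(map(sorted, p)) for p in partitions(list(range(1, n + 1)))]
    F = np.array([[compatible(p, q, n) for q in parts] for p in parts])
    print(n, np.linalg.matrix_rank(F), comb(2 * n, n) // (n + 1), sum(map(non_crossing, parts)))
```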
Thus we can write F_n'[π'] = ∑_ρ∈ℛc_ρF_n'[ρ], where ℛ is the set of non-crossing partitions of [n']. By Lemma <ref> we find that F_A∪ B[π|_A ∪ B] = ∑_ρ∈ℛc_ρF_A∪ B[ρ+_i_1 I_1 + … +_i_n' I_n']. By Lemma <ref> each ρ+_i_1 I_1 + … +_i_n' I_n' is still non-crossing. By Lemma <ref> we find F_n[π] = ∑_ρ∈ℛc_ρF_A∪ B[(ρ +_i_1 I_1 + … +_i_n' I_n') ∪π|_[n] ∖ (A ∪ B)]. By Lemma <ref> each (ρ+_i_1 I_1 + … +_i_n' I_n') ∪π|_[n] ∖ (A ∪ B) is still non-crossing. We conclude that F_n[π] can be written as a linear combination of rows corresponding to non-crossing partitions. Note that we can construct π' in O(n) time. We then find the c_ρ in O(1) time and reconstruct the (ρ+_i_1 I_1 + … +_i_n' I_n') ∪π|_[n] ∖ (A ∪ B) in O(n) time. By repeatedly applying Lemma <ref>, we can prove the following theorem. The rows corresponding to non-crossing partitions span a row basis of the forest compatibility matrix F_n. Let π be a partition of [n] such that we can turn it into a non-crossing partition by swapping two consecutive elements i and i+1 in the order of [n]. By Lemma <ref> we can write the row F_n[π] corresponding to π as a linear combination of rows corresponding to non-crossing partitions of [n]. This shows that, for B_p the set of rows corresponding to non-crossing partitions on p, we have B_p ∘ (i, i+1)⊆(B_p). Since every partition is non-crossing for some permutation and every permutation can be decomposed into 2-cycles on consecutive elements, this implies that every row can be written as a linear combination of rows corresponding to non-crossing partitions on some fixed ordering p. From this we immediately find a proof for Theorem <ref>. By Theorem <ref> the non-crossing partitions form a basis of F_n. Since there are C_n such partitions we find (F_n) ≤ C_n. §.§ Algorithm We will now describe the algorithm for counting forests. We first define the dynamic programming table and the notion of representation. We then handle each type of node in the tree/path decomposition separately and summarize at the end. Let G be a graph and let (𝕋, (B_x)_x ∈ V(D)) be a tree/path decomposition of G. Recall that G_x is defined as the graph induced by the union of all bags, whose nodes are descendants of x in 𝕋. We define the dynamic programming table τ by τ_x[π] := |{ X ⊆ E(G_x): (V,X) is acyclic , ∀ u,v ∈ B_x there is a path in (V, X) from u to v iff ∃ S ∈π s.t. u, v ∈ S}| In other words, the table entry τ_x[π] counts the number of forests in G_x whose connected components agree with π. In the rest of this section, we will refer to the number of nonzero entries τ_x[π] in a 'row' τ_x of the dynamic programming table as the support of τ_x, written (τ_x). Our aim will be to ensure that the support of our rows remains contained in the entries corresponding to non-crossing partitions for some ordering on the bag B_x. This is captured in the following definition. We say a vector a, indexed by partitions, is reduced on an ordering p, if a_π = 0 for any partition π that is crossing for p. In order to ensure that we do not lose any relevant information we will reduce our rows, while retaining the following property for the matrix F_B_x. Given a matrix M, we say that a vector a M-represents a vector b if Ma = Mb. We now describe how the algorithm behaves on the various types of nodes. In each case we repeatedly apply one step of a naive dynamic programming algorithm and then reduce the table if it becomes too big. For ease of notation we will write π∼ρ if the partitions π, ρ⊢ [n] are compatible, i.e. they do not induce a cycle. 
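The node-by-node description below manipulates partitions of the current bag through only two primitives: the compatibility test π∼ρ, which can be implemented exactly like the union-find acyclicity check sketched earlier, and the join π⊔ρ, a naive sketch of which (our own helper, not from the paper) is:

```python
def join(pi, rho):
    # Finest common coarsening of two partitions (possibly over different ground sets):
    # start from all blocks and repeatedly merge any two blocks that share an element.
    blocks = [set(b) for b in pi] + [set(b) for b in rho]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return blocks

print(join([{1, 2}, {3}], [{2, 3}, {4}]))  # -> [{1, 2, 3}, {4}]
```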
§.§.§ Leaf node If x is a leaf node we set τ_x'[∅] := τ_x[∅] = 1. We trivially find that τ_x' is reduced and F_0-represents τ_x. §.§.§ Vertex-introduce node Let x be a vertex-introduce node with a child node y. Suppose that τ_y' is reduced and F_B_y-represents τ_y. We can compute a row τ_x' that is reduced and F_B_x-represents τ_x, in time O((F_B_x)). If x is a vertex-introduce node, introducing v. We set τ_x'[π∪{{v}}] := τ_y'[π] and τ_x'[π] := 0 for any π in which v does not appear as a singleton. Clearly for any non-crossing partition π, we have that π∪{v} is still non-crossing and thus τ_x' is reduced. Note that by definition, we need to show that F_B_xτ_x' = F_B_xτ_x. In the following derivation we show that this equality holds at the entry corresponding to any arbitrary partition ρ⊢ B_x. ∑_π∼ρτ_x[π] = ∑_π∼ρ {v}∈πτ_y[π∖{{v}}] = ∑_π' ∼ρ|_B_yτ_y[π'] = ∑_π' ∼ρ|_B_yτ_y'[π'] = ∑_π∼ρ {v}∈πτ_y'[π∖{{v}}] = ∑_π∼ρτ_x'[π] §.§.§ Vertex-forget node Let x be a vertex-forget node with a child node y. Suppose that τ_y' is reduced and F_B_y-represents τ_y. We can compute a row τ_x' that is reduced and F_B_x-represents τ_x, in time O((F_B_x)). Let x be a vertex-forget node, forgetting v. We set τ_x'[π] := ∑_π'|_B_x = πτ_y'[π'] Clearly for any non-crossing partition π', we have that π'|_B_x is still non-crossing and thus τ_x' is reduced. Again we now show that F_B_xτ_x' = F_B_xτ_x, by focussing on the entry of the vector at coordinate ρ. ∑_π∼ρτ_x[π] = ∑_π∼ρ∑_π'|_B_x = πτ_y[π'] Note that π' projects down to a partition that is compatible with ρ if and only if π' ∼ (ρ∪{{v}}). We therefore find that = ∑_π' ∼ (ρ∪{{v}})τ_y[π'] = ∑_π' ∼ (ρ∪{{v}})τ_y'[π'] = ∑_π∼ρ∑_π'|_B_x = πτ_y'[π'] = ∑_π∼ρτ_x'[π] §.§.§ Edge-introduce node Let x be an edge-introduce node with a child node y. Suppose that τ_y' is reduced and F_B_y-represents τ_y. We can compute a row τ_x' that is reduced and F_B_x-represents τ_x, in time O((F_B_x)|B_x|^2). Before we prove this lemma, we introduce the following technical lemma. This lemma will be useful to show that representation is preserved after applying the dynamic programming step. Let π, χ, ρ⊢ [n] be partitions such that π∼χ and ρ∼χ. We have that π⊔χ∼ρ if and only if π∼ρ⊔χ. Recall the definition of a representative forest <ref>. Let G_π, G_χ and G_ρ be representative forests of π, χ and ρ respectively. Suppose that π⊔χ∼χ. Since π∼χ, G_π⊕ G_χ is a forest. Moreover it is a representative forest of π⊔χ. By the same reasoning we find that G_ρ⊕ G_χ is a representative forest of ρ⊔χ. Since π⊔χ∼ρ, we find that (G_π⊕ G_χ) ⊕ G_ρ = G_π⊕ (G_χ⊕ G_ρ) is a forest and thus π∼ρ⊔χ. The reverse direction follows from a similar argument. Let x be an edge-introduce node for edge uv. It is not hard to see that if u and v are adjacent in the vertex ordering of B_x, then π⊔π_uv is non-crossing if and only if π is non-crossing. We will aim find a F_B_y-representative τ_y” of τ_y', that is reduced on an ordering p' in which u and v are adjacent. By applying Lemma <ref> to each entry of τ_y' we can find a a F_B_y-representative of τ_y', that is reduced on p ∘ (i, i+1), that is we can swap two consecutive elements. Using at most |B_y| of these swaps we can ensure that u and v are adjacent. Each such swap costs |B_y| time per non-zero entry of the current vector. Since any reduced vector has at most (F_B_y) non-zero entries, we find a runtime of O((F_B_y)|B_y|^2). We can now compute the desired τ_x'. 
We first define π_uv := {{w} : w ∈ B_y ∖{u,v}}∪{{u, v}} and set τ_x'[π] := τ_y”[π] + ∑_π' ⊔π_uv = π F_n[π', π_uv] τ_y”[π'] which is still reduced on p', since u and v are adjacent. Finally we again show that F_B_xτ_x' = F_B_xτ_x. ∑_π∼ρτ_y[π] = ∑_π∼ρ( τ_x[π] + ∑_π' ⊔π_uv = π F_n[π', π_uv] τ_x[π'] ) = ∑_π∼ρ( τ_x[π] ) + ∑_π∼ρ( ∑_π' ⊔π_uv = π F_n[π', π_uv] τ_x[π'] ) Since τ_x'F_n-represents τ_x we find = ∑_π∼ρ( τ_x'[π] ) + ∑_π∼ρ( ∑_π' ⊔π_uv = π F_n[π', π_uv] τ_x[π'] ) = ∑_π∼ρ( τ_x'[π] ) + ∑_π' ⊔π_uv∼ρ F_n[π', π_uv] τ_x[π'] If ρ∼π_uv, by Lemma <ref> we find = ∑_π∼ρ( τ_x'[π] ) + ∑_π' ∼ρ⊔π_uv( τ_x[π'] ) = ∑_π∼ρ( τ_x'[π] ) + ∑_π' ∼ρ⊔π_uv( τ_x'[π'] ) = ∑_π∼ρ( τ_x'[π] ) + ∑_π∼ρ( ∑_π' ⊔π_uv = π F_n[π', π_uv] τ_x'[π'] ) = ∑_π∼ρ( τ_x'[π] + ∑_π' ⊔π_uv = π F_n[π', π_uv] τ_x'[π'] ) = ∑_π∼ρτ_y'[π] Otherwise we find π' ⊔π_uv≁ρ for any π' and thus∑_π∼ρτ_y[π] = ∑_π∼ρ( τ_x'[π] ) = ∑_π∼ρ( τ_x'[π] ) + ∑_π∼ρ( ∑_π' ⊔π_uv = π F_n[π', π_uv] τ_x'[π'] ) = ∑_π∼ρτ_y'[π] §.§.§ Join node Let x be a join node with child nodes y_1 and y_2. Suppose that τ_y_i' is reduced and F_B_y_i-represents τ_y_i for i = 1, 2. We can compute a row τ_x' that is reduced and F_B_x-represents τ_x, in time O((F_B_x)^3|B_x|^3). We begin by setting τ_x”[π] := ∑_π_1 ⊔π_2 = π F_n[π_1, π_2] τ_y_1'[π_1] τ_y_2'[π_2] We will first prove that τ_x” F_B_x-represents τ_x and then reduce it afterwards. ∑_π∼ρτ_x[π] = ∑_π∼ρ∑_π_1 ⊔π_2 = π F_n[π_1, π_2] τ_y_1[π_1] τ_y_2[π_2] By changing the order in which we pick π, π_1 and π_2 we can rewrite this expression as = ∑_π∼ρ∑_π_1 ≤πτ_y_1[π_1] ∑_π_1 ⊔π_2 = π F_n[π_1, π_2] τ_y_2[π_2] = ∑_π_1τ_y_1[π_1] ∑_π∼ρ π_1 ≤π∑_π_1 ⊔π_2 = π F_n[π_1, π_2] τ_y_2[π_2] We can now merge the two inner sums into one, which results in = ∑_π_1τ_y_1[π_1] ∑_π_1 ⊔π_2 ∼ρ F_n[π_1, π_2] τ_y_2[π_2] If ρ∼π_1, by Lemma <ref> we find∑_π_1 ⊔π_2 ∼ρ F_n[π_1, π_2] τ_y_2[π_2] = ∑_π_2 ∼ρ⊔π_1τ_y_2[π_2] = ∑_π_2 ∼ρ⊔π_1τ_y_2”[π_2] = ∑_π_1 ⊔π_2 ∼ρ F_n[π_1, π_2] τ_y_2”[π_2] Otherwise we find∑_π_1 ⊔π_2 ∼ρ F_n[π_1, π_2] τ_y_2[π_2] = 0 = ∑_π_1 ⊔π_2 ∼ρ F_n[π_1, π_2] τ_y_2”[π_2] Either way we find∑_π∼ρτ_x[π] = ∑_π_1τ_y_1[π_1] ∑_π_1 ⊔π_2 ∼ρ F_n[π_1, π_2] τ_y_2”[π_2] By applying the same operations as before, but in reverse, we find = ∑_π_1τ_y_1[π_1] ∑_π∼ρ π_1 ≤π∑_π_1 ⊔π_2 = π F_n[π_1, π_2] τ_y_2”[π_2] = ∑_π∼ρ∑_π_1 ≤πτ_y_1[π_1] ∑_π_1 ⊔π_2 = π F_n[π_1, π_2] τ_y_2”[π_2] = ∑_π∼ρ∑_π_1 ⊔π_2 = π F_n[π_1, π_2] τ_y_1[π_1] τ_y_2”[π_2] By a symmetric argument as given so far, we find = ∑_π∼ρ∑_π_1 ⊔π_2 = π F_n[π_1, π_2] τ_y_1”[π_1] τ_y_2”[π_2] = ∑_π∼ρτ_x”[π] We now describe how we reduce τ_x” to find τ_x'. For each partition π such that τ_x”[π] ≠ 0, we first determine an ordering p' for which π is non-crossing. Note that we can transform p' into p by performing at most |B_x|^2 swaps, where we swap the order of two consecutive elements. Again by applying Lemma <ref> to each entry of τ_x” we can find a a F_B_x-representative of e_π·τ_x”[π], that is reduced on p' ∘ (i, i+1). We perform at most O(|B_x|^2) such swaps each costing at most O((F_B_x)|B_x|), since the support of the vector cannot exceed (F_B_x). After we have done this for every such π, we sum the resulting vectors to find an F_B_x-representative τ_x' of τ_x”. Finding the vectors takes O((F_B_x)|B_x|^3) per non-zero entry of τ_x” and thus takes O((τ_x”)(F_B_x)|B_x|^3) time in total. Summing all the vectors takes at most O((F_B_x)(τ_x')) time. Since we assumed τ_y_1' and τ_y_2' to be reduced, we find that (τ_x”) ≤(F_B_x)^2 and thus the algorithm runs in time O((F_B_x)^3|B_x|^3). 
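Before turning to the algorithmic consequences, note that the dynamic program is easy to validate on small instances against a brute-force count of forests; a possible reference implementation (again our own sketch, not part of the algorithm) is:

```python
from itertools import combinations

def count_forests(n, edges):
    # Count acyclic edge subsets of ([0..n-1], edges); this equals T(G; 2, 1).
    def acyclic(subset):
        parent = list(range(n))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        for u, v in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
        return True
    return sum(
        acyclic([edges[i] for i in A])
        for r in range(len(edges) + 1)
        for A in combinations(range(len(edges)), r)
    )

print(count_forests(3, [(0, 1), (1, 2), (0, 2)]))  # triangle: 7 forests
```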
§.§.§ Algorithmic results The previous lemmas together give the following algorithms. There exists an algorithm that, given a graph G with a path decomposition of width pw(G), computes the number of forests in the graph in time 4^pw(G) n^O(1). W.l.o.g. we assume we are given a nice path decomposition, where the first and last nodes correspond to empty bags. As mentioned in the section on leaf nodes, we can directly compute a representative solution on the first node. By applying Lemmas <ref>, <ref> and <ref>, we can compute representative solutions for all nodes. The row corresponding to the last node will contain a single entry, which gives the number of forests in the graph. By Lemmas <ref>, <ref> and <ref>, each step in the dynamic program takes at most O(rank(F_pw(G)) pw(G)^2) = O(4^pw(G) pw(G)^1/2) time. Since there are O(n^2) steps we find a total running time of O(4^pw(G) pw(G)^1/2 n^2) = 4^pw(G) n^O(1). There exists an algorithm that, given a graph G with a tree decomposition of width tw(G), computes the number of forests in the graph in time 64^tw(G) n^O(1). W.l.o.g. we assume we are given a nice tree decomposition, where the leaf nodes correspond to empty bags. We will root this decomposition in one of the leaf nodes. As mentioned in the section on leaf nodes, we can directly compute a representative solution on the first node. By applying Lemmas <ref>, <ref>, <ref> and <ref>, we can compute representative solutions for all nodes. The row corresponding to the root node will contain a single entry, which gives the number of forests in the graph. By Lemmas <ref>, <ref>, <ref> and <ref>, each step in the dynamic program takes at most O(rank(F_tw(G))^3 tw(G)^3) = O(64^tw(G) tw(G)^-3/2) time. Since there are O(n^4) steps we find a total running time of O(64^tw(G) tw(G)^-3/2 n^4) = 64^tw(G) n^O(1). § OTHER CASES In this section we handle the remaining cases mentioned in Theorem <ref>. §.§ The curve H_2 The curve H_2 is equivalent to the partition function of the Ising model. Both our proofs for the upper and lower bound on the complexity will make use of this fact. Computing the Tutte polynomial along the curve H_2 cannot be done in time (2-ϵ)^ctw(G) n^O(1), unless SETH fails. The Tutte polynomial on this curve specializes to the partition function of the Ising model on G = (V,E) <cit.>. Computing this function in its entirety is equivalent to computing the generating function C_G(z) = ∑ _k=0^∞ c_kz^k of the closed subgraphs in G <cit.>. Here c_k gives the number of closed subgraphs with k edges, i.e. the number of edgesets A ⊆ E such that every vertex has even degree in (V, A) and |A| = k. Computing all coefficients of C_G is clearly not easier than computing the number of closed subgraphs of maximum cardinality. We finally show that computing the number of perfect matchings reduces to computing the number of maximum closed subgraphs, using a reduction from <cit.> which can be slightly altered to only increase the cutwidth by an additive factor of 2. There is a lower bound of 2^ctw(G) for counting perfect matchings, due to <cit.>, which finishes the proof. It remains to show that we can reduce #PerfectMatchings to #MaximumClosedSubgraphs, while increasing the cutwidth by at most 2. First note that if every vertex in G has odd degree, then F is a perfect matching if and only if E ∖ F is a maximum closed subgraph, and thus the number of perfect matchings of G is equal to the number of maximum closed subgraphs. We will now construct a graph G' that has the same number of perfect matchings as G, but has only vertices with odd degree.
Using the above remark we then find a reduction from #PerfectMatchings to #MaximumClosedSubgraphs. Also note that we can determine whether a graph has at least one perfect matching in polynomial time and thus we may assume that G has at least one perfect matching. Let v_1, v_2, …, v_n be a cut decomposition of G of width (G). Now let v_1^e, v_2^e, …, v_l^e be the vertices with even degree, in order of appearance in the cut decomposition. Since G has at least one perfect matching we find that n is even. Since the number of odd degree vertices in a graph is always even we also find that l is even. We now connect v_2i-1^e to v_2i^e using a 3-star, see figure <ref>, and call the resulting graph G'. Note that every vertex in G' has odd degree and that in a perfect matching the 'dangling' vertex d_i of a 3-star has to be matched to the center c_i of the 3-star. We find that the 3-stars have no effect on the number of perfect matchings of the graph. We find a new cut decomposition by simply inserting the vertices c_i and d_i directly after v_2i-1^e in the cut decomposition. We now prove the following matching upper bound. Let G be a graph with a given tree decomposition of width (G). There exists an algorithm that computes T(G; a, b), for (a,b) ∈ H_2, in time 2^(G)n^O(1). As mentioned in the proof of Theorem <ref>, computing the Tutte polynomial along the curve is equivalent to computing the partition function of the Ising model <cit.>. Computing this function in its entirety is equivalent to computing the generating function C_G(z) = ∑ _k=0^∞ c_kz^k, where c_k gives the the number of closed subgraphs with k edges <cit.>. We can compute the coefficients of this polynomial by computing the following dynamic programming table. Let S[x, p, k] be the number of edgesets A of size |A| = k in the graph below bag B_x such that deg_G[A](v) ≡_2 p(v). Note that the number of entries in the table is 2^(G) n^O(1), since we only need to consider p ∈{0,1}^(G) and k ∈ [n^2]. We may compute new entries as follows where ∅ denotes the empty vector. If B_x is a leaf bag then S[x, ∅, 0] = 1 and S[x, ∅, k] = 0 otherwise. If x is a vertex-forget node for vertex v, then S[x, p, k] = S[y, p', k], where p' is the vector given by p'(u) = p(u) for u ≠ v and p'(v) = 0 and y is the child node of x. If x is a vertex-introduce node for vertex v, then S[x, p, k] = S[y, p_B_y, k] if p(v) = 0 0 otherwise, where y is the child node of x. If x is an edge-introduce node for edge e, then S[x, p, k] = S[y, p, k] + S[y, p +_2 1_e, k + 1], where 1_e(u) = 1 if and only if u is one of the endpoints of e and where we use +_2 to indicate addition in ℤ_2^ℓ for some ℓ. If x is a join node, with children y_1 and y_2, we use fast subset convolution as described in <cit.> to compute S[x, p, k] = ∑_i = 0^k ∑_p_1 +_2 p_2 = p S[y_1, p_1, i]S[y_2, p_2, k-i]. §.§ The curve H_0^x The curve H_0^x contains the point (1,2), which counts the number of connected edgesets of a connected graph. Using existing results this gives an ETH lower bound which matches the running time of the general algorithm described in Theorem <ref>. Let 0 < α < 1. Computing the Tutte polynomial along the curve H_0^x cannot be done in time (α(G)-ϵ)^(1-α)(G)/2 n^O(1), unless SETH fails. In <cit.> a lower bound of p^(G) is found for counting connected edgesets modulo p. In the reduction the authors reduce to counting essentially distinct q-coloring modulo p, with cutwidth (G) + q^2 and p = q. Thus we find a lower bound of p^(G) - p^2 = (α(G))^(1-α)(G)/2 for p = (α(G))^1/2. 
By Theorem <ref> points on this curve can be computed in time (G)^O((G)) n^O(1). §.§ The curve H_q for q ∈ℤ_≥ 3 These curves contain the points (1-q, 0), which count the number of q-colorings. Using previous results and a folklore algorithm, we find matching upper and lower bounds for these points and thus for the whole curves. Let q ∈ℤ_≥ 3. Computing the Tutte polynomial along the curve H_q cannot be done in time (q-ϵ)^(G) n^O(1), unless SETH fails. Note that H_q contains the point (1-q, 0). Computing the Tutte polynomial on this point is equivalent to counting the number of q-colorings of the graph G. By choosing a modulus p > q we can apply the results from <cit.> to find a lower bound of q^(G) on the time complexity of counting q-colorings modulo p. This lower bound clearly extends to general counting. Let G be a graph with a given tree decomposition of width (G) and q ∈ℤ_≥ 3. There exists an algorithm that computes T(G; a, b) for (a,b) ∈ H_q in time q^(G) n^O(1). This theorem is a direct consequence of combining Theorem <ref> with the following folklore result: Let G be a graph with a given tree decomposition of width (G) and q ∈ℤ_≥ 3. There exists an algorithm that computes the number of q-colorings of G in time q^(G) n^O(1). §.§ The curve H_-q for q ∈ℤ_> 0 These curves contain the points (1 + q, 0). Using the same results we used to prove theorem <ref> and exploiting the fact these results hold for modular counting, we find an ETH lower bound which matches the running time of the general algorithm described in theorem <ref>. Let q ∈ℤ_>0. Computing the Tutte polynomial along the curve H_-q cannot be done in (G)^o((G)) time, unless ETH fails. Like mentioned earlier H_-q contains the point (1+q, 0). For a prime p > q we have that T(G; 1+q, 0) ≡_p T(G; 1+q - p, 0). This means that computing the Tutte polynomial modulo p at the point (1+q, 0) is equivalent to counting the number of p - q-colorings of G modulo p. Since q > 0 and p > q we find that 0 < p - q < p and thus as before, by <cit.>, we find a lower bound of (p-q)^(G). Since the cutwidth of the construction in <cit.> is O(n + rp^r+2) for some r dependant on p-q and ϵ. We find that there is no algorithm running in time O((p - q - ϵ)^(G) - rp^r+2) = O((α(G) - ϵ)^(G) (1-α)/(r+2)), where p-q = (α(G))^1/(r+2). By Theorem <ref> points on this curve can be computed in time (G)^O((G)) n^O(1). § A GENERAL ALGORITHM In this section we show how we can exploit bounded treewidth to compute the Tutte polynomial at any point in the plane, in FPT-time. For this we use a standard dynamic programming approach. A linear (in the input size) time FPT-algorithm has previously been given by Noble <cit.>. This algorithm is double exponential in the treewidth, where the algorithm we give here has a running time of the form 2^O((G) log((G)) n^O(1). We consider this an improvement for our purposes, since we are mainly interested in the dependence on the treewidth. There is an algorithm that, given a graph G and a point (a,b), computes T(G;a,b) in time (G)^O((G)) n^O(1). Note that, in order to compute the Tutte polynomial, we only need to know the number c_i,j of edgesets with i components and j edges, for i,j = 1, …, n. We can then compute T(G;a,b) = ∑_i,j=1^n c_i,j(a-1)^i-k(E) (b-1)^i + j - |V|, in polynomial time. We will now focus on computing the values of c_i,j. Let (R, (B_x)_x ∈ V(R)) be a rooted, nice tree decomposition with root r. We define C_x(π,i,j) as the number of edgesets of the graph covered by the subtree rooted[I.e. 
all vertices that are in some bag y, such that any path in R from y to r must pass through x.] at bag B_x, with i components j edges and whose connected components give the partition π on B_x. At the leaves of the decomposition we have B_x = and thus C_x(π,i,j) = 1, if π = ∅, i = j = 0, 0, otherwise. If x is an introduce node for vertex v with child y. For N_π(v) the set of vertices in the same set of π, we have C_x(π,i,j) = ∑_∅≠ A ⊆ N_π(v) C_y(π - v,i - |A|,j), if N_π(v) ≠∅, C_y(π - v,i,j - 1), otherwise. If x is a forget node for vertex v with child y, we have C_x(π,i,j) = ∑_π' ∈ 2^B_y, π'|_B_x = π C_y(π',i,j) If x is a join node with children y and z, we have C_x(π,i,j) = ∑_k = 0^i ∑_l = 0^j ∑_π' ⊔π” = π C_y(π,k,l)C_z(π,i-k,j-l), where π' ⊔π” = π indicates that merging any overlapping sets in π' and π” results in π. § CONCLUSION In this paper we gave a classification of the complexity, parameterized by treewidth/pathwidth/cutwidth, of evaluating the Tutte polynomial at integer points into either computable * in polynomial time, * in ^O()n^O(1) time but not in ^o()n^O(1) time, * in q^n^O(1) time but not in 2^o() (and for many points not even in r^n^O(1) time for some constants q > r), assuming the (Strong) Exponential Time Hypothesis. This classification turned out to be somewhat surprising, especially considering the asymmetry between H_0^x = {(x,y) : x = 1 } and H_0^y = {(x,y) : y = 1 } that does not show up in other classifications such as the ones from <cit.>. Our paper leaves ample opportunities for further research. First, we believe that our rank upper bound should have more applications for counting forests with different properties. For example, it seems plausible that it can be used to count all Feedback Vertex Sets in time 2^O()n^O(1) or the number of spanning trees with k components in time 2^O()n^O(1). The latter result would improve over a result by Peng and Fei Wan <cit.> that show how to count the number of spanning forests with k components (or equivalently, n-k-1 edges) in ^O()n^O(1) time. We decided to not initiate this study in this paper to retain the focus on the Tutte polynomial. Second, it would be interesting to see if our classification of the complexity of all points on ℤ^2 can be extended to a classification of the complexity of all points on ℝ^2 (or even ℂ^2). Typically, evaluation at non-integer points can be reduced to integers points (leading to hardness for non-integer points), but we were not able to establish such a reduction without considerably increasing the width parameters. Third, it would be interesting to see if a similar classification can be made when parameterized by the vertex cover number instead of treewidth/pathwidth/cutwidth. We already know that the runtime of 2^nn^O(1) by Björklund et al. <cit.> for evaluating the Tutte polynomial cannot be strengthened to a general 2^O(k)n^O(1) time algorithm where k is the minimum vertex cover size of the input graph due to a result by Jaffke and Jansen <cit.>, but this still leaves the complexity of evaluating at many other points open.
http://arxiv.org/abs/2307.00843v1
20230703083644
Blow-up vs. global existence for a Fujita-type Heat exchanger system
[ "Samuel Tréton" ]
math.AP
[ "math.AP" ]
BehaveFormer: A Framework with Spatio-Temporal Dual Attention Transformers for IMU enhanced Keystroke Dynamics [ August 1, 2023 ============================================================================================================== We analyze a reaction-diffusion system on ℝ^N which models the dispersal of individuals between two exchanging environments for its diffusive component and incorporates a Fujita-type growth for its reactive component. The originality of this model lies in the coupling of the equations through diffusion, which, to the best of our knowledge, has not been studied in Fujita-type problems. We first consider the underlying diffusive problem, demonstrating that the solutions converge exponentially fast towards those of two uncoupled equations, featuring a dispersive operator that is somehow a combination of Laplacians. By subsequently adding Fujita-type reaction terms to recover the entire system, we identify the critical exponent that separates systematic blow-up from possible global existence. Key words: reaction-diffusion system, Fujita blow-up phenomena, critical exponent, global solutions, Heat exchanger system. MSC2020: 35K40PDEs/Parabolic/Second order parabolic systems, 35B44PDEs/Qualitative properties/Blow-up, 35B33PDEs/Qualitative properties/Critical exponent, 92D25Biology and other natural sciences/Population dynamics (general). § INTRODUCTION In this work, we consider the semi-linear Cauchy problem {[ ∂_tu = cΔ u - μ u + ν v + u^1+, t>0, x∈ℝ^N,; ∂_tv = dΔ v + μ u - ν v + κ v^1+, t>0, x∈ℝ^N,; u,v|_t=0 = u_0,v_0, x∈ℝ^N,; ]. in any dimension N≥ 1. Here the parameters c,d,μ,ν,, are positive constants, κ = 0 or κ = 1, and the data u_0 and v_0 are taken non-both-trivial and non-negative in L^1ℝ^N∩ L^∞ℝ^N. This system is motivated by biological issues, particularly the field-road reaction-diffusion system — see SYS_field_road_diff {[ ∂_t=̌ dΔ,̌; -d∂_y|̌_y=0 = μ-̆ν|̌_y=0,; ∂_t=̆ DΔ-̆μ+̆ν|̌_y=0. ]. — for which no Fujita-type result has been established yet. It is well-known that SYS_heat_exchanger {[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]. owns a unique maximal solution u,v∈C^10T,L^1ℝ^N∩ L^∞ℝ^N^2, such that u, v converges to u_0, v_0 in L^1ℝ^N^2 as t→ 0. Furthermore, by parabolic regularity, u,v also belongs to C^∞0,T×ℝ^N^2. Importantly, the cooperative nature of the system ensures the validity of the comparison principle. These results are classical and can be found in WeisslerLocal80, FriedmanPartial64 and FifeComparison81 for instance. The main objective of this paper is to determine whether T is finite or not. If T=∞ we refer to u,v as a global solution, while T<∞ enforces lim_t → T^-utL^∞ℝ^N + vtL^∞ℝ^N = ∞, and we say that u,v blows-up in finite time. Back in 1966, Fujita conducted the pioneering work FujitaBlowing66 on the initial value problem {[ ∂_tu = Δ u + u^1+, t>0, x∈ℝ^N,; u|_t=0 = u_0, x∈ℝ^N. ]. He showed that for non-negative and non-trivial datum u_0, there is a critical exponent _F : = 2/N (later referred to as the Fujita exponent) that separates systematic blow-up of the solutions from possible global existence. More precisely, if 0<<_F, any solution to EQ_Fujita {[ ∂_tu = Δ u + u^1+,; u|_t=0 = u_0. ]. blows-up in finite time. Conversely, if >_F, problem EQ_Fujita {[ ∂_tu = Δ u + u^1+,; u|_t=0 = u_0. ]. admits global solutions as long as u_0 is chosen sufficiently small. Additionally, the global solutions provided by this case asymptotically vanish in L^∞ℝ^N. 
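Fujita's dichotomy just stated comes from a competition between reaction-driven growth and diffusive decay, detailed in the next paragraph. The short sympy sketch below checks the two relevant facts: the underlying ODE U' = U^(1+p) is solved by U = (p(T-t))^(-1/p), which blows up at algebraic rate 1/p, while the sup-norm of the heat kernel decays at algebraic rate N/2. The exponent symbol of the original text did not survive extraction and is written here as p; the sketch is ours, for illustration only.

```python
import sympy as sp

s, p, N, t = sp.symbols('s p N t', positive=True)  # s = T - t is the time to blow-up

# Solution of the underlying ODE U' = U**(1 + p): U = (p*s)**(-1/p) with s = T - t.
U = (p * s) ** (-1 / p)
dU_dt = -sp.diff(U, s)                     # chain rule: d/dt = -d/ds
print(sp.simplify(dU_dt - U ** (1 + p)))   # -> 0, so U blows up at rate 1/p as s -> 0

# Heat-kernel sup-norm (attained at x = 0) decays algebraically with rate N/2.
sup_G = (4 * sp.pi * t) ** (-N / 2)
print(sp.simplify(sup_G * t ** (N / 2)))   # -> (4*pi)**(-N/2), a constant
```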
The critical case =_F has subsequently been investigated in various works, including HayakawaNonexistence73, KobayashiGrowing77, AronsonMultidimensional78, WeisslerExistence81, and falls within the systematic blow-up regime. The regime transition that occurs at 1/=N/2 can intuitively be explained by observing that the equation in EQ_Fujita {[ ∂_tu = Δ u + u^1+,; u|_t=0 = u_0. ]. is a combination of the growth phenomenon resulting from the non-linearity and of the dispersive effect arising from the Laplacian. The growth of the solutions to EQ_Fujita {[ ∂_tu = Δ u + u^1+,; u|_t=0 = u_0. ]. is related to the underlying ODE U'=U^1+ whose solutions have the form Ut = C/T-t^1/ and blow-up with algebraic blow-up rate 1/. On the other hand, solutions to the Heat equation ∂_t=̆Δ$̆ vanish for integrable initial datum, and the sharp uniform controltL^∞ℝ^N≤C/t^N/2indicates that this vanishing occurs with algebraic decay rateN/2. By formally combining the growth and diffusive actions into EQ_Fujita{[ ∂_tu = Δ u + u^1+,; u|_t=0 = u_0. ]., Fujita's result states that global solutions can only exist if the diffusion decay rate exceeds the growth blow-up rate, which isN/2 > 1/. Otherwise the dispersive effect of the diffusion is never strong enough to prevent the blowing-up driven by the reaction term. Since the critical case was classified, many works have shown significant interest in EQ_Fujita{[ ∂_tu = Δ u + u^1+,; u|_t=0 = u_0. ]., either by deepening the established results, or by exploring variations of the problem or additional effects. A wide range of references can be found in the survey articles by Levine LevineRole90 and Deng and Levine DengRole00, as well as in the books by Hu HuBlowup11 and Quittner and Souplet QuittnerSuperlinear19. The first variation of EQ_Fujita{[ ∂_tu = Δ u + u^1+,; u|_t=0 = u_0. ]. that takes the form of a system appeared in1991in the work of Escobedo and Herrero EscobedoBoundedness91. In their study, they considered {[ ∂_tu = Δ u + v^1+, t>0, x∈ℝ^N,; ∂_tv = Δ v + u^1+, t>0, x∈ℝ^N.; ]. Similarly to the Fujita's problem EQ_Fujita{[ ∂_tu = Δ u + u^1+,; u|_t=0 = u_0. ]., the system SYS_Escobedo{[ ∂_tu = Δ u + v^1+,; ∂_tv = Δ v + u^1+. ]. may be interpreted as the result of the interplay between a growth phenomenon described by the coupled ODE system {[ U' = V^1+, t>0,; V' = U^1+, t>0,; ]. and an uncoupled diffusion process governed by[ ∂_t=̆Δ,̆ t>0, x∈ℝ^N,; ∂_t=̌Δ,̌ t>0, x∈ℝ^N.; ]The results presented in EscobedoBoundedness91 highlight that the transition from systematic blow-up from possible global existence is once again determined by the balance between the growth blow-up rate and the diffusion decay rate. Specifically, for non-negative and non-both-trivial initial valuesU_0, V_0, both components of the solutionU,Vto SYS_Escobedo_growth{[ U' = V^1+,; V' = U^1+. ]. blows-up in finite time with the blow-up ratesa : = 2+/++ for U, and b : = 2+/++ for V.Consequently, for system SYS_Escobedo{[ ∂_tu = Δ u + v^1+,; ∂_tv = Δ v + u^1+. ]. to have global solutions,N/2must be larger than bothaandb, while systematic blow-up occurs as soon asaorbexceedsN/2. The work of Escobedo and Herrero have been extended in several ways. First, the introduction of different diffusion coefficients has been investigated in FilaFujitatype94 by Fila, Levine and Uda. Although they reached the same conclusions as in the case of identical diffusions, it should be noted that considering different diffusion rates significantly increases the complexity of the problem. 
This draws attention to one of the key challenges involved in working with system SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]. which is discussed in more details in Section <ref>. More recently, in YangFujitatype15, problem SYS_Escobedo{[ ∂_tu = Δ u + v^1+,; ∂_tv = Δ v + u^1+. ]. has also been studied by taking some compactly supported non-local diffusion operators instead of the Laplacians. As for the non-linear effects, the case of time-weighted reactionsftv^1+,gtu^1+was tackled in UdaCritical95, CaoCritical14, CastilloCritical15, and that of space-weighted reactionsx^σ_1v^1+,x^σ_2u^1+was discussed in MochizukiExistence98. An expansion of SYS_Escobedo{[ ∂_tu = Δ u + v^1+,; ∂_tv = Δ v + u^1+. ]. with a chain coupling of more than two unknowns was also handled in RenclawowiczGlobal00. Various other types of systems have been developed based on Fujita's problem EQ_Fujita{[ ∂_tu = Δ u + u^1+,; u|_t=0 = u_0. ].. We mention for instance systems for which the reaction terms take the formu^αv^β,u^γv^δstudied in QiGlobal94, EscobedoCritical95, LuGlobal95, SnoussiGlobal02, DicksteinLife07 oru^α+v^β,u^γ+v^δconsidered in CuiGlobal98, SoupletOptimal04, CastilloCritical15. We would like to stress that in all the semi-linear systems mentioned above, the interaction between the unknowns occurs through the non-linear growth terms. However, this is not the case for problem SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]. investigated in this paper. Indeed, as for EQ_Fujita{[ ∂_tu = Δ u + u^1+,; u|_t=0 = u_0. ]. and SYS_Escobedo{[ ∂_tu = Δ u + v^1+,; ∂_tv = Δ v + u^1+. ]., we can extract from SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]. a diffusive component: {[ ∂_t=̆ cΔ-̆μ+̆ν,̌ t>0, x∈ℝ^N,; ∂_t=̌ dΔ+̌μ-̆ν,̌ t>0, x∈ℝ^N,; ]. that we call Heat exchanger and which contains all the coupling parts of SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ].. We interpret system SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ]. as a population dynamics model for a single species dispersing on two parallel environments and switching from one to the other. Observe that SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ]. preserves the mass of its initial datum over time, that is t + tL^1ℝ^N≡0 + 0L^1ℝ^N, for all t>0. A detailed description of solutions to SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ]. is given in Theorem <ref>, Corollary <ref> and their surrounding comments. These results are necessary preliminaries to address the systematic blow-up versus possible global existence for the problem SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ].. Such a model which exchanges individuals between two domains is not new in the literature. In particular, the linear Heat exchanger system SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ]. is influenced by the field-road diffusive model {[ ∂_t=̌ dΔ,̌ t>0, x∈ℝ^N-1, y>0,; -d∂_y|̌_y=0 = μ-̆ν|̌_y=0, t>0, x∈ℝ^N-1,; ∂_t=̆ DΔ-̆μ+̆ν|̌_y=0, t>0, x∈ℝ^N-1,; ]. where,̌=t,x,y,t,x, introduced in2013by Berestycki, Roquejoffre and Rossi in BerestyckiInfluence13 and whom both fundamental solutions and asymptotic decay rate were recently studied in AlfaroFieldroad23. 
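The mass conservation property of the Heat exchanger system mentioned above can be observed on a crude one-dimensional finite-difference discretization: the exchange terms ±(μu - νv) cancel in the sum u+v, and a periodic discrete Laplacian does not change the total mass. The sketch below is only an illustration; the rates, grid, and data are arbitrary choices, not values from the paper.

```python
import numpy as np

# 1-D heat exchanger on a periodic grid: du/dt = c*u_xx - mu*u + nu*v,
#                                        dv/dt = d*v_xx + mu*u - nu*v.
c, d, mu, nu = 1.0, 0.2, 0.5, 0.3          # illustrative rates (not from the paper)
L, n, dt, steps = 20.0, 400, 1e-3, 5000
dx = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)

u = np.exp(-x ** 2)                         # some integrable, non-negative data
v = 0.5 * np.exp(-(x - 2.0) ** 2)

def lap(w):                                 # periodic second difference
    return (np.roll(w, 1) - 2 * w + np.roll(w, -1)) / dx ** 2

mass0 = (u + v).sum() * dx
for _ in range(steps):
    u, v = (u + dt * (c * lap(u) - mu * u + nu * v),
            v + dt * (d * lap(v) + mu * u - nu * v))
print(abs((u + v).sum() * dx - mass0))      # ~ 0 up to round-off: total mass is conserved
```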
Traditionally, the field-road diffusive model is supplemented with a reaction termfin the field which is represented by the first line of SYS_field_road_diff{[ ∂_t=̌ dΔ,̌; -d∂_y|̌_y=0 = μ-̆ν|̌_y=0,; ∂_t=̆ DΔ-̆μ+̆ν|̌_y=0. ].. As far as we know, no Fujita-type reactionfsuch asfv = v^1+has been considered in the literature. In this case (as with system SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ].), it is worth noting that the equations are coupled through the diffusion process. In this paper, we therefore propose to study problem SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]. as a first step towards investigating these coupled by diffusion Fujita-type systems. This work is structured as follows. In Section <ref>, we start with an introduction of the notations used throughout. Subsequently, we present our results on the diffusive system SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ]. and the semi-linear problem SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]. in Subsection <ref> and Subsection <ref> respectively. The remaining sections are devoted to providing the proofs for the statements outlined in Section <ref>. Specifically, in Section <ref> we prove the results related to linear system SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ].. In Section <ref> we address problem SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]. when global existence is achievable. Lastly, we discuss the systematic blow-up case in Section <ref>. § NOTATIONS AND MAIN RESULTS The conventions and notations used in this paper are gathered below. To avoid confusion, we use bold font ($̆, $̌,,) to denote the solutions to diffusive problems, while we use regular font (u,v,σ,δ) for the solutions to non-linear problems. Additionally, ifw = wt,x, the abbreviationwtobviously stands for the functionwt,. We also indicate functions that depend solely on time — such as ODE solutions — in capital letters (U,V,S,D). In the whole document the functionG = Gt,x : = 4πt^-N/2e^-x^2/4tdenotes the Heat kernel inℝ^Nwith unit diffusion rate, and∗is the classical convolution product. For any subsetΩ⊂ℝ^N, we denote∂Ωits topological frontier. In addition, we useB0,Rto denote the ball{x∈ℝ^N such that x<R}, andB0,Rto represent its indicator function. For anyf ∈L^1ℝ^N, the Fourier transform we employ is defined byfξ = f̂ξ : = ∫_ℝ^N^ fx e^-i ξ· x dx,for which holds the inversion formulaf = 2π^-N∘f- as soon asf ∈L^1ℝ^Nand the Hausdorff-Young inequalities fL^∞ℝ^N≤2π^-Nf̂L^1ℝ^N, and f̂L^∞ℝ^N≤fL^1ℝ^N. We adopt the notationSℝ^Nto represent the Schwarz space of rapidly decreasing functions that consists of smooth functionsφ: ℝ^N →ℝthat, along with all their derivatives, exhibit faster decay than any polynomial asx→∞. §.§ The linear problem SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ]. To start with, let us consider the Heat exchanger system SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ]. in the case where all the parametersc,d,μ,νare equal to1. Then the linear transformation, : = +̆,̌-̆enables the uncoupling of the unknowns, and thus, [ t = 1+e^-2t2Gt∗ u_0 + 1-e^-2t2Gt∗ v_0,; t = 1-e^-2t2Gt∗ u_0 + 1+e^-2t2Gt∗ v_0. 
] From SYS_heat_exchanger_diff_same_diff_solutions[ t = 1+e^-2t2Gt∗ u_0 + 1-e^-2t2Gt∗ v_0,; t = 1-e^-2t2Gt∗ u_0 + 1+e^-2t2Gt∗ v_0. ], observe that,̆separates into an evanescent part_̆e,_̌ewhich decays exponentially fast, and a persistent part_̆∞,_̌∞, which is the solution to the uncoupled Heat equations [ ∂_t_̆∞ = Δ_̆∞, t>0, x∈ℝ^N,; ∂_t_̌∞ = Δ_̌∞, t>0, x∈ℝ^N.; ] both starting from the averaged datumu_0+v_0/2. In particular, it becomes clear that both$̆ and $̌ converge to zero with algebraic decay rateN/2. Our first primary contribution is to demonstrate that a similar phenomenon occurs in the general case wherec,d,μ,νmay differ. Let ,̆ be the solution to system SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ]. supplemented with non-negative initial datum u_0,v_0, such that u_0, v_0, u_0, v_ 0 are all in L^1ℝ^N. Define, for ξ∈ℝ^N, the radial functions rξ : = c-d/2ξ^2 + μ-ν/2, sξ : = μν + rξ^2, Lξ : = √(sξ) - c+d/2ξ^2 + μ+ν/2, and the dispersal operator ℒ : Sℝ^N→Sℝ^N through the relation ℒf = L ×f, for all f ∈Sℝ^N. Finally, let _̆∞,_̌∞ be the solution to the uncoupled diffusion equations [ ∂_t_̆∞ = ℒ_̆∞, t>0, x∈ℝ^N,; ∂_t_̌∞ = ℒ_̌∞, t>0, x∈ℝ^N,; ] respectively starting from the data u_∞,0 and v_∞,0 defined via their Fourier transforms: û_∞,0 : = 121 - rξ√(sξ)û_0 + ν√(sξ) v̂_ 0, v̂_∞,0 : = 12μ√(sξ) û_0 + 1 + rξ√(sξ)v̂_ 0. Then there are two positive constants k and k' which depend on N,c,d,μ,ν such that [ t-_̆∞tL^∞ℝ^N≤ k u_0L^1ℝ^N+v_0L^1ℝ^N e^-t √(μ)+√(ν)^2/2,; t-_̌∞tL^∞ℝ^N≤ k' u_0L^1ℝ^N+v_0L^1ℝ^N e^-t √(μ)+√(ν)^2/2, ] for all t>1. [on Theorem <ref>]   • The operator ℒ—DEF_LLξ : = √(sξ) - c+d/2ξ^2 + μ+ν/2.-DEF_cal_Lℒf = L ×f, for all f ∈Sℝ^N.. By observing the behavior of L at high and low frequencies, we can discern similarities between ℒ and the Laplacian. Specifically, recalling the identity c Δ f = -cξ^2×f, for all f ∈Sℝ^N, an analysis of L near the null frequency and as ξ approaches infinity reveals that ℒ behaves like cν+dμμ+ν^-1Δ at low frequencies and minc,dΔ at high frequencies. Note also that L becomes independent on the exchange parameters μ and ν when the diffusion rates c and d are equal. In this case, ℒ≡ cΔ≡ dΔ. Figure <ref> below presents radial profiles of L for various combinations of the parameters c,d,μ,ν. The first line is the simple case c=d=μ=ν=1, for which Lξ = -ξ^2. Upon examining the different profiles of L in Figure <ref>, we observe two distinct types of shapes that depend on how L changes its behavior from zero to infinity. More precisely, we see soft transitions in the first and second lines of the table, and sharp transitions in the third and last lines. These differences in the shapes of the decay may indicate varying degrees of homogeneity/heterogeneity in how $̆ and$̌ interact with individuals in their respective environments. Indeed, recalling that μ (resp. ν) represents the outgoing exchange rate of $̆ (resp.$̌), we can observe that in the sharp transitions cases only, one of the two unknowns has both a large diffusion rate and an outgoing exchange rate less than or equal to the ingoing one. In this way, the strongest diffuser that primarily conserves its individuals mostly affects the low frequencies of L, its contribution quickly vanishing at high frequencies. FIGURE < g r a p h i c s > [c]14.6079cmFigure — Representation of Lξ for different combinations of c,d,μ,ν. The curves are shown at different scales for readability reasons. The soft transition cases (resp. 
sharp transition cases) — see the remarks on ℒ above — are displayed in the first and second (resp. third and fourth) lines. The final column provides a visual representation of how $̆ and$̌ exchange and spread the individuals. • The persistent data u_∞,0 and v_∞,0—DEF_u0v0_diff_non_loc_uncoupledû_∞,0 : = 121 - r√(s)û_0 + ν√(s) v̂_ 0, v̂_∞,0 : = 12μ√(s) û_0 + 1 + r√(s)v̂_ 0.. By analyzing DEF_r_and_s_preuve_th_linear[ rξ : = c-d/2ξ^2 + μ-ν/2,; sξ : = μν + rξ^2. ], observe that r/√(s) and 1/√(s) stay bounded between 0 and 1, ensuring the integrability of u_∞,0 and v_∞,0. Therefore, u_∞,0 and v_∞,0 can be defined correctly by DEF_u0v0_diff_non_loc_uncoupledû_∞,0 : = 121 - r√(s)û_0 + ν√(s) v̂_ 0, v̂_∞,0 : = 12μ√(s) û_0 + 1 + r√(s)v̂_ 0. and written using the inversion formula given at the beginning of Section <ref>. When the diffusion rates c and d are equal, expressions of u_∞,0 and v_∞,0 become straightforward linear combinations of u_0 and v_0, and the Fourier transform is no longer required to explain them. When the diffusion rates are different — let us say c<d for simplicity — the positive functions 1+r/√(s)/2, μ/2√(s) and ν/2√(s) collapse algebraically fast in the high frequencies, acting as low-pass filters. Similarly, the positive function 1-r/√(s)/2 acts as a high-pass filter by remaining below the value 1 towards which it converges as ξ goes to infinity. Therefore, we can understand the persistent data u_∞,0 and v_∞,0 as linear combinations of u_0 and v_0 that have been altered by high- and low-pass frequency filters. We now examine the decay rate of,̆which is expected to be algebraic. To reach this, it suffices to know how_̆∞tand_̌∞tdecay inL^∞ℝ^N, given the exponential convergence towards the persistent part_̆∞,_̌∞ensured by Theorem <ref>. The corollary that follows reveals that this decaying occurs with magnitudeN/2, which is tightly related to how the operatorℒaffects the low frequencies of its argument. Under the hypothesis of Theorem <ref>, there are two positive constants ℓ and ℓ' depending on N,c,d,μ,ν, such that [ tL^∞ℝ^N≤ℓ ( u_0L^1ℝ^N+v_0L^1ℝ^N+û_0L^1ℝ^N+v̂_ 0L^1ℝ^N.)/1+t^N/2,; tL^∞ℝ^N≤ℓ' ( u_0L^1ℝ^N+v_0L^1ℝ^N+û_0L^1ℝ^N+v̂_ 0L^1ℝ^N.)/1+t^N/2, ] for all t≥ 0. §.§ The semi-linear problem SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]. Having examined the behavior of the pure diffusive Heat exchanger SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ]. in the previous subsection, we are now prepared to address the question of global existence versus blow-up for our main problem SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ].. Before proceeding with the analysis, it is worth recalling that the comparison principle ensures that, given the same initial datumu_0,v_0, the solution to SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v,; u,v|_t=0 = u_0,v_0. ].|_κ=0always remains lower than the solution to SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + v^1+,; u,v|_t=0 = u_0,v_0. ].|_κ=1. Consequently, to demonstrate blow-up, it is sufficient to consider SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v,; u,v|_t=0 = u_0,v_0. ].|_κ=0, while to establish global existence, we only need to examine SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + v^1+,; u,v|_t=0 = u_0,v_0. ].|_κ=1. 
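The two regimes of L described in the remark above (a Laplacian with effective diffusivity (cν+dμ)/(μ+ν) at low frequencies and min(c,d) at high frequencies) can be checked with a short sympy computation, writing w for |ξ|^2. The sample rates below are arbitrary and only serve the check; this is a sketch, not part of the proof.

```python
import sympy as sp

c, d, mu, nu, w = sp.symbols('c d mu nu w', positive=True)   # w stands for |xi|^2

r = (c - d) / 2 * w + (mu - nu) / 2
s = mu * nu + r ** 2
L = sp.sqrt(s) - ((c + d) / 2 * w + (mu + nu) / 2)

# Sample rates, chosen only for the check (Rationals keep the computation exact).
vals = {c: 1, d: 2, mu: sp.Rational(7, 10), nu: sp.Rational(3, 10)}

# Low frequencies: dL/d(|xi|^2) at 0 should equal -(c*nu + d*mu)/(mu + nu).
print(sp.diff(L, w).subs(w, 0).subs(vals), (-(c * nu + d * mu) / (mu + nu)).subs(vals))

# High frequencies: L/|xi|^2 -> -min(c, d) as |xi| -> infinity.
print(sp.limit(L.subs(vals) / w, w, sp.oo))
```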
Corollary <ref> states that the solutions to SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ]. decay uniformly to zero at the algebraic rateN/2. With regard to the uncoupled reaction component of SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]., namely, [ U' = U^1+, t>0,; V' = κ V^1+, t>0, ] it is apparent that, given any positive initial dataU_0,V_0, * U blows-up at the rate 1/ for κ∈0,1, and * V blows-up at the rate 1/ only when κ = 1. As observed in introduction, we anticipate distinguishing between systematic blow-up and possible global existence by comparing the decay rate of SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ]. with the blow-up rates ofSYS_heat_exchanger_growth[ U' = U^1+,; V' = κ V^1+. ]. To be more specific, the regime transition is expected atN/2 = 1/whenκ=0andN/2 = max1/,1/whenκ=1. These critical exponents can simply be confirmed when the ratesc,d,μ,νare identical. To see this, let us assumec=d=μ=ν=1, and defineσ,δ : = u+v,u-v. Observe that the blowing-up ofσis equivalent to that ofu,v. We first examine the solutions to SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + v^1+,; u,v|_t=0 = u_0,v_0. ].|_κ=1to explore the possible global existence. In an attempt to show thatσmay not blow-up, we have∂_tσ = Δσ + u^1+ + v^1+ ≤ Δσ + σ^1+ + σ^1+ = : Lσ.Thusσserves as a sub-solution to∂_t=L. Under the assumptionmin,>2/N, we can find small enough positive valuesηandRto allow the solutionσto the Cauchy problem {[ ∂_tσ = Δσ + 2σ^ 1+min,, t>0, x∈ℝ^N,; σ|_t=0 = ηB0, R, x∈ℝ^N, ]. to be global and such thatσt≤1for anyt>0. This can be seen from the classical construction of global super-solutions as the product of a time-dependent function with the solution to the Heat equation — see QuittnerSuperlinear19130 for instance. Using this estimation onσ, we write∂_tσ ≥ Δσ + σ^ 1+min,+σ^ 1+max, = Lσ,which indicates thatσis a global super-solution to∂_t=L. Now,σ≤σas long asu_0andv_0are chosen small enough to satisfyu_0+v_0≤ηB0, R, and with that condition met, this case is resolved. For the systematic blow-up (still withc=d=μ=ν=1) we examine SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v,; u,v|_t=0 = u_0,v_0. ].|_κ=0, and observe that∂_tδ = Δδ-2δ + u^1+ ≥ Δδ-2δ,which implies thatδremains greater than the sub-solutione^-2tGt∗u_0-v_0, that is positive as long asu_0>v_0. It is important to note that the assumptionu_0>v_0is not a strict requirement for proving systematic blow-up. Indeed, we may supposeu_0>0(even if we need to shift the time for this to hold) and then work with the solution arising from the initial datumu_0,0. With the established order relationu>vwhich holds for all times, we can determine now thatσsystematically blows-up when<2/Nsince∂_tσ = Δσ + u^1+ ≥ Δσ + u/2 +v/2^1+ = Δσ + σ^1+/2^1+.In those cases of identical rates, the key argument that enables progress concisely stands incΔ u ± dΔ v = cΔu ± v = dΔu ± v.However, these equalities break down when the diffusion rates differ, makingc≠done of the main challenges of the problem. In the general case wherec,d,μ,νare arbitrary, we cannot rely on our knowledge about Fujita's original problem EQ_Fujita{[ ∂_tu = Δ u + u^1+,; u|_t=0 = u_0. ]. to draw conclusions on the lifetime of solutions to SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ].. 
Theorem <ref> and Theorem <ref> fill this gap and are the main contribution of the present paper. Let N/2> {[ 1/ if κ=0,; max1/, 1/ if κ=1.; ]. Then there is m_0>0 that depends on N,c,d,μ,ν,,,κ such that the solution u,v to problem SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]. with non-negative u_0,v_0 exists for all times as soon as m : = u_0L^1ℝ^N + v_0L^1ℝ^N + û_0L^1ℝ^N+v̂_ 0L^1ℝ^N < m_0. Furthermore, under hypothesis CONTROL_pour_avoir_existence_globalem : = u_0L^1ℝ^N + v_0L^1ℝ^N + û_0L^1ℝ^N+v̂_ 0L^1ℝ^N < m_0., utL^∞ℝ^N≤M/1+t^N/2 and vtL^∞ℝ^N≤M'/1+t^N/2, for all t>0 and some positive constants M and M' depending on N,c,d,μ,ν,,,κ,and m. Let N/2< {[ 1/ if κ=0,; max1/, 1/ if κ=1.; ]. Then any solution u,v to problem SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]. with non-both-trivial and non-negative initial datum u_0,v_0∈L^1ℝ^N∩ L^∞ℝ^N^2 blows-up in finite time.   • Simultaneous versus non-simultaneous blow-up. Based on BLOW_UP_L_inftylim_t → T^-utL^∞ℝ^N + vtL^∞ℝ^N = ∞., it is clear that at least one of the two components of u,v becomes unbounded at the blowing time. Nevertheless, it remains uncertain whether only one component or both components tend to infinity as the blowing-up occurs. The first scenario is called non-simultaneous blow-up, whereas the second is referred to as simultaneous blow-up. Souplet and Tayachi have explored similar issues in SoupletOptimal04 for a Fujita-type system with reactions taking the form of u^α+v^β,u^γ+v^δ. In their study, they succeeded to separate systematic simultaneous from possible non-simultaneous blow-up with conditions between α and γ and between β and δ. As for problem SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]., we leave this issue as an open question, but we nevertheless emphasize that the integrability in time of the L^∞ℝ^N-norm of the component that goes to infinity could be a decisive factor in determining the behavior of the other component. To illustrate this idea, let us consider the simple case SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v,; u,v|_t=0 = u_0,v_0. ].|_κ=0 with c=d=μ=ν=1 supplemented with non-negative and identically constant datum U_0,V_0. In this case the solution U,V does not depend on space and thus solves the ODE system {[ U' = - U + V + U^1+, t>0,; V' = U - V, t>0. ]. Observe that if we suppose U_0>V_0, then (by subtracting the unknowns) U>V, so that U+V' = U^1+ ≥ U/2 + V/2^1+ = 1/2^1+U+V^1+, and U+V blows-up at some finite time T with at least U becoming unbounded at this time. Now assume the blow-up rate of U to be of magnitude a>0, namely, C/T-t^a ≤ Ut ≤ C/T-t^a, for any t∈0T and some positive constants C≤C. Turning to V, we have e^-tV_0 + C∫_0^te^-t-s/T-s^a ds ≤ Vt ≤ e^-tV_0 + C∫_0^te^-t-s/T-s^a ds. Thus, the boundedness of V at the blowing time is equivalent to the convergence of the integral ∫_0^TT-s^-ads, which occurs as soon as a<1. This conditions express that U must diverge slowly in some sense if we want V to be able to explode with it. Obviously, this is a formal argument and a rigorous proof would require a more detailed analysis of the system. Nevertheless, this example illustrates the importance of the blow-up rate in determining the behavior of solutions near the blowing-up. 
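The spatially constant toy system of the remark, U' = -U + V + U^(1+p), V' = U - V, can be integrated numerically to visualize the finite-time blow-up of U when U_0 > V_0. This is only a rough forward-Euler sketch with the lost exponent symbol written as p and arbitrary initial data; it does not settle the simultaneity question discussed above, it merely reports the value of V at the moment U crosses a large cap.

```python
def blow_up_time(U0, V0, p=1.0, dt=1e-5, cap=1e6):
    # Forward-Euler integration of U' = -U + V + U**(1+p), V' = U - V
    # (the spatially constant case c = d = mu = nu = 1 of the remark above).
    U, V, t = U0, V0, 0.0
    while U < cap:
        U, V = U + dt * (-U + V + U ** (1 + p)), V + dt * (U - V)
        t += dt
    return t, V

for U0 in (2.0, 3.0, 5.0):
    t_star, V_end = blow_up_time(U0, V0=1.0)
    print(f"U0={U0}: U exceeds 1e6 at t~{t_star:.3f}, V~{V_end:.3f} at that moment")
```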
§ ASYMPTOTIC BEHAVIOR OF THE HEAT EXCHANGER SYSTEM This section focuses on the large time behavior of the solution,̆ to the Cauchy problem associated with the linear system SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ].. We begin by providing the proof of Theorem <ref>, which relies on Fourier analysis of the solution. By doing so, we are able to separate,̆ into two parts,_̆∞,_̌∞and_̆e,_̌e, where anL_x^∞ℝ^N/L^1_ξℝ^N-analysis shows that the second part decays exponentially fast. Finally we evaluate the time derivatives of _∞and _∞which brings us to the formulation of problemSYS_diff_non_loc_uncoupled[ ∂_t_̆∞ = ℒ_̆∞,; ∂_t_̌∞ = ℒ_̌∞.; ]-DEF_u0v0_diff_non_loc_uncoupledû_∞,0 : = 121 - r√(s)û_0 + ν√(s) v̂_ 0, v̂_∞,0 : = 12μ√(s) û_0 + 1 + r√(s)v̂_ 0.. Applying the Fourier transform to the equations in SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ]., namely ∂_t=̆ cΔ-̆μ+̆νand ∂_t=̌ dΔ+̌μ-̆ν,̌ we are led to the ODE system ∂_t[ ; ] = [ -cξ^2 - μ ν; μ -dξ^2 - ν ] _= : A ξ[ ; ]. It may be checked that the two eigenpairs, λ_+ξ,e_+ξ and λ_-ξ,e_-ξ, of A are given by λ_± = -c+d/2ξ^2+μ+ν/2±√(s) and e± = [ ν; r ±√(s) ], where r and s are defined in DEF_r_and_s_preuve_th_linear[ rξ : = c-d/2ξ^2 + μ-ν/2,; sξ : = μν + rξ^2. ]. As a result, we obtain the following expressions for and : [ t,ξ; t,ξ ] = 1/2[ 1 - r/√(s)e^tλ_++1 + r/√(s)e^tλ_- ν/√(s)e^tλ_+-e^tλ_-; μ/√(s)e^tλ_+-e^tλ_- 1 + r/√(s)e^tλ_++1 - r/√(s)e^tλ_- ][ û_0ξ; v̂_ 0ξ ]. We split and into [ _∞t,ξ; _∞t,ξ ] : = e^tλ_+/2[ 1 - r/√(s) ν/√(s); μ/√(s) 1 + r/√(s) ][ û_0ξ; v̂_ 0ξ ] and [ _et,ξ; _et,ξ ] : = e^tλ_-/2[ 1 + r/√(s) -ν/√(s); -μ/√(s) 1 - r/√(s) ][ û_0ξ; v̂_ 0ξ ], so that =_∞+_e and =_∞+_e, and the boundedness of the matrices inEQ_persistent_part_Fourier_side[ _∞t,ξ; _∞t,ξ ] : = e^tλ_+/2[ 1 - r/√(s) ν/√(s); μ/√(s) 1 + r/√(s) ][ û_0ξ; v̂_ 0ξ ]. andEQ_evanescent_part_Fourier_side[ _et,ξ; _et,ξ ] : = e^tλ_-/2[ 1 + r/√(s) -ν/√(s); -μ/√(s) 1 - r/√(s) ][ û_0ξ; v̂_ 0ξ ]. provides the necessary integrability to go back into the spatial domain. As a result, we can write =̆_̆∞+_̆e and =̌_̌∞+_̌e. The remainder of the proof consists in demonstrating that the evanescent parts _̆et and _̌et decay exponentially fast in L^∞ℝ^N. Following this, we establish that _̆∞,_̌∞ indeed serves as the solution to the Cauchy problemSYS_diff_non_loc_uncoupled[ ∂_t_̆∞ = ℒ_̆∞,; ∂_t_̌∞ = ℒ_̌∞.; ]-DEF_u0v0_diff_non_loc_uncoupledû_∞,0 : = 121 - r√(s)û_0 + ν√(s) v̂_ 0, v̂_∞,0 : = 12μ√(s) û_0 + 1 + r√(s)v̂_ 0.. • Vanishing of _̆et,_̌et. Thanks to the Hausdorff-Young inequalities INEQUALITY_Hausdorff_Young[ fL^∞ℝ^N≤2π^-Nf̂L^1ℝ^N,; f̂L^∞ℝ^N≤fL^1ℝ^N. ], controlling _et and _et in L^1ℝ^N enables us to obtain estimates on _̆et and _̌et in L^∞ℝ^N. To accomplish this, we first analyze the components ofEQ_evanescent_part_Fourier_side[ _et,ξ; _et,ξ ] : = e^tλ_-/2[ 1 + r/√(s) -ν/√(s); -μ/√(s) 1 - r/√(s) ][ û_0ξ; v̂_ 0ξ ]. which result in the following estimates for all ξ∈ℝ^N: λ_-≤ -c+d/2ξ^2 + √(μ)+√(ν) ^2/2, 1±r/√(s)≤ 2, ν/√(s)≤√(ν/μ), μ/√(s)≤√(μ/ν), and using INEQUALITY_Hausdorff_Young[ fL^∞ℝ^N≤2π^-Nf̂L^1ℝ^N,; f̂L^∞ℝ^N≤fL^1ℝ^N. ], | û_0ξ|≤u_0L^1ℝ^N and | v̂_ 0ξ|≤v_0L^1ℝ^N. 
Then, by considering the expression for _et inEQ_evanescent_part_Fourier_side[ _et,ξ; _et,ξ ] : = e^tλ_-/2[ 1 + r/√(s) -ν/√(s); -μ/√(s) 1 - r/√(s) ][ û_0ξ; v̂_ 0ξ ]., the estimation of its L^1-norm gives, for t>1, _etL^1ℝ^N = 1/2∫_ℝ^N^1+r/√(s)û_0ξ - ν/√(s) v̂_ 0ξe^tλ_- dξ ≤1/22u_0L^1ℝ^N+√(ν/μ)v_0L^1ℝ^Ne^-t √(μ)+√(ν)^2/2∫_ℝ^N^e^-tc+d/2ξ^2dξ ≤max1,√(ν/4μ) 2π/c+d^N/2u_0L^1ℝ^N+v_0L^1ℝ^N e^-t √(μ)+√(ν)^2/2, where CONTROL_lambda_moinsλ_-≤ -c+d/2ξ^2 + √(μ)+√(ν) ^2/2., CONTROL_elts_evanescent_part1±r/√(s)≤ 2, ν/√(s)≤√(ν/μ), μ/√(s)≤√(μ/ν). and CONTROL_hat_u0_v0| û_0ξ|≤u_0L^1ℝ^N, | v̂_ 0ξ|≤v_0L^1ℝ^N. have been used to go from the first to the second line. Performing a similar calculation for _et leads to _etL^1ℝ^N≤max1,√(μ/4ν) 2π/c+d^N/2u_0L^1ℝ^N+v_0L^1ℝ^Ne^-t √(μ)+√(ν)^2/2, so that, using again INEQUALITY_Hausdorff_Young[ fL^∞ℝ^N≤2π^-Nf̂L^1ℝ^N,; f̂L^∞ℝ^N≤fL^1ℝ^N. ], we can retrieveCONTROL_conv_expo_diff_non_loc_uncoupled[ t-_̆∞tL^∞ℝ^N≤ k u_0L^1ℝ^N+v_0L^1ℝ^N e^-t √(μ)+√(ν)^2/2,; t-_̌∞tL^∞ℝ^N≤ k' u_0L^1ℝ^N+v_0L^1ℝ^N e^-t √(μ)+√(ν)^2/2,; for all t>1. ] from CONTROL_hat_ue_L_1_etL^1ℝ^N≤max1,√(ν/4μ) 2π/c+d^N/2u_0L^1ℝ^N+v_0L^1ℝ^N e^-t √(μ)+√(ν)^2/2. and CONTROL_hat_ve_L_1_etL^1ℝ^N≤max1,√(μ/4ν) 2π/c+d^N/2u_0L^1ℝ^N+v_0L^1ℝ^Ne^-t √(μ)+√(ν)^2/2. with k : = max(1,√(ν/4μ))/2πc+d^N/2 and k' : = max1,√(μ/4ν)/2πc+d^N/2. • Problem satisfied by _̆∞,_̌∞. Taking first t=0 inEQ_persistent_part_Fourier_side[ _∞t,ξ; _∞t,ξ ] : = e^tλ_+/2[ 1 - r/√(s) ν/√(s); μ/√(s) 1 + r/√(s) ][ û_0ξ; v̂_ 0ξ ]. straightly recovers the initial datum u_∞,0,v_∞,0 in DEF_u0v0_diff_non_loc_uncoupledû_∞,0 : = 121 - r√(s)û_0 + ν√(s) v̂_ 0, v̂_∞,0 : = 12μ√(s) û_0 + 1 + r√(s)v̂_ 0.. Then, notice that λ_+ defined in DEF_eigen_elements_matrix_Aλ_± = -c+d/2ξ^2+μ+ν/2±√(s). and L defined in DEF_LLξ : = √(sξ) - c+d/2ξ^2 + μ+ν/2. are actually the same functions. Hence, by working on ∂_t_̆∞ from the frequency domain, we have ∂_t_̆∞ = ∂_t _∞ = λ_+×_∞ = L ×_∞ = ℒ_̆∞, where the last equality precisely comes from the definition of the operator ℒ in DEF_cal_Lℒf = L ×f, for all f ∈Sℝ^N.. The same calculation for ∂_t_̌∞ gives ∂_t_̌∞ = ℒ_̌∞, and thus, _̆∞,_̌∞ solvesSYS_diff_non_loc_uncoupled[ ∂_t_̆∞ = ℒ_̆∞,; ∂_t_̌∞ = ℒ_̌∞.; ]. We now proceed with the proof of Corollary <ref> regarding the decay rate. We start by obtaining uniform controls ontandtfort>1, specifically, [ tL^∞ℝ^N≤ℓ ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N.)/t^N/2,; tL^∞ℝ^N≤ℓ' ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N.)/t^N/2. ] The approach used to deriveCONTROL_heristic_uv_decay_rate_diff[ tL^∞ℝ^N≤ℓ ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N.)/t^N/2,; tL^∞ℝ^N≤ℓ' ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N.)/t^N/2. ] involves splitting high versus low frequencies of the solution and recognizing that only the low frequencies contribute algebraically at large times — refer to ChasseigneAsymptotic06 and AlfaroFujita17 for related phenomena. Given this observation, it is sufficient to note thattandtare bounded fort≤1to establishCONTROL_uv_decay_rate_diff[ tL^∞ℝ^N≤ℓ ( u_0L^1ℝ^N+v_0L^1ℝ^N+û_0L^1ℝ^N+v̂_ 0L^1ℝ^N.)/1+t^N/2,; tL^∞ℝ^N≤ℓ' ( u_0L^1ℝ^N+v_0L^1ℝ^N+û_0L^1ℝ^N+v̂_ 0L^1ℝ^N.)/1+t^N/2,; for all t>0. ] for all times. Recall that _̆e=-̆_̆∞ and _̌e=-̌_̌∞. 
Referring toCONTROL_conv_expo_diff_non_loc_uncoupled[ t-_̆∞tL^∞ℝ^N≤ k u_0L^1ℝ^N+v_0L^1ℝ^N e^-t √(μ)+√(ν)^2/2,; t-_̌∞tL^∞ℝ^N≤ k' u_0L^1ℝ^N+v_0L^1ℝ^N e^-t √(μ)+√(ν)^2/2, ] we can write the following for t>1: [ _̆etL^∞ℝ^N≤ℓ_e .u_0L^1ℝ^N+v_0L^1ℝ^N/t^N/2,; _̌etL^∞ℝ^N≤ℓ'_e .u_0L^1ℝ^N+v_0L^1ℝ^N/t^N/2, ] where ℓ_e and ℓ'_e are positive constants depending on μ, ν and respectively k and k'— hence N,c,d,μ,ν. We then shift to the consistent part of this proof that concerns the estimations of _̆∞ and _̌∞. Based on the Taylor expansion of L near the origin, specifically, Lξ = - cν+dμ/μ+νξ^2 + oξ^2, as ξ→ 0, we can chose a small enough positive constant a (depending on L, hence c,d,μ,ν) so that Lξ≤ - cν+dμ/2μ+νξ^2, as soon as ξ≤ a. Basic calculus may be employed to show that L is radially decreasing. This enables to find a positive constant η (depending on L and a, and consequently on c,d,μ,ν) that guarantees Lξ≤ - η, as soon as ξ≥ a. In the same manner as we demonstrated the vanishing of _̆e,_̌e, we use the Hausdorff-Young inequalities INEQUALITY_Hausdorff_Young[ fL^∞ℝ^N≤2π^-Nf̂L^1ℝ^N,; f̂L^∞ℝ^N≤fL^1ℝ^N. ] to control _̆∞tL^∞ℝ^N and _̌∞tL^∞ℝ^N through estimates on _∞tL^1ℝ^N and _∞tL^1ℝ^N. Starting with _̆∞, we have _∞tL^1ℝ^N ≤ ∫_ξ≤ a^ _∞t,ξdξ_ + ∫_ξ≥ a^ _∞t,ξdξ_ = : tL^1ℝ^N + tL^1ℝ^N. For the high frequencies, we express _∞ fromEQ_persistent_part_Fourier_side[ _∞t,ξ; _∞t,ξ ] = : e^tλ_+/2[ 1 - r/√(s) ν/√(s); μ/√(s) 1 + r/√(s) ][ û_0ξ; v̂_ 0ξ ]. with λ_+=L. This yields tL^1ℝ^N = 1/2∫_ξ≥ a^1-r/√(s)û_0ξ + ν/√(s) v̂_ 0ξe^tLξdξ. Next, using controls on LCONTROL_L_high_freqLξ≤ - η, as soon as ξ≥ a. and on 1-r/√(s) and ν/√(s)CONTROL_elts_evanescent_part1±r/√(s)≤ 2, ν/√(s)≤√(ν/μ), μ/√(s)≤√(μ/ν)., we eventually get tL^1ℝ^N≤ e^-tη û_0L^1ℝ^N+√(ν/4μ) v̂_ 0L^1ℝ^N which collapses exponentially fast. For the low frequencies, we similarly have tL^1ℝ^N = 1/2∫_ξ≤ a^1-r/√(s)û_0ξ + ν/√(s) v̂_ 0ξe^tLξdξ. Using controls on LCONTROL_L_low_freqLξ≤ - cν+dμ/2μ+νξ^2, as soon as ξ≤ a., on 1-r/√(s) and ν/√(s)CONTROL_elts_evanescent_part1±r/√(s)≤ 2, ν/√(s)≤√(ν/μ), μ/√(s)≤√(μ/ν)., and on u_0 and v_ 0CONTROL_hat_u0_v0| û_0ξ|≤u_0L^1ℝ^N, | v̂_ 0ξ|≤v_0L^1ℝ^N., we find: tL^1ℝ^N ≤u_0L^1ℝ^N+√(ν/4μ)v_0L^1ℝ^N∫_ξ≤ a^e^-tcν+dμ/2μ+νξ^2dξ ≤2πμ+ν/cν+dμ^N/2.u_0L^1ℝ^N+√(ν/4μ)v_0L^1ℝ^N/t^N/2. Finally, we combine CONTROL_hat_u_infty_hightL^1ℝ^N≤ e^-tη û_0L^1ℝ^N+√(ν/4μ) v̂_ 0L^1ℝ^N. and CONTROL_hat_u_infty_lowtL^1ℝ^N≤2πμ+ν/cν+dμ^N/2.u_0L^1ℝ^N+√(ν/4μ)v_0L^1ℝ^N/t^N/2. and use Hausdorff-Young inequalities INEQUALITY_Hausdorff_Young[ fL^∞ℝ^N≤2π^-Nf̂L^1ℝ^N,; f̂L^∞ℝ^N≤fL^1ℝ^N. ] to control _̆∞t in L^∞ℝ^N. By returning toGO_BACK_∞tL^1ℝ^N≤tL^1ℝ^N + tL^1ℝ^N., we can follow the same approach to estimate _̌∞t as well. As a consequence, we can identify two positive constants ℓ_∞ and ℓ'_∞, which depend on N,c,d,μ,ν, and satisfy the following inequalities for t > 1: [ _̆∞tL^∞ℝ^N≤ℓ_∞ ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N.)/t^N/2,; _̌∞tL^∞ℝ^N≤ℓ'_∞ ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N.)/t^N/2. ] To complete the control of ,̆ for t>1, we combine the estimations on _̆e,_̌eCONTROL_quick_glance_ue_ve_decay_rate_diff[ _̆etL^∞ℝ^N≤ℓ_e .u_0L^1ℝ^N+v_0L^1ℝ^N/t^N/2,; _̌etL^∞ℝ^N≤ℓ'_e .u_0L^1ℝ^N+v_0L^1ℝ^N/t^N/2. ] and _̆∞,_̌∞CONTROL_u_infty_v_infty_decay_rate_diff[ _̆∞tL^∞ℝ^N≤ℓ_∞ ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N.)/t^N/2,; _̌∞tL^∞ℝ^N≤ℓ'_∞ ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N.)/t^N/2. 
] to obtainCONTROL_heristic_uv_decay_rate_diff[ tL^∞ℝ^N≤ℓ ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N.)/t^N/2,; tL^∞ℝ^N≤ℓ' ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N.)/t^N/2. ] with ℓ : = ℓ_e+ ℓ_∞ and ℓ' : = ℓ'_e+ ℓ'_∞. Now, for t≤ 1, the comparison principle ensures that ,̆ stays below the solution , to SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ]. with initial condition u_ 0,v_0≡u_0L^∞ℝ^N,v_0L^∞ℝ^N. Notably, , actually does not depend on the space variable and therefore solves the ODE system {[ ∂_t = - μ + ν, t>0,; ∂_t = μ - ν, t>0. ]. Letting : = +, we clearly have ∂_t≡ 0, so that, max,̆≤max,≤ ≡u_0L^∞ℝ^N+v_0L^∞ℝ^N ≤2π^-N(û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N), where we used the Hausdorff-Young inequalities INEQUALITY_Hausdorff_Young[ fL^∞ℝ^N≤2π^-Nf̂L^1ℝ^N,; f̂L^∞ℝ^N≤fL^1ℝ^N. ] to obtain the last line. As a consequence, we have, [ tL^∞ℝ^N≤2π^-N ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N),; tL^∞ℝ^N≤2π^-N ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N), ] for all t≤ 1. To complete the proof, we need to find suitable values for ℓ and ℓ' such thatCONTROL_uv_decay_rate_diff[ tL^∞ℝ^N≤ℓ ( u_0L^1ℝ^N+v_0L^1ℝ^N+û_0L^1ℝ^N+v̂_ 0L^1ℝ^N.)/1+t^N/2,; tL^∞ℝ^N≤ℓ' ( u_0L^1ℝ^N+v_0L^1ℝ^N+û_0L^1ℝ^N+v̂_ 0L^1ℝ^N.)/1+t^N/2,; for all t>0. ] satisfies both estimatesCONTROL_t_leq_one_uv_decay_rate_diff[ tL^∞ℝ^N≤2π^-N ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N),; tL^∞ℝ^N≤2π^-N ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N). ] (for t≤ 1) andCONTROL_heristic_uv_decay_rate_diff[ tL^∞ℝ^N≤ℓ ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N.)/t^N/2,; tL^∞ℝ^N≤ℓ' ( u_0L^1ℝ^N+ v_0L^1ℝ^N+ û_0L^1ℝ^N+ v̂_ 0L^1ℝ^N.)/t^N/2. ] (for t>1). It can be shown that choosing ℓ : = 2^N/2max2π^-N , ℓ and ℓ' : = 2^N/2max2π^-N , ℓ' is the optimal solution, so that these values inherit the parameters dependency specified in Corollary <ref>. § POSSIBLE GLOBAL EXISTENCE In this section we show the possible existence of global solutions for problem SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]. whenN/2is larger than both1/andκ/, as stated in Theorem <ref>. The proof involves constructing a global super-solutionu,vfor problem SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]. that drivesuandvto0. More precisely, we try ut,x : = Ft×t,x and vt,x : = Ft×t,x, where,̆is the solution to the pure diffusive Heat exchanger SYS_heat_exchanger_diff{[ ∂_t=̆ cΔ-̆μ+̆ν,̌; ∂_t=̌ dΔ+̌μ-̆ν.̌ ]. with initial conditionu_0,v_0, andFis a positive, bounded and continuously differentiable function to be determined. We then search for sufficient conditions onFthat guaranteeu,vis a global super-solution. This leads to CONTROL_pour_avoir_existence_globalem : = u_0L^1ℝ^N + v_0L^1ℝ^N + û_0L^1ℝ^N+v̂_ 0L^1ℝ^N < m_0. which requires the datumu_0,v_0to be small in some sense. Finally, the upper controlu,v≤supF×,̆retrieves CONTROL_solution_globale[ utL^∞ℝ^N≤M/1+t^N/2, vtL^∞ℝ^N≤M'/1+t^N/2,; for all t>0. ]. Observe that our primary focus is on problem SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + v^1+,; u,v|_t=0 = u_0,v_0. ].|_κ=1to prove Theorem <ref>. However, we also investigate the simpler case SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v,; u,v|_t=0 = u_0,v_0. ].|_κ=0to obtain more accurate values for the constantsm_0,MandM'whenκ=0. Consider u,v as defined in DEF_guess_sur_solut,x : = Ft×t,x and vt,x : = Ft×t,x.. 
First, we set F0=1 ensuring that u,v and u,v have the same initial data. Next, we require the following expressions to be positive for all t>0 and x∈ℝ^N: ∂_tu - cΔu + μu - νv - u^1+ and ∂_tv - dΔv - μu + νv - κv^1+. This leads to F'≥ F^1+×^̆ and F'≥κ× F^1+×^̌. Note that $̆ and$̌ can be replaced in a1F'≥ F^1+×^̆ and F'≥κ× F^1+×^̌. by their L^∞ℝ^N-norms and any control from above of them. Therefore, with Corollary <ref> providing uniform controls on $̆ and$̌, we can say that satisfying the following inequalities is sufficient to recover a1F'≥ F^1+×^̆ and F'≥κ× F^1+×^̌.: F'≥ F^1+×ℓ m/1+t^N/2^ and F'≥κ× F^1+×ℓ' m/1+t^N/2^. Moving forward, we split the discussion into two parts based on whether κ is 0 or 1. • The case κ=0. This case is the simplest of the two since it is sufficient to ask F' = F^1+×ℓ m^/1+t^N/2 if we require a2F'≥ F^1+×ℓ m/1+t^N/2^ and F'≥κ× F^1+×ℓ' m/1+t^N/2^. By solving the ODE a3F' = F^1+×ℓ m^/1+t^N/2. with F0=1, we obtain Ft = [1 - 2ℓ m^/N-21-1/1+t^N/2-1_=: G_0t.]^-1/ It remains to ensure that F exists for all times which is permitted if and only if G_0 does not collide 0. To achieve this, we first need to assume that we are in the regime PF_possible_extinctionN/2> {[ 1/ if κ=0,; max1/, 1/ if κ=1.; ]. to guarantee the vanishing of 1/1+t^N/2-1 in a4Ft = [1 - 2ℓ m^/N-21-1/1+t^N/2-1_Call that G_0t]^-1/. Then, because inf_t≥ 0G_0t = 1-2ℓ m^/N-2, it suffices to chose m<m_0 : = N-2/2ℓ^^1/ to make G_0 positive and so u,v global. To eventually retrieve the controls in CONTROL_solution_globale[ utL^∞ℝ^N≤M/1+t^N/2, vtL^∞ℝ^N≤M'/1+t^N/2,; for all t>0. ], we combine a4Ft = [1 - 2ℓ m^/N-21-1/1+t^N/2-1_Call that G_0t]^-1/-a4infinf_t≥ 0G_0t = 1-2ℓ m^/N-2. andCONTROL_uv_decay_rate_diff[ tL^∞ℝ^N≤ℓ ( u_0L^1ℝ^N+v_0L^1ℝ^N+û_0L^1ℝ^N+v̂_ 0L^1ℝ^N.)/1+t^N/2,; tL^∞ℝ^N≤ℓ' ( u_0L^1ℝ^N+v_0L^1ℝ^N+û_0L^1ℝ^N+v̂_ 0L^1ℝ^N.)/1+t^N/2,; for all t>0. ] in Corollary <ref>, which lead us to set M : = N-2ℓ m^/N-2-2ℓ m^^1/ and M' : = N-2ℓ' m^/N-2-2ℓ m^^1/, completing the proof in this case. • The case κ=1. Similar to the previous case, we aim to set an ODE on F, like a3F' = F^1+×ℓ m^/1+t^N/2., that would satisfy both ODIs in a2F'≥ F^1+×ℓ m/1+t^N/2^ and F'≥κ× F^1+×ℓ' m/1+t^N/2^. With the loose assumption m<1, it is clear that maxm/1+t^N/2^, m/1+t^N/2^ ≤m/1+t^N/2^min,. Moreover, a1F'≥ F^1+×^̆ and F'≥κ× F^1+×^̌. requires F' to be non-negative, so Ft≥ F0 = 1 implies max F^1+, F^1+≤ F^1+max,. As a result, it is sufficient to ask F' = F^1+max,×maxℓ^,ℓ'^×m/1+t^N/2^min, for F to satisfy a2F'≥ F^1+×ℓ m/1+t^N/2^ and F'≥κ× F^1+×ℓ' m/1+t^N/2^. The remainder of the proof is nearly identical to the case κ=0. Solving a5F' = F^1+max,×maxℓ^,ℓ'^×m/1+t^N/2^min,. yields Ft=[1 - 2max,maxℓ^,ℓ'^m^min,/Nmin,-21-1/1+t^Nmin,/2-1_ = :G_1t.]^-1/max, To ensure the global existence of F, we first need to be in the regime PF_possible_extinctionN/2> {[ 1/ if κ=0,; max1/, 1/ if κ=1.; ]. to guarantee the vanishing of 1/1+t^Nmin,/2-1 in a6Ft=[1 - 2max,maxℓ^,ℓ'^m^min,/Nmin,-21-1/1+t^Nmin,/2-1_Call that G_1t]^-1/max,. Then, because inf_t≥ 0G_1t = 1-2max,maxℓ^,ℓ'^m^min,/Nmin,-2, it suffices to chose m<m_0 : = min1,Nmin,-2/2max,maxℓ^,ℓ'^^1/min, to make G_1 positive and thus u,v global. It eventually remains to recover the controls in CONTROL_solution_globale[ utL^∞ℝ^N≤M/1+t^N/2, vtL^∞ℝ^N≤M'/1+t^N/2,; for all t>0. ]. To do this, we combine a6Ft=[1 - 2max,maxℓ^,ℓ'^m^min,/Nmin,-21-1/1+t^Nmin,/2-1_Call that G_1t]^-1/max,-a6infinf_t≥ 0G_1t = 1-2max,maxℓ^,ℓ'^m^min,/Nmin,-2. 
andCONTROL_uv_decay_rate_diff[ tL^∞ℝ^N≤ℓ ( u_0L^1ℝ^N+v_0L^1ℝ^N+û_0L^1ℝ^N+v̂_ 0L^1ℝ^N.)/1+t^N/2,; tL^∞ℝ^N≤ℓ' ( u_0L^1ℝ^N+v_0L^1ℝ^N+û_0L^1ℝ^N+v̂_ 0L^1ℝ^N.)/1+t^N/2,; for all t>0. ] in Corollary <ref> which leads us to take M : = Nmin,-2ℓ m^max,/Nmin,-2-2max,maxℓ^,ℓ'^m^min,^1/max, and M' : = Nmin,-2ℓ' m^max,/Nmin,-2-2max,maxℓ^,ℓ'^m^min,^1/max, which concludes the proof of this second case. § SYSTEMATIC BLOW-UP In this section, we tackle the proof of Theorem <ref> which states the blowing-up of the non-negative solutions to SYS_heat_exchanger{[ ∂_tu = cΔ u - μ u + ν v + u^1+,; ∂_tv = dΔ v + μ u - ν v + κ v^1+,; u,v|_t=0 = u_0,v_0. ]. when at least one of the two1/andκ/is greater thanN/2. We cover both cases ofκ=0andκ=1even though the first one would suffice. To do this we need the slight hypothesis<2/Nwhich is transparent for the theorem assumptions, up to putκbeforeu^1+rather thanv^1+whenever<2/N≤. Notice thatuandvexchange their roles in this case. Our method involves passing the solutionu,vthrough a Gaussian blur whose intensity is adjusted via a positive parameterε. Asεdecreases, the blur increases. We denote the resulting blurred solution observed at pointx=0as,V. We first observe that the blowing-up of,Vimplies that ofu,v. Then, we show that,Vsatisfies an ODI system which ensures its blowing-up ifεis chosen sufficiently small. For technical reasons we start by adjusting the datum u_0,v_0 to give it a specific shape. This step is non-limiting, thanks to the comparison principle. First, note that (by shifting time if necessary) we can assume that u_0 and v_0 are positive. As a consequence, there exists small enough η, R>0 such that ηB0,R≤ u_0 and μ/2ν×ηB0,R≤ v_0 almost everywhere in ℝ^N. Therefore, we can assume without loss of generality that the datum u_0,v_0 takes the form u_0,v_0≡ηB0,R,μ/2ν×ηB0,R. For ε>0, we define the family of Gaussian kernels Φ_ε_ε>0 for all x∈ℝ^N as Φ_εx : = Cε× e^-εx^2, where Cε : = ε/π^N/2 ensures that Φ_εL^1ℝ^N=1 for any ε>0. By computing for λ>0, Δ+λΦ_ε = 4ε^2x^2-2ε N+λΦ_ε ≥-2ε N+λΦ_ε, it becomes evident that ΔΦ_ε≥ -λΦ_ε if we choose λ = 2Nε, an equality that is maintained throughout the proof. Now we use Φ_ε to blur the solution u,v by convolution, and we denote the result evaluated at time t and point x=0 as ,V. More precisely, [ = t : = Φ_ε∗ ut0 = ∫_ℝ^N^Φ_εz ut,zdz,; V = Vt : = Φ_ε∗ vt0 = ∫_ℝ^N^Φ_εz vt,zdz. ] Since Φ_ε is chosen with unit mass, we have, while u,v exists in L^∞ℝ^N^2, t + Vt≤utL^∞ℝ^N+ vtL^∞ℝ^N. As a consequence of this inequality, the blowing-up of ,V yields that of u,v— possibly at an earlier time. The objective is now to make ,V blow-up. Differentiating with respect to time yields (the parameters dependency is locally dropped for better visibility) ' = Φ_ε∗cΔ u - μ u + ν v + u^1+ = cΔΦ_ε∗ u - μΦ_ε∗ u + νΦ_ε∗ v + [Φ_ε∗ u^1+] ≥ -cλΦ_ε∗ u - μΦ_ε∗ u + νΦ_ε∗ v + Φ_ε∗ u^1+ = -μ+cλ + νV + ^1+, where we use an integration by part to go from the first to the second line and DEF_sous_fonction_propreΔΦ_ε≥ -λΦ_ε. along Jensen inequality to go from the second to the third line. Applying the same approach to V leads us the following ODI system {[ ' ≥ -μ+cλ + νV + ^1+, t>0,; V' ≥μ -ν+dλV + κV^1+, t>0. ]. Due to its cooperative structure, the system ODI_system_U_V_eps{[ ' = -μ+cλ + νV + ^1+,; V' = μ -ν+dλV + κV^1+.; ]. enjoys the comparison principle. As a result, and V are respectively above U and V which solve the associated ODE system {[ U' = -μ+cλU + ν V + U^1+, t>0,; V' = μ U -ν+dλ V, t>0,; ]. 
as long as ,V and U,V start from the same initial datum, that is U0,V0 = ∫_ℝ^N^Φ_εz u_0zdz, ∫_ℝ^N^Φ_εz v_0zdz = : U_0,V_0. From this point, the rest of the demonstration consists in showing that U,V blows-up if the blur parameter ε=λ/2N is chosen sufficiently small. In Figure <ref> below, we present the phase plane associated with ODE system ODE_system_U_V{[ U' = -μ+cλU + ν V + U^1+,; V' = μ U -ν+dλ V.; ]., and we take this opportunity to introduce some notations that can be understood by looking at Figure <ref>. We begin by finding the isocline curves for system ODE_system_U_V{[ U' = -μ+cλU + ν V + U^1+,; V' = μ U -ν+dλ V.; ]. that are [ U' = 0 = V = U/νμ+cλ-U^,; V' = 0 = V = μ/ν+dλ U. ] These curves intersect each other at the equilibria O : = 0,0 and E_1 : = χ, μ/ν+dλχ, where χ := μ+cλ-μν/ν+dλ^1/. Next, we define E_0 : = μ^1/,0, and for any ∈01, E_ : = E_0 + E_0E_1 = μ^1/1-+χ , μ/ν+dλχ. We consider then the open set Ω which is the region above V=0, below V'=0, and on the right-hand side of the line E_0E_1. More precisely, Ω : = V>0∩V<μ/ν+dλ U∩V>μχ/ν+dλχ-μ^1/U-μ^1/. Finally, we name P and Q the two components of vector field associated with ODE system ODE_system_U_V{[ U' = -μ+cλU + ν V + U^1+,; V' = μ U -ν+dλ V.; ]., namely, [ PU,V : = -μ+cλU + ν V + U^1+,; QU,V : = μ U -ν+dλ V, ] and we denote γ_ the evaluation of the field P,Q at point E_. FIGURE < g r a p h i c s > [c]14.6079cmFigure — Phase plane associated with ODE system ODE_system_U_V{[ U' = -μ+cλU + ν V + U^1+,; V' = μ U -ν+dλ V.; ].. The equilibria are located at O = 0,0 =-0.35mm < g r a p h i c s > and E_1 = χ, μ/ν+dλχ =-0.35mm < g r a p h i c s > . The plane splits into two zones depending on whether the trajectories converge to O or blow-up in finite time. As λ approaches zero, the equilibrium E_1 moves towards O until they merge at the limit, and the blow-up zone fills the area below the isocline V'=0, which is defined by V'>0. To characterize the blow-up zone, we define E_0 = μ^1/, 0 =-0.35mm < g r a p h i c s > , and we show that all the points in the region Ω, defined as the intersection of the right-hand side of E_0E_1, the upper side of V=0 and the lower side of V'=0 are indeed in the blow-up zone provided that λ has been sufficiently reduced. The continuation of our discussion is divided into three steps: ProofTHBUS () We show that by choosing λ=2Nε smaller, the region Ω remains stable under ODE system ODE_system_U_V{[ U' = -μ+cλU + ν V + U^1+,; V' = μ U -ν+dλ V.. ].. ProofTHBUS () We deduce that any solution to ODE_system_U_V{[ U' = -μ+cλU + ν V + U^1+,; V' = μ U -ν+dλ V.; ]. starting at initial time in Ω blows-up in finite time. ProofTHBUS () Finally, we find an ε>0 that places U_0,V_0 in Ω, leading to the blowing-up of U,V. • Step (<ref>). To demonstrate that Ω is stable under the ODE system ODE_system_U_V{[ U' = -μ+cλU + ν V + U^1+,; V' = μ U -ν+dλ V.; ]., we must establish that the vector field P,Q|_∂Ω points inwards Ω. We decompose ∂Ω into Γ_1∪Γ_2∪Γ_3, as illustrated in Figure <ref>: ∂Ω = U>μ^1/,V=0_Γ_1∪U>χ, V'=0_Γ_2∪E_0E_1_Γ_3. We observe that P,Q|_Γ_1 as well as P,Q|_Γ_2 both point in the correct direction since * Q>0 along Γ_1, and * Q=0<P along Γ_2. To address the last portion of ∂Ω, which is Γ_3 = E_0E_1, we define the matrices M_ : = [ γ_ E_0E_1 ], and our goal is to prove detM_≥ 0 for all ∈01, indicating that γ_ points towards the right-hand side of E_0E_1. 
Without going into the details of algebraic computations, we find detM_ = μ/ν+dλχ-(μ+cλ)(μ^1/1-+χ) +μν/ν+dλχ+(μ^1/1-+χ)^1+ +(μ^1/-χ)μ^1+1/(1-), where we can verify that detM_1 = 0, what is consistent with γ_1 = 0,0 at the equilibrium E_1. Now, differentiating detM_ with respect to yields ∂_ detM_=μ/ν+dλχ[(μ+cλ)(μ^1/-χ) +μν/ν+dλχ -(1+)(μ^1+-χ)(μ^1+-μ^1+-χ)^] -μ^1+1/(μ^1/-χ), and by considering that χ, defined in DEF_chiχ := μ+cλ-μν/ν+dλ^1/., vanishes as λ approaches 0, we have lim_λ→ 0sup_θ∈01∂_ detM_ + μ^1+2/ = 0. From this we deduce that there exists a positive λ_0 such that ∂_ detM_<0 for all λ∈0λ_0 and all θ∈01. This implies detM_ > detM_1=0, which concludes the step. • Step (<ref>). Suppose we have U_0,V_0∈Ω. Then, thanks to step (<ref>), we know that U,V remains in Ω as long as it exists. Observe that Q>0 in Ω which causes V to increases. Since the set Ω∩U'<0 does not contain any asymptotically stable trajectories, there exists a time t_0>0 for which U,V crosses the curve U'=0. After this point, U,V∈Ω∩U'>0, for all t∈t_0T, where T denotes the lifetime of U,V. From the moment t_0, U is therefore increasing. If U were bounded, it would converge, so would V in view of the phase plane, which is impossible. Consequently, U is unbounded and we can find a time t_1>t_0 that is large enough to perform -μ+cλU + U^1+ > 1/2 U^1+, for all t∈t_1T. Finally, by plugging the latter inequality in first line of ODE_system_U_V{[ U' = -μ+cλU + ν V + U^1+,; V' = μ U -ν+dλ V.; ]., we find U' > 1/2 U^1+ + ν V > 1/2 U^1+, for all t∈t_1T, from which the blowing-up of U is evident. • Step (<ref>). The aim of this step is to show that U_0,V_0 is in Ω if we reduce ε=λ/2N. To achieve this, we must show that U_0,V_0 belongs to each set constituting the intersection Ω in DEF_OmegaΩ : = V>0∩V<μ/ν+dλ U∩V>μχ/ν+dλχ-μ^1/U-μ^1/.. At the beginning of the proof, we assume that the datum u_0,v_0 takes the form given in DEF_shaped_datau_0,v_0≡ηB0,R,μ/2ν×ηB0,R, where we have v_0 = μ/2νu_0. Observing the definition of U_0,V_0 in DEF_data_U0_V0U0,V0 = ∫_ℝ^N^Φ_εz u_0zdz, ∫_ℝ^N^Φ_εz v_0zdz = : U_0,V_0. reveals that V_0 = μ/2νU_0 as well. Therefore, on the phase plane of Figure <ref>, the datum U_0,V_0 is located on the line V = μ/2νU which lies below the isocline V' = 0 if λ is chosen sufficiently small — check DEF_isoclines[ U' = 0 = V = U/νμ+cλ-U^,; V' = 0 = V = μ/ν+dλ U. ] for confirmation. As a result, there exists λ_1>0 such that U_0,V_0∈V<μ/ν+dλ U, for all λ<λ_1. Next, to place U_0,V_0 on the right-hand side of E_0E_1, we must ensure that V_0>μχ/ν+dλχ-μ^1/U_0-μ^1/. Since χ vanishes as λ approaches to 0, we may check that the quantity μχ/ν+dλχ-μ^1/ is negative if λ is sufficiently small. Therefore, PROOF_BU_1V_0>μχ/ν+dλχ-μ^1/U_0-μ^1/. is satisfied if V_0>μ^1+1/χ/ν+dλμ^1/-χ. We can also remove dλ and simply require V_0 >μ^1+1/χ/νμ^1/-χ =μ/νχ + oχ. By further reducing λ (and consequently χ), it is thus sufficient to have V_0 > 2μ/νχ = 2μ/νμ+cλ-μν/ν+dλ^1/ = 2μ/νc+dμ/νλ + oλ^1/ which we can simplified in V_0 > 2^1+1/μ/νc+dμ/ν^1/_= : hλ^1/. Returning to the definition of V_0 in DEF_data_U0_V0U0,V0 = ∫_ℝ^N^Φ_εz u_0zdz, ∫_ℝ^N^Φ_εz v_0zdz = : U_0,V_0. with v_0 in DEF_shaped_datau_0,v_0≡ηB0,R,μ/2ν×ηB0,R and Φ_ε in DEF_Gaussian_blurΦ_εx : = Cε× e^-εx^2., the latter inequality PROOF_BU_2V_0 > 2^1+1/μ/νc+dμ/ν^1/_= : hλ^1/ can be expressed as ε/π^N/2ημ/2ν∫_B0,R^ e^-εz^2 dz > h2Nε^1/, or with an adequate constant h that depends on N,c,d,μ,ν,,η, ∫_B0,R^ e^-εz^2 dz > h×ε^1/-N/2. 
Now as ε approaches 0, the left-hand side of PROOF_BU_3∫_B0,R^ e^-εz^2 dz > h×ε^1/-N/2. converges towards the Lebesgue measure of the ball B0,R that is positive. Meanwhile, on the right-hand side, ε^1/-N/2 collapses since has been chosen smaller than 2/N. As a result, by taking ε=λ/2N sufficiently small, we can achieve PROOF_BU_3∫_B0,R^ e^-εz^2 dz > h×ε^1/-N/2. and thus PROOF_BU_1V_0>μχ/ν+dλχ-μ^1/U_0-μ^1/., meaning there exists λ_2>0 such that U_0,V_0∈V>μχ/ν+dλχ-μ^1/U-μ^1/, for all λ∈0λ_2. Lastly, noting the positivity of V_0 and considering DEF_OmegaΩ : = V>0∩V<μ/ν+dλ U∩V>μχ/ν+dλχ-μ^1/U-μ^1/., BEING_IN_OMEGA_1U_0,V_0∈V<μ/ν+dλ U, for all λ<λ_1. and BEING_IN_OMEGA_2U_0,V_0∈V>μχ/ν+dλχ-μ^1/U-μ^1/, for all λ∈0λ_2., we can conclude that there exists a positive λ=2Nε for which U_0,V_0∈Ω. Gathering the elements of the proof from steps (<ref>), (<ref>) and (<ref>), we demonstrated that we can find a positive ε for which U,V blows-up in finite time, consequently inducing the blowing-up of ,V and u,v. This concludes the proof. Acknowledgement. The author would like to acknowledge his supervisor https://alfaro.perso.math.cnrs.fr/Matthieu Alfaro for his advices and his reviews, https://www.math.univ-paris13.fr/ souplet/Philippe Souplet for pointing out the references CuiGlobal98, SoupletOptimal04, CastilloCritical15, the https://lmrs.univ-rouen.fr/enLaboratoire de Mathématiques Raphaël Salem for the Maple™ licence which helped to investigate around linear system ODE_Diffusion_Fourier_side∂_t[ ; ] = [ -cξ^2 - μ ν; μ -dξ^2 - ν ][ ; ]., and the Région Normandie for the financial support of his PhD. empty
http://arxiv.org/abs/2307.01920v1
20230704210719
Siamese Learning-based Monarch Butterfly Localization
[ "Sara Shoouri", "Mingyu Yang", "Gordy Carichner", "Yuyang Li", "Ehab A. Hamed", "Angela Deng", "Delbert A. Green II", "Inhee Lee", "David Blaauw", "Hun-Seok Kim" ]
eess.SP
[ "eess.SP" ]
A topological gap theorem for the π_2 – systole of PSC 3-manifolds Kai Xu ================================================================== A new GPS-less, daily localization method is proposed with deep learning sensor fusion that uses daylight intensity and temperature sensor data for Monarch butterfly tracking. Prior methods suffer from the location-independent day length during the equinox, resulting in high localization errors around that date. This work proposes a new Siamese learning-based localization model that improves the accuracy and reduces the bias of daily Monarch butterfly localization using light and temperature measurements. To train and test the proposed algorithm, we use 5658 daily measurement records collected through a data measurement campaign involving 306 volunteers across the U.S., Canada, and Mexico from 2018 to 2020. This model achieves a mean absolute error of 1.416^∘ in latitude and 0.393^∘ in longitude coordinates outperforming the prior method. Light-level geolocalization, Siamese learning, Contrastive learning § INTRODUCTION GPS-less multi-sensor data geolocators <cit.> are miniaturized tracking devices that periodically collect sunlight, temperature, and other data to enable tracking of animals and even insects that migrate long (>1000 km) distances. Determining the daily location and trajectory of small animal and insect migrations is essential for understanding ecology, species interactions, and the impact of climate change on animals. For Monarch butterfly tracking, where a GPS receiver is not feasible because of its excessive power, size, and weight of the data logger, sunlight and temperature-based localization techniques have been investigated as a potential solution <cit.>. Most prior work using light and temperature-based localization typically estimates the daily geo-coordinate based on a `threshold method' <cit.> or `template-fit' model <cit.>. In the threshold method, sunset and sunrise times are computed based on the moment when solar irradiance crosses a predefined threshold level <cit.>. Longitude is then estimated by the time of local noon and latitude by the measured day length. However, since latitude depends on the sun elevation angle, the estimation error is date- and location-dependent. The template-fit approach uses an analytical or data-driven template function for the location-dependent light variation to map the input data into latitude and longitude coordinates for a particular day. However, these methods typically have significant latitude ambiguity around equinox days due to the low day length variation everywhere on Earth. To alleviate these issues, Yang <cit.> presents a deep neural network (DNN)-based localization algorithm in which light and temperature data are fed into neural network models that estimate the daily position by approximating a likelihood probability. This algorithm can offer reduced estimation errors by combining the temperature and light-based probability estimation/heatmaps. However, it still has a relatively high latitude error, especially around the equinox when Monarch butterflies are actively migrating. In this work, we propose a Siamese learning-based localization approach to estimate the location of the data loggers attached to Monarch butterflies collecting light intensity and temperature data. 
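For concreteness, here is a minimal sketch of the classical threshold geolocation described above, in which longitude follows from the time of local solar noon and latitude from the measured day length. It is illustrative only and not the pipeline proposed in this work; the function name, the threshold value, and the simple declination approximation are assumptions, and the latitude estimate visibly degenerates near the equinox (tan δ → 0), which is exactly the failure mode motivating the learning-based approach.

```python
import numpy as np

def threshold_geolocate(times_utc_h, light, day_of_year, thresh=1.0):
    """Classical threshold geolocation (sketch, assumed names and threshold).

    times_utc_h : sample times in UTC hours over one day
    light       : irradiance samples on the same grid
    """
    above = light > thresh
    sunrise = times_utc_h[np.argmax(above)]                          # first crossing
    sunset = times_utc_h[len(above) - 1 - np.argmax(above[::-1])]    # last crossing
    local_noon = 0.5 * (sunrise + sunset)
    day_length = sunset - sunrise                                     # hours

    # Longitude: Earth rotates 15 deg/hour; solar noon at Greenwich ~ 12:00 UTC.
    lon = 15.0 * (12.0 - local_noon)

    # Approximate solar declination (degrees) for the given day of year.
    decl = -23.44 * np.cos(np.deg2rad(360.0 / 365.0 * (day_of_year + 10)))

    # Day length D satisfies cos(H) = -tan(lat) tan(decl), with H = 15*D/2 degrees.
    H = np.deg2rad(0.5 * day_length * 15.0)
    # Near the equinox decl -> 0, so this expression diverges: the known ambiguity.
    lat = np.rad2deg(np.arctan(-np.cos(H) / np.tan(np.deg2rad(decl))))
    return lat, lon
```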
Our framework attempts to learn a general pattern representation of data and produce a pairwise similarity score for two given inputs, which quantifies the probability of proximity of their data collection locations. By evaluating the similarity between the sensor collected data (from unknown location) and the reference data (with known location) in the database, the proposed method estimates the daily butterfly location more reliably outperforming the prior art <cit.>. We implement our algorithm for daily localization of Monarch butterflies that migrate from Canada to Mexico during September – December each year. For training and testing the proposed network models, we use the data collected by a measurement campaign with 306 volunteers <cit.> to record the sunlight intensity and temperature from 2018 to 2020. Our method is applicable to the mSAIL <cit.> data logger, which is customized for Monarch butterfly attachment with a system size of 8 × 8 × 2.6mm^3 and the weight of 62mg (Fig. <ref> top left). Our experimental results performed on the volunteer data show that the proposed algorithm can significantly reduce the localization error around the equinox (from 3.52^∘ to 1.74^∘) and increase the robustness of the results compared to the state-of-the-art <cit.>. Our code is available at <https://github.com/sarashoouri/Siamese_Monarch>. § METHOD Our aim is to estimate the probability that a particular pair of light intensity L_t and temperature T_t measurements belongs to a 2D coordinate x (latitude and longitude) on a specific date d_t. Thus, given a search grid G containing all possible 2D coordinates, the estimated position x_est on date d_t can be computed using the maximum likelihood, x_est = _x∈ G p(L_t, T_t|x). The likelihood value is estimated using two different Siamese neural network models: a light intensity model to compute p(L_t|x), and a temperature model to compute p(T_t|x). Similar to <cit.>, we make a simplifying assumption that light and temperature generate conditionally independent probabilities for a given coordinate such that p(L_t, T_t|x)≈ p(L_t|x)p(T_t|x) holds. We recognize that this assumption is not generally valid, but this simplification is necessary for our model training because of the data availability imbalance between the light and temperature data (i.e., it is difficult to fully characterize the correlation between them). While the temperature data is available from relatively densely populated weather stations, light intensity data is not measured/reported by weather stations. Hence, light intensity data is entirely obtained through volunteer data collections, which are only available for particular dates and on coarser coordinates. Our proposed algorithm has two main stages, 1) Data pre-processing, and 2) Contrastive Siamese learning <cit.>-based localization. Fig. <ref> shows the overall algorithm. §.§ Data pre-processing Monarchs rest during the night without moving. Thus, we perform daily localization utilizing the light and temperature data centered around (± 9 hours) the night center. The proposed data pre-processing method consists of two functions: a night center computing function N and a time-shift function r. The night center computing function is designed separately for light (N_L) and temperature data (N_T). The function r(I,N) time-shifts the data I such that it is centered around the night center N. 
To compute the night center for a given light intensity record L for a day, we divide L into two parts based on a hypothetical center point and calculate the cross-correlation between the two separated parts. Then, the hypothetical center point is adjusted until the cross-correlation reaches the maximum value, meaning that the divided parts have the highest symmetry. The time point that maximizes the cross-correlation value is assigned as the night center for L. Hence, the night center computing function N_L takes L as input and estimates the night center n_c=N_L(L). Applying a similar method to the temperature measurement is unreliable since the temperature is often asymmetric around the night center. Hence, to align a temperature record T of one day to its night center, we use an astronomical equation MATLAB function <cit.> to calculate the night center n_c for the location x and date d. The night center n_c for T obtained at a (hypothetical) location x and date d is calculated by n_c=N_T(x,d), where N_T is the astronomical equation-based night center calculation function. We collect temperature data T with a 1-hour interval to match the weather station's time resolution. We then produce a pre-processed temperature measurement T̂ by time-shifting it (via r) to be centered around the night center: T̂=r(T,N_T(x,d)) given a (hypothetical) location x and a measurement date d. On the other hand, light intensity L data for each day is collected from the sensor with a 1-minute interval, and is converted to log scale to emphasize the low light level variations around the sunrise and sunset. Note that the light intensity data collected from a sensor attached to a butterfly is prone to environmental variations that can lead to incorrect localization results. Thus, we first apply a denoising adversarial autoencoder (DAAE) <cit.> to L to construct a denoised light record. The pre-processed light data L̂, is thus obtained through the denoising DAAE (Ψ), night center (N_L), and time-shifting (r) calculation as follows: L̂=r(Ψ(L),N_L(Ψ(L))). We now explain the denoising DAAE for the light intensity L. The goal is to estimate the clean data L̃ by an autoencoder Ψ that consists of an encoder Ψ_E and a decoder Ψ_D pair that is trained to minimize a reconstruction loss. We treat this denoising as a distribution alignment task. The encoder Ψ_E takes the noisy light L to generate its latent representation z with a smaller dimension, and the decoder Ψ_D produces the denoised light L̃ from z. We desire to align the latent representations z of noisy original data and z̃ of clean data by establishing a suitable discriminator D to classify the latent vectors into original or denoised (clean) data. The autoencoder Ψ(L) = Ψ_D(Ψ_E(L)) and the discriminator D are trained in an adversarial setting where each tries to outperform the other playing a two-player minimax game as initially introduced in <cit.> using the loss: ℒ_dis=𝔼_z̃[log(D(z̃))] + 𝔼_z[log(1-D(z))]. §.§ Siamese learning-based localization The prior published `template-fit' method <cit.> observed that the slope of light intensity has variations around sunrise and sunset that match at nearby locations on the same day. For example, the sun sets more slowly at high latitude vs. low latitude locations. We extend this idea to pattern matching and apply these slope variations to both light and temperature measurements. 
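As a concrete illustration of the pre-processing step, the following sketch estimates the night center of a one-day light record by maximizing the symmetry of the two halves (cross-correlation of the mirrored left part with the right part) and then re-centers the record, in the spirit of the functions N_L and r introduced above. It is a simplified stand-in rather than the authors' released code: the search window, the circular shift, and the function names are assumptions.

```python
import numpy as np

def night_center(light):
    """Estimate the night-center index of a one-day light record (sketch).

    For each candidate center c, the mirrored left half is correlated with the
    right half; the c maximizing this symmetry score is taken as the night center.
    """
    n = len(light)
    best_c, best_score = n // 2, -np.inf
    for c in range(n // 4, 3 * n // 4):        # search a window around the middle
        k = min(c, n - c)                       # overlap length of the two halves
        left = light[c - k:c][::-1]             # mirrored left half
        right = light[c:c + k]
        score = float(np.dot(left, right))      # cross-correlation at zero lag
        if score > best_score:
            best_c, best_score = c, score
    return best_c

def recenter(record, center, target=None):
    """Circularly shift a record so that `center` is moved to `target` (default: middle)."""
    if target is None:
        target = len(record) // 2
    return np.roll(record, target - center)
```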
The primary assumption of our Siamese learning-based algorithm is that two closely located data on approximately the same day of the year (but in different years) should have highly correlated patterns. We first build a reference library of light intensities from the reference volunteer data with their known positions to perform the pattern matching. However, the constructed reference library is sparsely populated with only the locations where volunteers collected data, making it infeasible to perform pattern matching at arbitrary locations. We mitigate this issue by populating `synthesized' light data along with the longitude coordinate, significantly expanding the available light data for matching. The longitude coordinate mainly affects the time of the night center, and simply time-shifting the night center generates a synthesized light intensity at a new longitude coordinate. Thus, all of the volunteer data are time-shifted to cover our desired longitude range and construct the synthesized light intensity reference library. The amount of time-shift is 4 minutes per one degree of longitude. Note that we cannot apply a similar light intensity shifting to create synthesized data along the latitude coordinate as the light intensity pattern (after the night center alignment) per day depends on the latitude, and we cannot have a reliable model to `synthesize' it. To describe the proposed Siamese learning-based localization, suppose L_t is the light data collected on a known date d_t at an unknown ground-truth 2D (latitude and longitude) location x_t. Let ℒ_ref be a reference library (with volunteer collected and synthesized data). The reference light data subset L_ref contains light data collected from ± 5 days around d_t at known 2D coordinates x_ref, and the size of this subset is denoted by B. Our goal is to quantify the similarity between each entry in L_ref and L_t. Thus, for the ith element in L_ref, we obtained the pre-processed version L̂_ref^i=r(Ψ(L_ref^i),N_L(Ψ(L_ref^i))) after calculating its night center. Then, we generate a pre-processed version of the target light data L̂_t^i=r(Ψ(L_t),N_L(Ψ(L_ref^i))), using the same night center obtained from the reference Ψ(L_ref^i). In this way, when the locations of the L_ref^i and L_t are matched, the estimated night centers align to each other, resulting in more accurate similarity scores. We then use a mapping function to convert the difference between {L̂_ref^i, L̂_t^i} into a distance between their locations {x_ref^i, x_t}. For instance, if L̂_ref^i has a similar pattern as L̂_t^i, then the distance between their locations ||x_ref^i - x_t^i||_2 should be relatively small. We implement a Siamese neural network to construct this mapping function (transformer encoder) ϕ_θ, parameterized by θ. Siamese networks are twin neural networks that share the identical weights <cit.>. The function ϕ_θ maps the pattern representation of the tuple {L̂_ref^i, L̂_t^i} into a similarity score which indicates how close they are located. To construct ϕ_θ, we apply the same convolutional neural network f_θ^L to both L̂_ref^i and L̂_t^i to generate the feature vectors f_θ^L(L̂_t^i) and f_θ^L(L̂_ref^i) in the latent space. Euclidean distance between the latent vectors quantifies the similarity score between L̂_ref^i and L̂_t^i such that ϕ_θ(L̂_ref^i, L̂_t^i)=||f_θ ^L(L̂_ref^i) - f_θ^L(L̂_t^i)||_2. To train the Siamese networks, positive (matching) and negative (non-matching) samples are required. 
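The longitude-synthesis step of the reference library described above is simple enough to sketch directly: a record measured at one longitude is time-shifted by 4 minutes per degree to emulate the same day at other longitudes, while latitude cannot be synthesized this way. The function below is a hedged illustration; the sign convention, the sampling interval, and the dictionary-style library are assumptions rather than the released implementation.

```python
import numpy as np

def synthesize_longitudes(light, lon0, lon_grid, dt_min=1.0):
    """Expand one light record into a longitude-indexed reference library (sketch).

    light    : one-day light record sampled every `dt_min` minutes
    lon0     : longitude (deg E) where the record was actually collected
    lon_grid : longitudes (deg E) at which synthesized records are wanted
    """
    library = {}
    for lon in lon_grid:
        # A site further west sees solar events later in UTC: 4 minutes per degree.
        shift_samples = int(round(4.0 * (lon0 - lon) / dt_min))
        library[float(lon)] = np.roll(light, shift_samples)
    # Latitude cannot be synthesized the same way: after night-center alignment,
    # the shape of the daily light curve still depends on latitude.
    return library
```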
Positive samples are the pairs of closely located data whose distance differences are less than 55km or 0.5^∘ in longitude and latitude, and the negative samples are the pairs of data whose distances are longer than 55km. The model is trained using a contrastive loss function <cit.> as described in Eq. (<ref>). The y value is 1 for the positive samples and 0 for the negative samples, and m is the threshold margin. The contrastive loss aims to maximize the similarity score for the positive samples while minimizing it for the negative samples. ℒ_CL=1-y/2||L̂_ref^i - L̂_t^i||^2_2+y/2{max(0,m - ||L̂_ref^i - L̂_t^i||_2)^2}. The spatial softmax function (<ref>) is then applied to the similarity score between L̂_ref^i and L̂_t^i to convert it to a probability representation in the range of (0, 1). This represents the probability that L_t is obtained at the location x_ref^i. p(L_t |x_ref^i)=exp(-ϕ ^2 (L̂_ref^i, L̂_t^i)/2 σ ^2) / ∑_j=1^B exp(-ϕ ^2 (L̂_ref^j, L̂_t^j)/2 σ ^2). In (<ref>), σ is the standard deviation of the similarity scores for positive samples. The estimated probability (<ref>) is evaluated with all {L̂_ref^i, L̂_t^i} for i={1,⋯,B} to create a set I_ref containing the probabilities for given reference locations: [b] I_ref={(p(L_t |x_ref^1) ,x_ref^1),⋯, (p(L_t |x_ref^B), x_ref^B)}. After generating I_ref, the position x_t of L_t can be estimated by performing a coarse-to-fine grid search. A coarse search grid G has [27^∘:1^∘:48^∘] latitude coordinate grids, and [-122^∘:1^∘:-66^∘] longitude coordinate grids. This search range covers our study area of the U.S, Canada, and Mexico. Each point on G has a 2D coordinate x, and we compute p(L_t|x) for all points on G. To estimate p(L_t|x), we use the reference positions in I_ref which are located around the position x. Thus, a subset of probabilities, called I_ref^cell, from I_ref is created such that all points in I_ref^cell satisfy |x_ref-x|<1^∘. Finally, p(L_t|x) is calculated by taking an average of the probabilities in the subset I_ref^cell. Although the reference library is densely populated along the longitude coordinate (due to the data `synthesis' procedure with time-shifting), the population along the latitude may be coarse for some regions where fewer volunteers were available. Thus, it is possible that with the constraint of |x_ref-x|<1^∘, some grid cells centered at x may have an empty I_ref^cell set. For these empty cells, p(L_t|x) is estimated based on the probabilities from nearest neighboring cells using linear interpolation. The spatial resolution of the likelihood estimation for the grid G is refined by upsampling and linear interpolating the heatmap of p(L_t|x) with a 0.1^∘ resolution of x on the refined G̃. The localization given L_t (light-only localization) is completed through the maximum likelihood estimation on the refined G̃ such that x_t≈_G̃ p(L_t|x). The proposed method for the light data-based likelihood probability p(L_t|x) estimation was extended to the temperature likelihood probability p(T_t |x) estimation in a straightforward manner. Unlike the light data that is only available at sparse volunteer locations, the reference temperature data is available at all weather station locations which are densely distributed. We construct the temperature reference library 𝒯_ref by accessing the weather station data through WeatherBit API <cit.>. 
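A compact PyTorch sketch of the matching machinery described above is given below: a shared encoder f_θ, the contrastive loss, and the spatial softmax that turns squared embedding distances into probabilities p(L_t|x_ref^i). The layer sizes are deliberately smaller than the architecture reported in the Experiments section, the loss is written in the standard convention in which y = 1 marks a matching (< 55 km) pair and pulls the embeddings together, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightEncoder(nn.Module):
    """Shared twin encoder f_theta (a down-sized stand-in for the paper's CNN)."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, padding=4), nn.BatchNorm1d(32),
            nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.BatchNorm1d(32),
            nn.ReLU(), nn.AdaptiveAvgPool1d(8),
        )
        self.head = nn.Linear(32 * 8, emb_dim)

    def forward(self, x):              # x: (batch, 1, n_samples)
        return self.head(self.features(x).flatten(1))

def contrastive_loss(z_ref, z_t, y, margin=1.0):
    """Contrastive loss; y = 1 for matching (<55 km) pairs, 0 for non-matching pairs."""
    d = F.pairwise_distance(z_ref, z_t)
    return torch.mean(y * d.pow(2) + (1.0 - y) * F.relu(margin - d).pow(2))

def reference_probabilities(d, sigma):
    """Spatial softmax: embedding distances d_i -> p(L_t | x_ref^i) over the reference set."""
    return F.softmax(-d.pow(2) / (2.0 * sigma ** 2), dim=0)
```

In use, both the pre-processed reference record and the target record are passed through the same encoder, and the resulting per-reference probabilities are averaged within each grid cell before the maximum-likelihood search, as described above.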
The pre-processed reference (weather station) data T̂_ref^i and sensor temperature data T̂_t^i on date d_t are obtained by time-shifting both data using the night-center calculated by the reference temperature data, such that T̂_ref^i=r(T_ref^i,N_T(d_t, x_ref^i)) and T̂_t^i=r(T_t,N_T(d_t, x_ref^i)) hold. Note that denoising is not used for temperature data. A Siamese neural network f_θ^T is trained for temperature matching, then computes a similarity score between the pre-processed weather station temperature data T̂_ref^i ∈𝒯_ref and the sensor data T̂_t^i. This similarity score is then converted to a probability p(T_t |x_ref^i) using Eq. (<ref>) by replacing L with T. The remaining steps to generate p(T_t |x) are identical to the p(L_t |x) estimation. The final likelihood probability that jointly considers light and temperature data is approximated by the product: p(L_t, T_t|x) ≈ p(L_t|x)p(T_t|x) based on the simplifying assumption of conditional independence between the light and temperature measurements. § EXPERIMENTS §.§ Data collection We use real-world data collected through a data measurement campaign with 306 volunteers across the U.S., Canada, and Mexico from 2018 to 2020 <cit.>. Volunteers recorded light and temperature data using HOBO <cit.> sensors to emulate the mSAIL platform <cit.> (Fig. <ref>, top left) from September to early December. The collected dataset contains 5658 daily records with a time resolution of 10 sec for light and 15 sec for temperature. We use the year 2018 – 2019 volunteer data as the training dataset (size of 3834) and the year 2020 data as the testing set (size of 1824). For temperature data, we use the weather station data with a time resolution of 1 hour accessed through WeatherBit API <cit.>. Although the weather station data is much more densely populated than the volunteer data, it does not necessarily cover the entire study area. When the temperature data is not available for a particular location x, we apply the Kriging spatial interpolation <cit.> to the nearby weather station data to obtain the temperature at that location. §.§ Network structure The Siamese network f_θ^L for light data consists of 4 convolutional (conv) layers containing convolution, batch normalization, ReLU, and max pooling, followed by 2 fully connected layers (FCLs). A dropout layer of p=0.30 is applied after the first FCL. The size of the first conv layer is 128×1×9 and the sizes of the other conv layers are 128×1×5. The Siamese network for temperature data f_θ^T consists of 2 conv layers with the size of 32×1×3, and 2 FCLs. A dropout layer of p=0.30 is used after the first FCL. The denoising encoder Ψ_E (and decoder Ψ_D) consists of 3 FCLs of size 480×200 (50×100), 200×100 (100×200), and 100×50 (200×480), each followed by ReLU. The discriminator contains 3 FCLs of size 50×500, 500×500, and 500×1, followed by Sigmoid. We use an ADAM optimizer with a learning rate of 0.001 and a StepLR scheduler with a step size of 1000 to train the models. At each epoch of the training for light data, we choose a batch size of two light records, whose date difference is less than 5 days (but across different years). The temperature model is trained in a similar way, except that one input to the Siamese network is from the weather station. §.§ Localization Results We evaluate our test data localization on a 2D grid covering southern Canada, the U.S., and Mexico for Monarch butterfly localization. Fig. 
<ref> provides the performance comparison between our proposed method and the state-of-the-art <cit.> using the mean absolute error (MAE) evaluated biweekly. All results in this figure are based on the volunteer data (2018 - 2019 data for training and 2020 data for testing). The CDF of the error in longitude and latitude (in degrees) is shown in Fig. <ref> (middle). Our algorithm outperforms the baseline <cit.>, especially around the equinox (Sep. 22), showing that the pattern matching technique can significantly compensate for the low day-length variation around the equinox. Fig. <ref> shows that using the temperature data is also critical to compensate for the latitude estimation error of the light-only method around the equinox day, when the night length is the same everywhere. Fig. <ref> (right subplot) visualizes the estimated probabilities p(L_t|x), p(T_t|x), and p(L_t|x)p(T_t|x) ≈ p(L_t, T_t|x). We also evaluate our method using the data collected by an mSAIL <cit.> sensor attached to a wild Monarch butterfly on September 17, 2021, in Leamington, Ontario, near Lake Erie. The Monarch was released into the wild for a day, and its sensor data was then retrieved wirelessly while the butterfly was resting on a tree before flying across Lake Erie. Fig. <ref> shows the localization result using the obtained data (the first hour of the data was extrapolated because the mSAIL sensor did not record it). The measured localization error is 0.032 and 0.1 degrees in latitude and longitude, respectively. §.§ Bias Evaluation For a reliable, unbiased localization, the error should not strongly depend on the amount of available reference data near the ground-truth position. Learning such an unbiased method can be challenging as the training dataset is not uniformly populated and depends on the volunteer locations. To evaluate the robustness and bias of the localization model, we first quantify how densely the reference data is populated around a given ground-truth position by computing the average Euclidean distance between the neighboring reference points and the ground-truth position. This is named the `Isolation score'. Then, we evaluate the correlation between the localization error and the computed Isolation score. Ideally, the error should be independent of the Isolation score, and the accuracy around isolated points should be as reliable as in more densely populated areas. We use three measures, the Pearson correlation coefficient <cit.>, distance correlation <cit.>, and mutual information <cit.>, to quantify the correlation between the error and the Isolation score. Low values of these measures indicate that the method has weaker correlation, i.e., less bias towards low-Isolation-score locations. Table <ref> compares the measured correlation (or bias) between our model and the prior work <cit.>. Our model has significantly less bias in all measures while producing lower errors (Fig. <ref>). The low correlation/bias scores of our model imply that the localization error depends less on the density of reference (training/testing) data around the ground-truth location. § CONCLUSION We developed a Siamese learning-based localization method using light intensity and temperature measurements from miniaturized data loggers to study Monarch butterfly migration. The proposed method significantly outperforms the prior method, especially around the equinox. Moreover, it has less bias towards densely populated areas and achieves lower errors.
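The bias evaluation itself is straightforward to reproduce. The sketch below computes the three reported measures between the per-day localization error and the Isolation score using plain NumPy implementations (an empirical distance correlation and a simple histogram estimate of mutual information) rather than any particular library the authors may have used; the bin count and the function names are assumptions.

```python
import numpy as np

def distance_correlation(x, y):
    """Empirical distance correlation between two 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double-centred distance matrices
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

def binned_mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information I(x; y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

def bias_report(errors, isolation_scores):
    """Correlation between localization error and Isolation score; lower means less bias."""
    return {
        "pearson": float(np.corrcoef(errors, isolation_scores)[0, 1]),
        "distance_corr": distance_correlation(errors, isolation_scores),
        "mutual_info": binned_mutual_information(errors, isolation_scores),
    }
```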
The proposed algorithm demonstrates the successful localization of a wild butterfly using real-world data. The presented model exhibits a mean absolute error of 1.416^∘ in latitude and 0.393^∘ in longitude for the volunteer collected test dataset. §.§.§ ACKNOWLEDGMENTS This work was in part funded by NSF IIBR Award #2045017 and National Geographic Society Grant. We thank all the volunteers who participated in the data measurement campaign for this research. rand § SUPPLEMENTAL MATERIAL Fig. <ref> displays the slope of light intensity variations around the sunset and sunrise around the equinox day. The light measurements are located at the same longitude coordinate with different latitude coordinates. The night length (time duration when the log of light intensity is below 0) is equal regardless of the latitude. It can be observed that the light records with close latitude coordinates (red and green light data) have matching slopes and patterns before the sunset and after the sunrise, whereas the slopes are different when latitude coordinates are mismatched. Our algorithm exploits this similarity/difference to improve the localization performance near equinox days.
http://arxiv.org/abs/2307.02240v2
20230705122947
Inert shell coating for enhanced laser refrigeration of nanoparticles: application in levitated optomechanics
[ "Cyril Laplane", "Peng Ren", "Reece P. Roberts", "Yiqing Lu", "Thomas Volz" ]
physics.optics
[ "physics.optics", "cond-mat.mes-hall", "quant-ph" ]
[email protected] Sydney Quantum Academy, Sydney, NSW 2006, Australia School of Mathematical and Physical Sciences, Macquarie University, NSW 2109, Australia ARC Centre of Excellence for Engineered Quantum Systems, Macquarie University, NSW 2109, Australia School of Engineering, Macquarie University, NSW 2109, Australia ARC Centre for Nanoscale BioPhotonics, Macquarie University, NSW 2109, Australia School of Mathematical and Physical Sciences, Macquarie University, NSW 2109, Australia ARC Centre of Excellence for Engineered Quantum Systems, Macquarie University, NSW 2109, Australia School of Engineering, Macquarie University, NSW 2109, Australia ARC Centre for Nanoscale BioPhotonics, Macquarie University, NSW 2109, Australia School of Mathematical and Physical Sciences, Macquarie University, NSW 2109, Australia ARC Centre of Excellence for Engineered Quantum Systems, Macquarie University, NSW 2109, Australia We report on a study exploring the design of nanoparticles that can enhance their laser refrigeration efficiency for applications in levitated optomechanics. In particular, we developed lanthanide-doped nanocrystals with an inert shell coating and compared their performance with bare nanocrystals. While optically levitated, we studied the refrigeration of both types of nanoparticles while varying the pressure. We found that the core-shell design shows an improvement in the minimum final temperature: a fourth of the core-shell nanoparticles showed a significant cooling compared to almost none of the bare nanoparticles. Furthermore, we measured a core-shell nanoparticle cooling down to a temperature of 147 K at 26 mbar in the underdamped regime. Our study is a first step towards engineering nanoparticles that are suitable for achieving absolute (centre-of-mass and internal temperature) cooling in levitation, opening new avenues for force sensing and the realization of macroscopic quantum superpositions. Inert shell coating for enhanced laser refrigeration of nanoparticles: application in levitated optomechanics Thomas Volz August 1, 2023 ============================================================================================================== § INTRODUCTION The field of levitated optomechanics - levitodynamics - presents a new paradigm for optomechanics with the promise of ultrahigh Q without the need for a resonator. The levitation of mesoscopic particles has recently established itself as an exquisite platform for force sensing with the potential of quantum advantage <cit.>. These archetypical isolated mechanical oscillators have very low coupling to the environment, with their linewidth essentially limited by the quality of the vacuum <cit.> and they have recently entered the quantum realm <cit.>. The current state-of-the-art still presents limited coherence times which remains one of the limiting factors for the ultimate goal of matter-wave interferometry with these massive levitated particles. Ultimately one of the mechanisms responsible for decoherence is the internal temperature of the levitated particle which can reach several thousands of Kelvin even for small motional temperatures <cit.>. In this regime, one source of decoherence comes from blackbody radiation <cit.>, limiting experiments in the quantum regime without any internal cooling mechanisms <cit.>. 
It is also interesting to note that even for a moderate vacuum, in the regime where damping mainly comes from collisions with the surrounding gas, having a colder levitated oscillator can in principle increase its Q factor since gas viscosity will increase with temperature <cit.>. A higher Q at a moderate vacuum should in principle help to cool down the centre-of-mass (COM) motion. The internal temperature of an optically levitated object depends on its absorption cross-section, the laser wavelength and irradiance. This is one of the reasons why to date most successful experiments have used SiO_2 nanoparticles as this material exhibits low absorption cross-section at typical trapping wavelength (1064nm and 1550nm). Furthermore, the synthesis of SiO_2 nanoparticles is a mature technology with high yield and uniformity, which make them a prime choice to achieve repeatability in experiments. Levitodynamics experiments with optically active (i.e. absorbing) nanoparticles have been limited to moderate vacuum pressure P > 1 mbar <cit.>. Controlling the internal temperature of a levitated object in vacuum is a challenging task and requires contactless cooling using electromagnetic radiation. Rare-earth ion doped crystals are one of the few materials enabling laser refrigeration of solids through anti-Stokes fluorescence <cit.>, where the energy of the emitted radiation is greater than the energy of the absorbed light. A record low temperature of 91 K <cit.> obtained with a 10% Yb^3+:LiYF_4 crystal was only possible thanks to careful material engineering, i.e. finding the best crystalline host as well as growing an ultrahigh-quality crystal. The laser refrigeration of optically trapped mesoscopic crystals in liquids (water and D_2O) have been demonstrated using 10% Yb^3+:LiYF_4 <cit.> and β-10% Yb^3+:NaYF_4 nanowires <cit.> with refrigeration of ΔT ≈ -15K, -9K and -6K. In 2017, A. T. M. Anishur Rahman and P. F. Barker reported laser refrigeration in optical levitation with a record low temperature of 130 K and an average internal temperature of 167 K <cit.>. It is important to note that in this work, a top-down approach was used to produce the nanocrystals (NC). The sample was obtained by milling down a bulk crystal which resulted in a wide variety of shapes and cooling efficiency. Finally in 2021, α-10% Yb^3+:NaYF_4 synthesized levitated nanocrystals, grown with an hydrothermal process, have been cool down to an average temperature of 252 K (lowest 241 K) <cit.>. It remains paramount to develop synthesis techniques that will yield high uniformity and quality of nanoparticles. Lanthanides doped in Sodium Yttrium fluoride matrices have been extensively studied for upconversion imaging <cit.>. For this application, nanomaterials requirements such as high brightness and low phonon energies are similar to those for anti-Stokes cooling. It thus presents an interesting avenue to explore the same nanoengineering techniques to improve the performance and quality of our nanocryostats. We here investigate the potential of Yb^3+:NaYF_4 nanocrystals with an inert outer shell. This shell reduces non-radiative losses at the surface which will help to maximise the quantum yield and hence the cooling efficiency. We performed fluorescence and oscillator spectroscopy of particles with and without an inert shell and we used a simple thermodynamic model to simulate the internal temperature of the levitated nanoparticles. 
Thanks to fluorescence thermometry at different pressure we can then assess the cooling properties of the levitated particle. Although of particular interest to the field of levitodynamics, this work also shows the potential of a bottom-up nanoengineering approach for optimising optical cryocooling materials. Ultimately it allows for more versatility in the study of material, crystal phase, morphology, design and dopant concentration compared to a bulk crystal approach that has stringent requirements on the purity/quality of the crystal. § METHODS §.§ Experimental setup We optically trap β-phase 10%Yb^3+:NaYF_4 and 10%Yb^3+:NaYF_4@NaYF_4 (with an inert shell) nanocrystals in our experimental setup shown in Fig.<ref>a. Our trapping and cooling laser is tuned to 1020 nm where the cooling efficiency should be maximized <cit.>. To create the optical tweezer, we have designed a gold-coated parabolic mirror (as described in <cit.>) with an effective numerical aperture (NA) close to 0.99, as confirmed by the oscillator spectroscopy (see Fig. <ref>b). The parabolic mirror trap presents several advantages particularly suited for this study: first the reflection coefficient of gold is relatively constant for the range of wavelength of interest, second the mirror presents virtually no chromatic aberrations compared to microscope objectives. This is particularly suited for multi-wavelength addressing of the levitated nanoparticle. The laser power was set to ∼ 77 mW which gives an excitation irradiance of 22.8 MW/cm^2 well above saturation. The fluorescence from the nanocrystal is filtered through a 1000nm dichroic mirror and a 1000nm shortpass filter before coupled into a single-mode fibre connected to the spectrometer. The light scattered by the levitated nanocrystal is collimated by the parabolic mirror and collected through a multimode fibre which is sent to a photodiode. By adjusting the coupling in the fibre we can collect part of the unscattered laser light, hence we monitor the motion of the nanoparticle through a homodyne detection scheme <cit.> The nanoparticles are first diluted in ethanol and sonicated for 90 min before being loaded into the trap at ambient pressure using an ultrasonic nebulizer. We then monitor the temperature of the nanoparticles while decreasing the pressure in the vacuum chamber. At pressure P ≲ 20 mbar, the levitated oscillator enters the underdamped regime and we can perform spectroscopy (see Fig.<ref>b) by varying the ellipticity of the polarization of the trapping laser using a quarter-wave plate. We can approximate the NC as discs of the same diameter and thickness and by adapting an optical tweezer computational toolbox <cit.>, we can simulate the dynamics of the NC in our trap while varying the polarization (see white crosses in Fig. <ref>b). This allows us to confirm the properties of the parabolic mirror such as its NA as well as understanding the shape, size <cit.> and orientation of the NCs in the trap. We take care that for the fluorescence spectroscopy, the polarization is always oriented to a particular optical axis of the nanocrystal thus ensuring a fixed absorption coefficient <cit.>. In the current article, we limit our study to trapping (and cooling) using linearly polarized light. This means that we prevent the libration and/or rotation of the nanocrystal <cit.> in the trap which could affect its gas thermalization rate <cit.>. 
§.§ Thermometry From the photoluminescence (PL) spectra we choose to evaluate the temperature of the NC through ratiometric thermometry <cit.>. The ratio of PL intensities at two different wavelengths depends on the difference in populations between the energy levels involved. We can thus infer the temperature if we assume that the spectral irradiance for particular emission bands (purple and yellow in Fig.<ref>) is given by a Boltzmann distribution. Looking at the intensity ratio of two different transitions (ab and cd) in the photoluminescence spectrum and assuming thermal equilibrium we have: R = I_ab/I_cd = Ae^-E_ab-E_cd/k_BT By measuring this ratio at a known temperature, we can calibrate the temperature measurement. In our experiment, we assume that the particle is at equilibrium with the gas temperature at ambient pressure, a condition which is confirmed by our simulations (see Results). Hence if R_1bar is the measured ratio at T_1bar, the temperature T at a pressure P<1 bar is given by: 1/T = 1/T_1bar + ln(R_1bar/R_P)k_B/Δ E_ac We identify the two different transitions we use as thermometers by looking at the PL spectra of the NP in a cryostat at 6.5K (see Appendice Fig.<ref>). We verify the model in Eq.<ref> and our choice of transitions by measuring the ratio of the Purple and Yellow transition at different temperatures in our confocal cryostat setup. The results are plotted in Fig.<ref>b. We note that in our samples the mean fluorescence wavelength seems to be a poor meter for the temperature in contrast with what has been previously observed <cit.>. § RESULTS §.§ Thermodynamics of a levitated nanocryostat In this study, we are interested in evaluating the cooling performance of two variants of 10% Yb^3+:NaYF_4 nanocrystals, with and without a 5-nm inert shell (see Fig. <ref>). The intrinsic high quantum yield η_e ≈ 0.99 of Yb^3+ ions makes them a candidate of choice for cooling purposes, as most of the absorbed laser radiation is re-emitted through anti-Stokes fluorescence. Non-radiative energy losses and parasitic absorption in the host crystal can however reduce or even quench the cooling efficiency. In upconversion nanocrystals (and in nanoparticles in general), non-radiative energy losses are mainly mediated through inter-ion energy transfer and finally loss at the surface. By growing an inert shell around the nanocrystal <cit.> one can quench these processes thereby maximising the external quantum yield and thus the cooling power of the nanocryostat. We monitor the internal temperature of the levitated nanocrystals when decreasing the pressure in the vacuum chamber. At ambient pressure, the nanoparticle thermalizes mostly through gas collisions while under moderate vacuum conditions (P ≤ 100 mbar) competition between laser absorption and fluorescence will define the final internal temperature. The net cooling power in this regime can be expressed as P_cool=Q̇^laser_heat+ Q̇^fluo_cool where heating through absorption of the pump laser (and trapping laser if they are not the same) is given by: Q̇^laser_heat = (α + α_b)I with α denoting the absorption coefficient for the Yb^3+ ions and α_b the background absorption coefficient of the host nanocrystal. Using a four-level model <cit.> with a temperature-independant absorption, cooling through anti-Stokes fluorescence can be described by: Q̇^fluo_cool = -η_e ω_f/ω_pα I where η_e is the external quantum yield, ω_f the mean fluorescence angular frequency and ω_p the pump laser. 
For a more detailed modelling, one would need to take into account the temperature dependence of both α and ω_f. Finally considering the thermalization with surrounding gas (in the Knudsen regime, Kn ≥ 10), we can write: Q̇^gas_heat/cool = - a_acc√(2/3π)π r^2 v_thγ_sh-1/γ_sh+1(T_int/T_gas-1)P_gas where a_acc is the thermal accommodation coefficient which denotes the fraction of gas molecules that thermalize with the nanoparticle temperature. r is the radius of the particle, v_th is the mean thermal velocity of impinging gas molecules and γ_sh=7/5 is the specific heat ratio of a diatomic gas. For pressure above 100 mbar (Kn ≤ 10 with Kn ≈ 1 at ambient pressure), the mean free path λ_mfp of gas molecules shortens and we can model the thermal exchange rate between the particle and the surrounding gas using <cit.>: Q̇^gas_heat/cool = -8π r^2 k_g/2r+λ_mfpG(T_int-T_gas) The factor G is given by (18γ_sh-10)/a_acc(γ_sh+1). We note that both models (Knudsen regime and intermediary) give very close results for the range of parameters explored here. In the present work, we are in the regime (P_gas > 1 mbar) where blackbody radiation contribution is negligible, so the heat equation for the system is simply given by: mc_vdT_int/dt=Q̇^gas_hc+Q̇^laser_h+Q̇^fluo_c We can estimate the steady-state internal temperature of the levitated nanocrystal at different pressures assuming thermal equilibrium. By measuring the temperature for varying pressure and/or laser power we can then estimate the cooling efficiency of the nanocrystals. The results are compiled in Fig.<ref>. One can assess the heating/cooling rate by looking at the change in temperature when varying the optical power for pressure below 500 mbar, as the thermal exchange with the gas reduces. The thermodynamic performance of the nanocrystals becomes apparent while reducing the pressure, when competition between absorption and fluorescence becomes the dominant thermalization mechanism. From the data, it appears that we cannot bring all the nanocrystals to lower pressures. It is expected that at the range of pressures explored here, a change in the NP temperature can induce strong photothermal forces that often lead to the ejection of the particle from the trap. We found that 6 out of 22 core-shell (CS) NPs show cooling for only 2 (out of 16) for the bare nanoparticles. We measure a minimum temperature of 126 ± 2 K for a core-shell design at a pressure of 266 mbar. We also measure a CS nanoparticle cooling down to 148 ± 4 K down to a pressure of 26 mbar, hence for the first time bringing down the internal temperature of a levitated nanoparticle in the underdamp regime, opening up the possibility for the absolute cooling (internal and COM temperature) of a levitated nanoparticle. Despite the nanoparticles being uniform in size/shape and PL, they show a variety of thermodynamic properties that can be reproduced in simulation by slight variations in quantum yield η_e and/or background absorption α_b. These two quantities are related as a stronger background absorption will mean a lower quantum yield. Statistically, the core-shell nanoparticles have higher quantum yield, hence cooling power. It is worth noting that a mere change of 0.05% can mean going from a cryostat to a heater. From this study, we can deduce that solid-state refrigeration seems to require more stringent requirements than upconversion imaging. 
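To make the power balance above concrete, the following sketch solves Q̇^gas + Q̇^laser + Q̇^fluo = 0 for the steady-state internal temperature, treating the absorption as temperature-independent so that the balance is linear in T_int. Units and absolute scales are glossed over (α carries the appropriate cross-section factor), and the effective gas-molecule mass, the accommodation coefficient, and the default wavelengths are illustrative assumptions; this is not the simulation code behind the solid lines in the figures.

```python
import numpy as np

kB = 1.380649e-23            # J/K
M_GAS = 4.8e-26              # kg; effective air-molecule mass (assumption)

def gas_coupling(T_gas, P_gas, radius, a_acc=0.05, gamma=7.0 / 5.0):
    """Prefactor G such that Q_gas = -G*(T_int/T_gas - 1), Knudsen-regime expression."""
    v_th = np.sqrt(8.0 * kB * T_gas / (np.pi * M_GAS))       # mean thermal speed
    return (a_acc * np.sqrt(2.0 / (3.0 * np.pi)) * np.pi * radius**2 * v_th
            * (gamma - 1.0) / (gamma + 1.0) * P_gas)

def net_optical_power(I, alpha, alpha_b, eta_e, lam_pump=1020e-9, lam_fluo=999.6e-9):
    """Laser heating (alpha+alpha_b)*I minus anti-Stokes cooling eta_e*(w_f/w_p)*alpha*I."""
    return (alpha + alpha_b) * I - eta_e * (lam_pump / lam_fluo) * alpha * I

def steady_state_T_int(T_gas, P_gas, radius, I, alpha, alpha_b, eta_e):
    """Internal temperature at which gas exchange balances the net optical power."""
    return T_gas * (1.0 + net_optical_power(I, alpha, alpha_b, eta_e)
                    / gas_coupling(T_gas, P_gas, radius))
```

The structure already captures the observed behaviour: a negative net optical power (efficient cooler) gives T_int < T_gas, a positive one gives heating, and lowering P_gas shrinks the gas coupling so the same optical power pushes T_int further from the gas temperature.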
Indeed, both samples of nanoparticles show similar shape, size and brightness, but their cooling properties are clearly not as uniform. The fact that the core-only NPs show barely any cooling seems to indicate that their properties were degraded. Furthermore, because we still measure cooling with the core-shell NPs, it seems reasonable to point towards a degradation of the surface of the NPs, possibly during the surface modification step. As one can expect, a bigger nanocrystal will yield more cooling (or heating) power for fixed η_e and α_b, so it could be advantageous in the future to work with bigger mesoscopic particles. The single-particle thermodynamics presented here also points to an interesting new avenue for characterising the quantum yield of levitated nanocryostats. § CONCLUSIONS In conclusion, we have characterized the thermodynamic (i.e. laser refrigeration) properties of a range of 10% Yb^3+:NaYF_4 nanocrystals, with and without an inert shell. Although nanoengineering brighter NPs is relatively mature in the field of upconversion imaging, we show here for the first time that the same techniques can be employed to design more efficient nanocryostats. This opens avenues for further improvement: higher doping rates, co-doping with other lanthanide species, alternative designs such as core-multi-shell, and even different aspect ratios. Achieving this goal will greatly benefit the levitodynamics community by offering a unique method to control the internal temperature of levitated objects, which will affect their oscillator coherence properties in UHV. Although this work was carried out in the context of levitated optomechanics, the results and methods can be extended, for instance, to the field of biology, where controlling the temperature of physiological media would add an important capability. § ACKNOWLEDGEMENTS C. L. is supported by the Sydney Quantum Academy Postdoctoral Research Fellowship and would like to thank Gabriel Hetet, John G. Bartholomew, Philippe Goldner, George Winstone and generally the LEVINET collaboration network for inspiring and insightful discussions. T. V. and R. P. R. acknowledge support from the Australian Research Council Centre of Excellence for Engineered Quantum Systems (Grant No. CE170100009) and Lockheed Martin. The authors would like to acknowledge help in the preliminary stage of this experiment from Katherine Kinder and Michael Robinson, as well as Xianlin Zheng for preparing the first-generation nanocrystal samples. § APPENDICES §.§ Thermodynamics Here are the values used for the simulated thermodynamic performance (solid lines in Fig.<ref>): α_b = 0.001α with N_t = 1.47×10^22 cm^3 and σ = 1.80×10^-20 cm^-2 <cit.>, a_acc = 0.05, λ_fluo = 999.6 nm. When the pressure is low enough (usually for P_gas < 10^-6 mbar), the blackbody radiation of both the environment and the particle starts to play a role, and its contribution is given by <cit.>: Q̇^bb_h = 24ξ_R(5)/(π^2ϵ_0c^3ħ^4) α^”_bb k_B^5(T_env^5-T_int^5) §.§ Thermometry We can potentially estimate the temperature by looking at the change in the linewidth of the main optical transition at 973 nm. In the range of temperatures from 50 to 300 K, a nearly quadratic (≈ T^1.9±0.1) dependence of the homogeneous linewidth has been commonly reported in a large variety of hosts and lanthanide ions <cit.>. For this range of temperatures, the broadening is dominated by the homogeneous optical linewidth of the ions. Phonon processes are the dominant cause of the broadening.
§.§ Materials synthesis We here describe the synthesis for the two different types of nanocrystals discussed in the article: 10%Yb^3+:NaYF_4 nanocrystals (diameter of 160 nm and thickness of 80 nm and 10%Yb^3+:NaYF_4@NaYF_4 with a 5 nm inert outer shell (170 nm and thickness of 90 nm). Chemical and reagents: Ytterbium (III) chloride hexahydrate (YbCl_3·6H_2O, 99.99%), yttrium (III) chloride hexahydrate (YCl_3·6H_2O, 99.99%), oleic acid (OA, 90%), 1-octadecene (ODE, 90%), sodium oleate (≥82, fatty acids), ammonium fluoride (NH_4F, ≥98%), and sodium hydroxide (NaOH, ≥97%, pellets) were purchased from Sigma-Aldrich and used as received without further purification. Synthesis of core nanoparticles: The growth of core nanoparticles was precisely controlled by a purpose-built automated growth system <cit.> To synthesize β-NaY_0.9Yb_0.1F_4 core particles, YCl_3 and YbCl_3(total 1 mmol) powder were dissolved in methanol and mixed in a three-neck flask with 6 mL OA and 15 mL ODE. The mixture was stirred and heated to 75 °C for 30 min and then 160 °C for another 30 min. After cooling back to room temperature, 2.5 mmol NaOH and 3 mmol NH_4F were dissolved in methanol and added to the flask. The solution was kept at 75 °C and 160 °C for 30 min, respectively. Then the temperature was increased to 310 °C for 90 min. All the reaction was carried out under Argon gas flow. After the solution was cooled down to room temperature, the nanoparticles were washed and precipitated with ethanol, collected by centrifugation (6000 rpm for 6 min), and dispersed in cyclohexane. Synthesis of the inert shell precursor: The synthesis procedure of shell precursor (NaYF_4) was similar to the core nanoparticles except using YCl_3 instead of YbCl_3. Moreover, the reaction was stopped before the solution was heated to 310 °C. Synthesis of the core-shell nanoparticles: Based on the diameter of the core and the intended shell thickness, 1.5 mL NaYF_4 precursor and 70 mg core nanoparticles were mixed in a three-neck flask with 6 mL OA and 15 mL ODE. The mixture was stirred and heated to 75 °C for 30 min and then to 310 °C for 40 min. The process was under the protection of Argon gas flow. After the solution was cooled down to room temperature, core-shell nanoparticles were washed and dispersed by the same method as the core particles. Modification of the surface for dilution: We need to dilute the nanoparticle solution in ethanol before nebulization into the trap so we modify the surface from hydrophobic to hydrophilic. 2 mg UCNPs dissolved in 1 mL cyclohexane were mixed with 9 mL pH 4 hydrochloride acid solution. The mixture was vortexed for 2 h. The hydrophilic nanoparticles were collected by centrifugation (8000 rpm for 15 min) and dispersed in milli-Q water.
http://arxiv.org/abs/2307.02864v1
20230706090420
Critical behavior of Anderson transitions in higher dimensional Bogoliubov-de Gennes symmetry classes
[ "Tong Wang", "Zhiming Pan", "Keith Slevin", "Tomi Ohtsuki" ]
cond-mat.dis-nn
[ "cond-mat.dis-nn" ]
[email protected] International Center for Quantum Materials, School of Physics, Peking University, Beijing 100871, China Collaborative Innovation Center of Quantum Matter, Beijing 100871, China [email protected] Institute for Theoretical Sciences, Westlake University, Hangzhou 310024, Zhejiang, China Department of Physics, Osaka University, Toyonaka, Osaka 560-0043, Japan Physics Division, Sophia University, Chiyoda-ku, Tokyo 102-8554, Japan Disorder is ubiquitous in solid-state systems, and its crucial influence on transport properties was revealed by the discovery of Anderson localization. Generally speaking, all bulk states will be exponentially localized in the strong disorder limit, but whether an Anderson transition takes place depends on the dimension and symmetries of the system. The scaling theory and symmetry classes are at the heart of the study of the Anderson transition, and the critical exponent ν characterizing the power-law divergence of localization length is of particular interest. In contrast with the well-established lower critical dimension d_l=2 of the Anderson transition, the upper critical dimension d_u, above which the disordered system can be described by mean-field theory, remains uncertain, and precise numerical evaluations of the critical exponent in higher dimensions are needed. In this study, we apply Borel-Padé resummation method to the known perturbative results of the non-linear sigma model (NLσM) to estimate the critical exponents of the Boguliubov-de Gennes (BdG) classes. We also report numerical simulations of class DIII in 3D, and classes C and CI in 4D, and compare the results of the resummation method with these and previously published work. Our results may be experimentally tested in realizations of quantum kicked rotor models in atomic-optic systems, where the critical behavior of dynamical localization in higher dimensions can be measured. Critical behavior of Anderson transitions in higher dimensional Bogoliubov-de Gennes symmetry classes Tomi Ohtsuki August 1, 2023 ===================================================================================================== § INTRODUCTION Since the discovery of Anderson localization <cit.>, the effects of disorder in various media have been a constant focus of the physics community. The disorder-driven Anderson transition (AT) is a second-order quantum phase transition, around which physical observables show universal power-law behaviors. The universality class of the AT depends on the dimensionality and fundamental symmetries of the system: time-reversal symmetry, particle-hole symmetry, and chiral symmetry <cit.>. Based on these symmetries, Altland and Zirnbauer (AZ) completed the symmetry classification of non-interacting disordered Hamiltonians known as the “10-fold way" <cit.>. The classification is comprised of the three Wigner-Dyson classes (A, AI, and AII), the three chiral classes (AIII, BDI, and CII), and the four Bogoliubov–de Gennes (BdG) classes (D, C, DIII, and CI). The AZ classification is revelatory not only to the study of localization phenomena, but to the study of topological materials <cit.>. The critical exponent ν of the AT characterizes the power-law divergence of the correlation length ξ on approaching the critical point, ξ∼ |x-x_c|^-ν, where x is the tuning parameter and x_c is the critical point. 
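As an illustration of how ν is extracted in practice, the sketch below fits the power-law divergence ξ ∼ |x − x_c|^{−ν} to correlation-length data with the critical point as a free parameter. Real finite-size-scaling analyses (such as the one used later in this paper) fit the scaled quantity Λ = λ/L with corrections to scaling; this is only a minimal, hedged stand-in with hypothetical function names.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_critical_exponent(W, xi, Wc_guess):
    """Fit xi ~ |W - W_c|^{-nu} to localization-length data (sketch).

    W  : disorder strengths on one side of the transition
    xi : corresponding correlation/localization lengths
    Returns (W_c, nu).
    """
    W, xi = np.asarray(W, float), np.asarray(xi, float)

    def model(w, logA, wc, nu):
        return logA - nu * np.log(np.abs(w - wc))

    p0 = [np.log(xi[0]), Wc_guess, 1.0]
    (logA, wc, nu), _ = curve_fit(model, W, np.log(xi), p0=p0)
    return wc, nu
```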
Constrained by computational capacity, relatively few numerical studies have gone beyond three-dimensions (3D) <cit.> into higher dimensions where a stronger strength of disorder is required to drive the system into localization. A strong-disorder renormalization group (RG) approach is in development to provide theoretical insights <cit.>. Recently, the potentials of such efforts are revealed by the proposed superuniversality of ATs in Hermitian and non-Hermitian systems <cit.>, and the mapping between certain disorder-free interacting systems and disordered non-interacting systems with extra dimension <cit.>. Moreover, the theory and numerical simulations are applicable to experimental realizations of quantum kicked rotors with synthetic dimensions <cit.>. While the lower critical dimension d_l=2 of the AT is well established by the one-parameter scaling theory <cit.>, the upper critical dimension d_u, above which a mean-field description is accurate, remains debatable. The self-consistent theory of AT by Vollhardt and Wölfle <cit.> gives the critical exponent of Anderson model (class AI) as ν=1/d-2, 2<d<4, 1/2 , d≥4. The results that d_u=4 and the mean-field critical exponent ν=1/2 are reminiscent of the ϕ^4 theory. A modified version of this theory that considers the renormalization of the diffusion coefficient <cit.> gives ν =1/2 +1/d-2, and d_u=∞. The prediction of the limiting value lim_d→∞ν = 1/2, by both theories agree with the value from the Anderson model on an infinite-dimensional Bethe lattice <cit.>. However, Eq. (<ref>) is in better agreement with numerical results<cit.> of the orthogonal symmetry class for d=3,4,5,6 than Eq.(<ref>). On the other hand, the nonlinear sigma model (NLσM), an effective field theory of Anderson localization, has been studied extensively in d=2+ϵ dimensions <cit.>. The β-function, which describes the renormalization of the conductance with system size, can be calculated analytically using perturbation techniques <cit.>. From the β-function one can derive relevant physical quantities including a series in powers of ϵ for the critical exponent ν. This method, which is referred to as the ϵ-expansion, is rigorous only when ϵ≪ 1. In this limit, the ϵ-expansion gives ν = 1 / ϵ in agreement with Eq. (<ref>) but not Eq. (<ref>) and with numerical simulations on fractals with spectral dimensions close to 2 <cit.>. To obtain results for higher dimensions resummation methods are needed. However, a straightforward resummation<cit.> of the power series for the critical exponent yields ν→ 0 in the limit d →∞ in disagreement with both Eq. (<ref>) and Eq. (<ref>). For the Wigner-Dyson classes ressumations that incorporate the correct asymptotic behaviour of the critical exponent for d →∞ have been performed <cit.> giving better agreement with numerical simulations <cit.> and experimental results <cit.>. However, a comprehensive understanding of the dimensional-dependence of the AT in different symmetry classes is still lacking. In this paper, we focus on the BdG symmetry classes in 3D and 4D. The four BdG classes appear naturally in the topological superconducting system <cit.>. The underlying BdG Hamiltonian H is invariant under the antiunitary transform of particle-hole symmetry (PHS) 𝒞=U_CK, 𝒞: H → -U_C^† H^T U_C, where U_C is a unitary matrix and K denotes the operation of complex conjugation <cit.>. The BdG universality classes are realized at the particle-hole symmetric point, E=0. 
The particle-hole symmetry can be classified into two kinds, even (𝒞^2=+1) or odd (𝒞^2=-1). The symmetry classes can further be characterized by time-reversal symmetry (TRS) 𝒯. There are four BdG classes: singlet/triplet SC (class D), singlet SC (class C), singlet/triplet SC with TRS (class DIII) and singlet SC with TRS (class CI). Class D and class C describes BdG systems with even or odd PHS and broken TRS. Classes DIII and CI are characterized by a time-reversal operator 𝒯: H→ U_T H^T U_T^-1, where the unitary matrix U_T satisfies U_T^2=± 1. For classes DIII, one has PHS 𝒞^2=+1 and TRS 𝒯^2=-1. For class CI, one has PHS 𝒞^2=-1 and TRS 𝒯^2=+1. The symmetries of the BdG classes are summarized in Table <ref>. Due to the absence of spin-rotation invariance, class D and class DIII exhibit weak antilocalization. Below we apply the resummation method previously employed <cit.> for the Wigner-Dyson symmetry classes to the BdG classes. We also report simulations using the transfer matrix method for class DIII in 3D, and classes C and CI in four dimensions (4D). We compare estimates of the critical exponent ν obtained by finite-size scaling analysis of the numerical simulations with the results of the resummation method. Our results show the ability of this Borel-Padé analysis to give quantitative predictions of critical exponents ν for the BdG classes beyond 2D. The rest of the paper is organized as follows. In Sec. <ref>, we review briefly the Borel-Padé resummation. In Sec. <ref>, we apply the Borel-Padé method to the ϵ-series of the critical exponent ν for the BdG classes. In Sec. <ref>, we apply the Borel-Padé method to the ϵ-series of the β-functions. In Sec. <ref> we report our numerical simulations. In Sec. <ref> we compare the Borel-Padé predictions with numerical results (both those reported here and previously published work). A summary is given in Table <ref>. In Sec. <ref> we discuss and conclude our findings. § BOREL-PADÉ RESUMMATIONS In the scaling theory of Anderson transition <cit.>, the β-function is defined as, β(g) =dln g/dln L, where g is the dimensionless conductance measured in units of e^2/h and summed over the spins, and L is the length of a d-dimensional cubic system. For the NLσM description it is more convenient to work with the inverse conductance t=1/(π g) and β(t) =-dt/dln L = β(g)/π g. The critical point t_c>0 of the AT is a zero-crossing point of β(t) β(t_c) =0, and the critical conductance is given by g_c = 1/(π t_c). The critical exponent ν is related to the derivative of the β-function at the critical point, dβ(t)/dt|_t=t_c =-dβ(g)/dln g|_g=g_c =-1/ν. The β-functions of the BdG classes up to the 4-loop order <cit.> are listed in Table <ref>. Note that the coefficient of t^5 for class C in Table <ref> differs from that given in Table III of Ref. <cit.>. [We use the value -376/3 for the coefficient of t^5 for class C, whereas in Table III of Ref. <cit.> it is -376/48. We believe the latter is a typo and that the coefficient c_3(-2N) should be replaced by 16 c_3(-2N) so that the β-functions of classes D and C satisfy the duality relation β_ Sp(t)=-2β_ O(-t/2) of the underlying NLσM manifolds. The β-function of class D, which corresponds to the NLσM manifold Sp(2N)/U(N), is given in Eq. (3.7) of Ref. <cit.>. We thank Alexander D. Mirlin for private communication.]. We also note in passing that the β-functions of the chiral symmetry classes were found to be strictly zero in all orders in perturbation theory <cit.>. 
The Borel-Padé resummation method is a technique for dealing with truncated and possibly divergent series. Given an infinite series f, f(x) = ∑_k f_k x^k, its Borel sum is defined as f̃(x) = ∑_k (f_k/k!) x^k. The original series in Eq. (<ref>) can be recovered by calculating the Borel transform f(x) = (1/x) ∫_0^∞ e^{-y/x} f̃(y) dy. Suppose the coefficients f_k are known for order k ≤ l. We approximate f̃ on the r.h.s. by a rational function f̃(x) ≈ r(x) = p(x)/q(x), where p(x), q(x) are polynomials of order m and n, respectively, p(x) = ∑_k=0^m p_k x^k, q(x) = ∑_k=0^n q_k x^k, q_0 ≡ 1. For choices of [m,n] that satisfy m+n=l, the coefficients of the polynomials p and q are uniquely determined. In some cases we require m<n so that the Padé approximant satisfies lim_x→∞ r(x) = 0. Then, the rational function r can be decomposed into a sum of partial fractions r(x) = ∑_j=1^n a_j/(x-λ_j), where λ_j are the roots of the polynomial q(x). In general, the λ_j and a_j are complex numbers. Substituting the above equation into Eq. (<ref>) and performing the integration, we obtain the Borel-Padé approximation F of the series for f, F(x) = (1/x) ∑_j=1^n a_j B(λ_j/x). Here, the function B is defined by B(s) = -exp(-s) Ei(s) for s ∈ ℝ, s ≠ 0, and B(s) = exp(-s) E_1(-s) for s ∈ ℂ, |arg(-s)| < π, where Ei(x) = -∫_-x^∞ e^{-t}/t dt = ∫_-∞^x e^{t}/t dt, and E_1(z) = ∫_z^∞ e^{-t}/t dt for |arg z| < π. § RESUMMATION OF THE SERIES FOR ν(ϵ) Series in powers of ϵ for the critical exponent ν can be derived starting from the series for the β-function in powers of t as follows. We take symmetry class C as an example. We first find an approximation for t_c(ϵ) by solving Eq. (<ref>) using the available terms in the power series for β(t). For class C we find t_c(ϵ) = (1/2)ϵ - ϵ^2 + (9/4)ϵ^3 - (77/12)ϵ^4 + 𝒪(ϵ^5). Here we have chosen the root for which lim_ϵ→0 t_c = 0. If we then substitute the series for t_c into Eq. (<ref>), we obtain the following series in powers of ϵ for the inverse of ν, 1/ν(ϵ) = ϵ + 2ϵ^2 - ϵ^3 + (15/2)ϵ^4 + 𝒪(ϵ^5). Taking the reciprocal of this series we obtain ν(ϵ) = 1/ϵ - 2 + 5ϵ - (39/2)ϵ^2 + 𝒪(ϵ^3). Similarly, for symmetry class CI we find t_c(ϵ) = (1/2)ϵ - (1/4)ϵ^2 + (1/16)ϵ^3 - [(1+9ζ(3))/24]ϵ^4 + 𝒪(ϵ^5), 1/ν(ϵ) = ϵ + (1/2)ϵ^2 + (1/4)ϵ^3 + [(5+36ζ(3))/16]ϵ^4 + 𝒪(ϵ^5), and ν(ϵ) = 1/ϵ - 1/2 - [(3+36ζ(3))/16]ϵ^2 + 𝒪(ϵ^3). This approach works for symmetry classes C and CI because the coefficient of the t^2 term in β(t) is negative and the lower critical dimension for these classes is d_l=2. However, for symmetry classes D and DIII the coefficient of the t^2 term in β(t) is positive, so that when we follow the procedure explained above we find lim_ϵ→0 t_c ≠ 0, and we are unable to obtain a useful series in powers of ϵ for ν. This reflects the possibility that the lower critical dimension for these two classes is below two (d_l<2), as is thought to be the case for the symplectic class AII. Now we apply the Borel-Padé resummation introduced in the previous section. A naive resummation tacitly assumes the limiting behavior lim_d→∞ ν = 0, which disagrees with self-consistent theories of the AT and with the results for the AT on the Bethe lattice, i.e., with Eq. (<ref>). Instead, we rewrite ν(ϵ) = 1/2 + f(ϵ)/ϵ, and perform the resummation of f(ϵ) with the requirement m ≤ n. Such a treatment guarantees the limiting behavior given in Eq. (<ref>). Of course, the application of this constraint to the BdG symmetry classes needs to be justified. For later reference, in Table <ref>, we compare the results given by imposing Eq. (<ref>) and Eq. (<ref>) for classes C and CI in 3D and 4D.
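The procedure summarised by the equations above can be sketched numerically in a few lines. The script below is only a sketch (it assumes numpy/scipy and is not the code used for the Tables): it reads the coefficients of f(ϵ) = ϵ[ν(ϵ) − 1/2] off the class C series for ν(ϵ) quoted above, forms the Borel sum, replaces it by a [0,3] Padé approximant (one admissible choice consistent with m ≤ n), and evaluates the Borel-Padé approximation F via the partial-fraction formula and the function B. Setting ϵ = d − 2 to 1 and 2 then gives rough 3D and 4D estimates of ν to be compared with the Tables.

```python
import numpy as np
from math import factorial
from scipy.special import expi, exp1

def pade(coeffs, n):
    """[m, n] Pade approximant of the truncated series sum_k coeffs[k] x^k,
    with numerator degree m = len(coeffs) - 1 - n and q_0 = 1.
    Returns (p, q) as coefficient arrays in increasing powers of x."""
    l = len(coeffs) - 1
    m = l - n
    # q_1..q_n follow from requiring that orders m+1..l of (series * q) vanish.
    A, b = np.zeros((n, n)), np.zeros(n)
    for row, s in enumerate(range(m + 1, l + 1)):
        b[row] = -coeffs[s]
        for i in range(1, n + 1):
            if 0 <= s - i <= l:
                A[row, i - 1] = coeffs[s - i]
    q = np.concatenate(([1.0], np.linalg.solve(A, b)))
    p = np.array([sum(q[i] * coeffs[k - i] for i in range(0, min(k, n) + 1))
                  for k in range(m + 1)])
    return p, q

def B(s):
    """The function B(s) defined above, via the exponential integrals."""
    if np.iscomplexobj(s) and abs(np.imag(s)) > 1e-12:
        return np.exp(-s) * exp1(-s)
    s = np.real(s)
    return -np.exp(-s) * expi(s)

def borel_pade(coeffs, n, x):
    """Borel-Pade approximation F(x) of the truncated series 'coeffs',
    using an [l-n, n] Pade approximant of the Borel sum."""
    btc = [c / factorial(k) for k, c in enumerate(coeffs)]   # Borel-sum coefficients
    p, q = pade(btc, n)
    lam = np.roots(q[::-1])                                  # roots of q
    dq = np.polyder(q[::-1])
    a = [np.polyval(p[::-1], L) / np.polyval(dq, L) for L in lam]
    return np.real(sum(aj * B(Lj / x) for aj, Lj in zip(a, lam))) / x

# f(eps) = eps*(nu(eps) - 1/2) for class C, from the nu(eps) series quoted above.
f_coeffs = [1.0, -5 / 2, 5.0, -39 / 2]

for d in (3, 4):
    eps = d - 2
    nu = 0.5 + borel_pade(f_coeffs, n=3, x=eps) / eps        # [0,3] approximant
    print(f"d = {d}:  nu (class C, [0,3] Borel-Pade of f) = {nu:.3f}")
```

The same machinery, applied to a different series and with a different constraint, underlies the resummation of the β-function in the next section.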
§ RESUMMATION OF THE SERIES FOR β(t) An alternative to the approach above is to apply the Borel-Padé method directly to the series for the β-function <cit.>. All the series take the form β(t) = ϵt - t f(t), where f is a power series in t. In terms of f(t) the critical exponent is 1/ν = t df(t)/dt|_t=t_c. We need to impose the limiting behaviour at infinite dimension given in Eq. (<ref>). We first note that in high dimensions the Anderson transition takes place at strong disorder and, moreover, that lim_d→∞ t_c = ∞. This means that we can obtain the correct limiting behaviour by arranging that lim_t→∞ t df/dt = A, with A=2. To do so, we define h, a polynomial in t, by h(t) = t df(t)/dt - A. Applying the Borel-Padé method to h, we obtain an approximation H for h that satisfies lim_t→∞ H(t) = 0, so that Eq. (<ref>) is satisfied. To obtain the corresponding approximation F for f, a further integration is needed, f(t) ≈ F(t) = ∫_0^t [A+H(t')]/t' dt'. The result can be expressed in the form <cit.> F(t) = ∑_j=1^n c_j B(λ_j/t), with c_j = a_j/λ_j. Finally, the β-function is approximated as β(t) ≈ ϵt - t F(t). We show the resulting Borel-Padé approximations for β(g) in 3D for classes C and CI in Fig. <ref> and Fig. <ref>, respectively, together with the series without resummation. We omit the [m,n]=[1,3] resummation for class C because the resulting β-function is not monotonic and has two unphysical fixed points. The limiting behavior β(g) ∼ 2 ln g at g ≪ 1, guaranteed by the constraint A=2 in Eq. (<ref>), is observed only at ln g much smaller than the range plotted in Fig. <ref>. We show the resulting Borel-Padé approximations for β(g) in 2D for classes D and DIII in Fig. <ref> and Fig. <ref>, respectively, together with the series without resummation. In classes D and DIII, for d<2, two fixed points appear: a critical fixed point and a stable fixed point. At the lower critical dimension d_l these two fixed points annihilate (see, e.g., the dashed curves in Fig. <ref> and Fig. <ref>), and the value of the β-function at its maximum is zero, max_{d=d_l} β(g) = 0. This leads directly to an estimate for d_l, d_l ≈ 2 - max_g β(g, ϵ=0). Estimates of the lower critical dimension obtained from the Borel-Padé resummations are summarized in Table <ref>. § NUMERICAL SIMULATIONS To evaluate the effectiveness of the Borel-Padé resummation in estimating the critical exponents of the BdG symmetry classes, especially in high spatial dimensions d ≥ 3, we perform simulations for 3D class DIII, 4D class C, and 4D class CI. We set the energy E to the particle-hole symmetric point, E=0, and vary the disorder strength W. §.§ 3D class DIII This symmetry class describes time-reversal symmetric superconductors with broken spin-rotational symmetry. We study a four-band tight-binding model on a cubic lattice <cit.>, ℋ_DIII = ∑_𝐫,𝐫' c_𝐫^† [H_DIII]_𝐫𝐫' c_𝐫' = ∑_𝐫∑_μ=1^3 [ (it/2) c_𝐫+𝐞_μ^† α_μ c_𝐫 - (m_2/2) c_𝐫+𝐞_μ^† β c_𝐫 + H.c. ] + ∑_𝐫 (m_0 + 3m_2 + v_𝐫) c_𝐫^† β c_𝐫, where c^†_𝐫 (c_𝐫) is the 4-component creation (annihilation) operator on a cubic-lattice site 𝐫. For convenience we set the lattice constant a to unity. The 𝐞_μ, μ=1,2,3, are the primitive lattice vectors along the x, y, z directions, respectively. The matrices α_μ and β are defined as α_μ = ([ 0 σ_μ; σ_μ 0 ]), β = ([ 1 0; 0 -1 ]), where σ_μ and τ_μ are Pauli matrices acting on different degrees of freedom (e.g., spin and orbital). Parameter m_0 is a mass, and parameters m_2 and t are hopping amplitudes.
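The clean-limit band structure of this model follows from a straightforward Fourier transform of the Hamiltonian above, h(k) = t ∑_μ sin k_μ α_μ + [m_0 + m_2 ∑_μ (1 - cos k_μ)] β (the overall sign convention of the transform is an assumption of this sketch). Because the matrices α_μ and β mutually anticommute, h(k)^2 is proportional to the identity and the four bands are ±E(k), each doubly degenerate; the short numpy check below verifies this with the parameter values used for the simulations.

```python
import numpy as np

# Pauli matrices and the 4x4 Dirac-type matrices defined above:
# alpha_mu has sigma_mu on the off-diagonal blocks, beta = diag(+1, -1) blocks.
s0 = np.eye(2, dtype=complex)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
alpha = [np.kron(np.array([[0, 1], [1, 0]]), sm) for sm in s]
beta = np.kron(np.diag([1.0, -1.0]), s0)

def h_diii(k, t=2.0, m2=1.0, m0=-2.5):
    """Clean-limit Bloch Hamiltonian of the class DIII lattice model
    (parameter values are those quoted for the numerical simulations)."""
    hk = sum(t * np.sin(k[m]) * alpha[m] for m in range(3))
    hk = hk + (m0 + m2 * np.sum(1.0 - np.cos(k))) * beta
    return hk

rng = np.random.default_rng(0)
for _ in range(5):
    k = rng.uniform(-np.pi, np.pi, size=3)
    ev = np.linalg.eigvalsh(h_diii(k))
    # h(k)^2 is proportional to the identity, so the spectrum is +/-E(k),
    # each branch doubly degenerate:
    E = np.sqrt(np.sum((2.0 * np.sin(k)) ** 2)
                + (-2.5 + np.sum(1.0 - np.cos(k))) ** 2)
    assert np.allclose(ev, [-E, -E, E, E], atol=1e-10)
print("clean-limit bands are doubly degenerate +/-E(k), as expected")
```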
This Hamiltonian has time-reversal symmetry U_T^† H_ DIII^* U_T = H_ DIII where U_T = δ_𝐫𝐫' (σ_2 ⊗τ_0) , U_T^T=-U_T, and a particle-hole symmetry U_S^† H_ DIII U_S = -H_ DIII where U_S = δ_𝐫𝐫' (τ_0 ⊗τ_2). This model depicts a 3D ℤ topological insulator (TI) when m_0 < 0 and a trivial insulator when m_0>0. For numerical calculations, we specify the parameters t=2, m_2=1, m_0=-2.5, and use independent uniform distributions for the random on-site potential v_𝐫∈ [-W/2,W/2], ⟨ v_𝐫 v_𝐫'⟩ =δ_𝐫𝐫' W^2/12. Here, ⟨⋯⟩ indicates a disorder average. We use the transfer matrix method to calculate the localization length of the model <cit.> and impose periodic boundary conditions in the transverse direction. We simulate a semi-infinite bar with a cross section of size L× L and estimate the quasi-one-dimensional (Q1D) localization length λ at disorder strength W and linear size L. A dimensionless ratio Λ is defined as Λ(W,L) = λ(W,L)/L. The results are shown in Fig. <ref> where Λ is plotted versus W for various L. Curves for different L have an approximate common crossing point. This point indicates the Anderson transition between the TI (localized) phase and the metallic (extended) phase. To estimate the critical exponent, we fit the data to the following scaling form that includes corrections to single parameter scaling due to an irrelevant scaling variable <cit.> Λ = F( ϕ_1, ϕ_2 ) = F( u_1(w) L^1/ν, u_2(w) L^-y), where ω = (W-W_c)/W_c, and ϕ_1=u_1L^1/ν is the relevant scaling variable that encodes the power-law divergence of correlation length ξ∼ |u_1(w)|^-ν around the critical point. The second scaling variable ϕ_2=u_2 L^-y with exponent -y<0 is the leading irrelevant correction, and vanishes in the limit L→∞. We approximate the scaling function F using a truncated Taylor series near the critical point (|w|≪ 1), F( ϕ_1, ϕ_2 ) = ∑_j=0^n_2 F_j(ϕ_1) ϕ_2^j =∑_i=0^n_1∑_j=0^n_2 f_ijϕ_1^i ϕ_2^j, and u_1 = ∑_k=1^m_1 b_k w^k, u_2 = ∑_k=0^m_2 c_k w^k. We set b_1=c_0=1 to remove the arbitrariness of the expansion coefficients. The numerical data are fitted to the scaling function by minimizing the χ-squared statistic χ^2 = ∑_n=1^N_ D(Λ_n - F_n)^2/σ_n^2. Here, N_ D is the number of data points, Λ_n is the value of Λ for nth data point, σ_n its standard error, and F_n the value of the scaling function for the nth data point. To assess whether or not the fit is acceptable, we use the goodness of fit probability. Here, this is well approximated by <cit.> GoF≈ 1 - 1/Γ(N_ F/2)∫_0^χ_ min^2/2dt e^-t t^χ_ min^2/2-1, where N_F = N_D - N_P is the degrees of freedom (with N_P the number of fitting parameters), χ_min^2 is the minimum value of the χ-squared statistic, and Γ is the Gamma function. The fitting results are shown in Table <ref> (a). Our estimate of the critical exponent for 3D Class DIII is ν = 0.96 ± 0.01 §.§ 4D class C Symmetry class C describes disordered superconductors with spin-rotational symmetry but broken time-reversal symmetry. For this symmetry class the spin quantum Hall effect occurs in two-dimensions <cit.>. We extend the 3D tight-binding model for class C of Ref. <cit.> to 4D, ℋ_ C = ∑_𝐫,𝐫' c_𝐫^† [H_ C]_𝐫𝐫' c_𝐫' = ∑_𝐫[ ∑_μ=1^3 t c_𝐫+𝐞_μ^† c_𝐫 + t_∥ c_𝐫+𝐞_4^† c_𝐫. + . it_⊥( c_𝐫+𝐞_1^†σ_1 c_𝐫 + ∑_μ=2,3 c_𝐫+𝐞_μ^†σ_2 c_𝐫) + H.c.] + ∑_𝐫 (v_𝐫+Δ) c_𝐫^†σ_3 c_𝐫. Here, c_𝐫^† is the creation operator on lattice site 𝐫=(x_1,x_2,x_3,x_4) where the two components act on spin, orbital or Nambu space depending on the nature of the system. 
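The χ²-minimization described above can be illustrated with a short script. The sketch below fits the same functional form to synthetic Λ(W, L) data using scipy, and evaluates the goodness of fit from the χ² distribution as in Eq. (<ref>); the expansion orders, parameter values and synthetic data are placeholders, not the data behind the Tables.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import chi2

# Lambda = sum_{i,j} f_ij phi1^i phi2^j, with phi1 = u1(w) L^(1/nu),
# phi2 = u2(w) L^(-y), w = (W - Wc)/Wc, u1 = w + b2 w^2, u2 = 1 + c1 w
# (b1 = c0 = 1 fixed).  Expansion orders are kept small for the example.
n1, n2 = 2, 1

def scaling_model(params, W, L):
    Wc, nu, y, b2, c1 = params[:5]
    f = params[5:].reshape(n1 + 1, n2 + 1)
    w = (W - Wc) / Wc
    phi1 = (w + b2 * w**2) * L ** (1.0 / nu)
    phi2 = (1.0 + c1 * w) * L ** (-y)
    out = np.zeros_like(W, dtype=float)
    for i in range(n1 + 1):
        for j in range(n2 + 1):
            out += f[i, j] * phi1**i * phi2**j
    return out

def residuals(params, W, L, Lam, sig):
    return (Lam - scaling_model(params, W, L)) / sig

# Synthetic data standing in for the transfer-matrix output Lambda(W, L).
rng = np.random.default_rng(1)
Ls, Ws = np.array([8, 12, 16, 24]), np.linspace(19.0, 21.0, 9)
L, W = [a.ravel().astype(float) for a in np.meshgrid(Ls, Ws)]
true = np.array([20.0, 0.96, 1.5, 0.0, 0.0,            # Wc, nu, y, b2, c1
                 0.60, 0.10, -0.80, 0.0, 0.50, 0.0])   # f_ij
sig = np.full_like(W, 0.003)
Lam = scaling_model(true, W, L) + rng.normal(0.0, sig)

# Chi-squared fit and goodness of fit.
p0 = np.array([19.8, 0.9, 1.2, 0.0, 0.0, 0.6, 0.1, -0.5, 0.0, 0.3, 0.0])
fit = least_squares(residuals, p0, args=(W, L, Lam, sig))
chisq, ndof = np.sum(fit.fun**2), len(W) - len(p0)
print(f"Wc = {fit.x[0]:.3f}, nu = {fit.x[1]:.3f}, GoF = {chi2.sf(chisq, ndof):.2f}")
```

The conductance data for the 4D class C model introduced above are analysed with the same fitting procedure.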
The Hamiltonian has a particle-hole symmetry U_P^† H_C^* U_P = -H_C with U_P = δ_𝐫𝐫' e^{iπ∑_μ=1^4 𝐫·𝐞_μ}σ_2, U_P^T = -U_P. In the clean limit, the Fourier transform of the Hamiltonian is h_C(k) = 2t_∥cos k_4 + 2t ∑_μ=1^3 cos k_μ + Δσ_3 - 2t_⊥[ sin k_1 σ_1 + (sin k_2 + sin k_3) σ_2 ]. For numerical simulations, we set Δ=0.5, t_⊥=t=1 and t_∥=0.8, so that the clean system has a finite Fermi surface at E_F=0. We calculate the two-terminal Landauer conductance G using the transfer matrix method <cit.>, G = (e^2/h) g, with g = Tr[t̃^† t̃], where t̃ is the transmission matrix of hypercubic samples of size L^4, with the current along the w axis. We impose periodic boundary conditions in the directions transverse to the current. While the dimensionless conductance g exhibits fluctuations, various disorder averages are well described by a scaling function like Eq. (<ref>) <cit.>. We calculate ln⟨ g ⟩, and use the same nonlinear fitting procedures as described in Eqs. (<ref>)-(<ref>). Each data point ⟨ g ⟩ is averaged over 5000–20000 samples to ensure a relative error smaller than 1%. The results for the critical exponent ν and other quantities are shown in Table <ref> (b). The fitting results are stable against changes of the expansion orders m_1, m_2 and of the range of system sizes. Our estimate of the critical exponent for 4D class C is ν = 0.72 ± 0.02. Note that the critical disorder W_c and critical conductance g_c are model-dependent, i.e., not universal. §.§ 4D class CI Symmetry class CI describes disordered superconductors with both time-reversal symmetry and spin-rotational symmetry. Again, we extend the 3D class CI model of Ref. <cit.> to 4D, H_CI = ∑_𝐫,𝐫' c_𝐫^† [H_CI]_𝐫𝐫' c_𝐫' = ∑_𝐫[ ∑_μ=1^3 t_⊥ c_𝐫+𝐞_μ^† c_𝐫 + t_∥ c_𝐫+𝐞_4^†σ_3 c_𝐫 + t'_∥ c_𝐫+𝐞_4^†σ_1 c_𝐫 + H.c.] + ∑_𝐫 (v_𝐫+Δ) c_𝐫^†σ_1 c_𝐫. The Hamiltonian is time-reversal symmetric since H_CI^* = H_CI, and has a particle-hole symmetry U_P^† H_CI^* U_P = -H_CI given by U_P = δ_𝐫𝐫' e^{iπ∑_μ=1^3 𝐫·𝐞_μ}σ_2, U_P^T = -U_P. In the clean limit, the Fourier transform of the Hamiltonian is h_CI(k) = 2t_⊥∑_μ=1^3 cos k_μ + 2t_∥^' cos k_4 σ_3 + (Δ + 2t_∥cos k_4) σ_1. In numerical simulations of the two-terminal Landauer conductance, we choose Δ=1.2, t_⊥=1 and t_∥ = t'_∥ = 0.5. Following the same procedures as described in the previous section, we estimate the critical exponent ν and other quantities. The results are shown in Table <ref> (b). Our estimate of the critical exponent ν for 4D class CI is ν = 0.83 ± 0.04. § COMPARISON OF BOREL-PADÉ PREDICTIONS WITH NUMERICAL RESULTS Referring to Table <ref>, we see that for classes C and CI in both 3D and 4D, the estimates of the critical exponent obtained with the [0,4] Borel-Padé resummations are in good agreement with the numerical estimates. For 3D class D the discrepancy is relatively large, and it is even larger for 3D class DIII. These are also the two symmetry classes where d_l<2 (see Table <ref>). In addition, we note an inconsistency between our estimate of the critical exponent for 3D class DIII, ν = 0.96 ± 0.01, and that of Ref. <cit.>, ν = 0.85 ± 0.05. The model used in Ref. <cit.> is essentially the same as here, but the data set of Ref. <cit.> is of smaller size and of lower numerical precision. However, we note the possibility that the weak topological indices may change the critical behavior of the Anderson transition <cit.>. We have resummed the series for the β-function in such a way that Eq. (<ref>) is satisfied. This resummation means that in the localised regime the β-function will behave like A ln g up to a constant.
It would then seem more natural to set A=1 rather than A=2. However, the former choice does not yield the correct limiting behavior, Eq. (<ref>). For reference, we also tabulate in Table <ref> the estimates of the critical exponents calculated from the truncated β-function series without resummation and from the Borel-Padé analysis with A=1. Without resummation, we obtain estimates that violate the Chayes inequality ν ≥ 2/d <cit.>. With A=1, the estimates satisfy the Chayes inequality but are in poorer agreement with the numerical estimates than those obtained with A=2. § SUMMARY AND DISCUSSION In this paper, we have studied the Anderson transition in the BdG symmetry classes both analytically and numerically. We applied the Borel-Padé resummation method to the known perturbative results for the NLσM to estimate the critical exponents in 3D and 4D. We also reported numerical simulations of class DIII in 3D, and of classes C and CI in 4D, and compared the results of the resummation method with the results of these simulations and with previously published work. We find that the Borel-Padé analysis provides estimates of the critical exponent in reasonable agreement with the numerical estimates, provided the limiting behaviour of Eq. (<ref>) is imposed during the resummation. In principle, the NLσM theory of Anderson localization and its renormalization analysis in d=2+ϵ dimensions are valid only when ϵ is small, i.e., when the Anderson transition occurs at weak disorder. Nonetheless, our results show that the perturbative β-functions can provide useful information concerning critical properties in 3D and 4D. The estimates of the critical exponents in the BdG symmetry classes based on the Borel-Padé resummation method with the assumption of an infinite upper critical dimension match the numerical results better. This suggests that the upper critical dimension d_u may be infinite for Anderson localization in the BdG symmetry classes. Previous theoretical works have argued that in noncompact NLσMs the upper critical dimension is infinite <cit.>, which seems to be consistent with the numerical results and the Borel-Padé estimates in this work. Further theoretical efforts are needed to confirm these observations. Recently, it has been pointed out that the NLσM characterizes the measurement-induced phase transition in quantum circuits <cit.>. This scenario involves a replica number N equal to 1. The resummation method discussed in this paper is also applicable to that case, allowing for the prediction of critical exponents in quantum circuit systems. Acknowledgments We thank Ryuichi Shindou, Ferdinand Evers and Alexander D. Mirlin for fruitful discussions. T.W. was supported by the National Basic Research Programs of China (Grant No. 2019YFA0308401) and the National Natural Science Foundation of China (Grants No. 11674011 and No. 12074008). Z.P. was supported by the National Natural Science Foundation of China (No. 12147104). T.O. and K.S. were supported by JSPS KAKENHI Grant No. 19H00658, and T.O. was supported by JSPS KAKENHI Grant No. 22H05114.
http://arxiv.org/abs/2307.02098v1
20230705081923
JWST detection of heavy neutron capture elements in a compact object merger
[ "A. Levan", "B. P. Gompertz", "O. S. Salafia", "M. Bulla", "E. Burns", "K. Hotokezaka", "L. Izzo", "G. P. Lamb", "D. B. Malesani", "S. R. Oates", "M. E. Ravasio", "A. Rouco Escorial", "B. Schneider", "N. Sarin", "S. Schulze", "N. R. Tanvir", "K. Ackley", "G. Anderson", "G. B. Brammer", "L. Christensen", "V. S. Dhillon", "P. A. Evans", "M. Fausnaugh", "W. -F. Fong", "A. S. Fruchter", "C. Fryer", "J. P. U. Fynbo", "N. Gaspari", "K. E. Heintz", "J. Hjorth", "J. A. Kennea", "M. R. Kennedy", "T. Laskar", "G. Leloudas", "I. Mandel", "A. Martin-Carrillo", "B. D. Metzger", "M. Nicholl", "A. Nugent", "J. T. Palmerio", "G. Pugliese", "J. Rastinejad", "L. Rhodes", "A. Rossi", "S. J. Smartt", "H. F. Stevance", "A. Tohuvavohu", "A. van der Horst", "S. D. Vergani", "D. Watson", "T. Barclay", "K. Bhirombhakdi", "E. Breedt", "A. A. Breeveld", "A. J. Brown", "S. Campana", "A. A. Chrimes", "P. D'Avanzo", "V. D'Elia", "M. De Pasquale", "M. J. Dyer", "D. K. Galloway", "J. A. Garbutt", "M. J. Green", "D. H. Hartmann", "P. Jakobsson", "P. Kerry", "D. Langeroodi", "J. K. Leung", "S. P. Littlefair", "J. Munday", "P. O'Brien", "S. G. Parsons", "I. Pelisoli", "A. Saccardi", "D. I. Sahman", "R. Salvaterra", "B. Sbarufatti", "D. Steeghs", "G. Tagliaferri", "C. C. Thöne", "A. de Ugarte Postigo", "D. A. Kann" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.CO" ]
]JWST detection of heavy neutron capture elements in a compact object merger [1,2]Andrew [email protected] 3]Benjamin P. Gompertz 4,5]Om Sharan Salafia 6,7,8]Mattia Bulla 9]Eric Burns 10,11]Kenta Hotokezaka 12,13]Luca Izzo 14,15]Gavin P. Lamb 1,16,17]Daniele B. Malesani 3]Samantha R. Oates 1,4]Maria Edvige Ravasio 18]Alicia Rouco Escorial 19]Benjamin Schneider 20,21]Nikhil Sarin 21]Steve Schulze 15]Nial R. Tanvir 2]Kendall Ackley 22]Gemma Anderson 16,17]Gabriel B. Brammer 16,17]Lise Christensen 23,24]Vikram S. Dhillon 15]Phil A. Evans 19,25]Michael Fausnaugh 26,27]Wen-fai Fong 28]Andrew S. Fruchter 29,30,31,32]Chris Fryer 16,17]Johan P. U. Fynbo 1]Nicola Gaspari 16,17]Kasper E. Heintz 12]Jens Hjorth 33]Jamie A. Kennea 34,35]Mark R. Kennedy 1,36]Tanmoy Laskar 37]Giorgos Leloudas 38,39]Ilya Mandel 40]Antonio Martin-Carrillo 41,42]Brian D. Metzger 43]Matt Nicholl 26,27]Anya Nugent 44]Jesse T. Palmerio 45]Giovanna Pugliese 26,27]Jillian Rastinejad 46]Lauren Rhodes 47]Andrea Rossi 43,46]Stephen J. Smartt 46,48]Heloise F. Stevance 49]Aaron Tohuvavohu 32]Alexander van der Horst 44]Susanna D. Vergani 16,17]Darach Watson 50]Thomas Barclay 28]Kornpob Bhirombhakdi 51]Elmé Breedt 52]Alice A. Breeveld 23]Alexander J. Brown 4]Sergio Campana 1]Ashley A. Chrimes 4]Paolo D'Avanzo 53,54]Valerio D'Elia 55]Massimiliano De Pasquale 23]Martin J. Dyer 38,39]Duncan K. Galloway 23]James A. Garbutt 56]Matthew J. Green 57]Dieter H. Hartmann 58]Páll Jakobsson 23]Paul Kerry 12]Danial Langeroodi 59,60,39]James K. Leung 23]Stuart P. Littlefair 2,61]James Munday 15]Paul O'Brien 23]Steven G. Parsons 2]Ingrid Pelisoli 44]Andrea Saccardi 23]David I. Sahman 62]Ruben Salvaterra 4]Boris Sbarufatti 2,39]Danny Steeghs 4]Gianpiero Tagliaferri 63]Christina C. Thöne 64]Antonio de Ugarte Postigo 65]David Alexander Kann [1]Department of Astrophysics/IMAPP, Radboud University, 6525 AJ Nijmegen, The Netherlands [2]Department of Physics, University of Warwick, Coventry, CV4 7AL, UK [3]Institute for Gravitational Wave Astronomy and School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK [4]INAF - Osservatorio Astronomico di Brera, Via E. 
Bianchi 46, I-23807, Merate (LC), Italy [5]INFN - Sezione di Milano-Bicocca, Piazza della Scienza 2, I-20146, Milano (MI), Italy [6]Department of Physics and Earth Science, University of Ferrara, via Saragat 1, I-44122 Ferrara, Italy [7]INFN, Sezione di Ferrara, via Saragat 1, I-44122 Ferrara, Italy [8]INAF, Osservatorio Astronomico d’Abruzzo, via Mentore Maggini snc, 64100 Teramo, Italy [9]Department of Physics & Astronomy, Louisiana State University, Baton Rouge, LA 70803, USA [10]Research Center for the Early Universe, Graduate School of Science, The University of Tokyo, Bunkyo, Tokyo 113-0033, Japan [11]Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan [12]DARK, Niels Bohr Institute, University of Copenhagen, Jagtvej 128, 2200 Copenhagen N, Denmark [13]INAF-Osservatorio Astronomico di Capodimonte, Salita Moiariello 16, 80131, Napoli, Italy [14]Astrophysics Research Institute, Liverpool John Moores University, IC2 Liverpool Science Park, Liverpool, L3 5RF, Liverpool, UK [15] School of Physics & Astronomy, University of Leicester, University Road, Leicester, LE1 7RH, UK [16]Cosmic Dawn Center (DAWN), Denmark [17]Niels Bohr Institute, University of Copenhagen, Jagtvej 128, 2200 Copenhagen N, Denmark [18]European Space Agency (ESA), European Space Astronomy Centre (ESAC), Camino Bajo del Castillo s/n, 28692 Villanueva de la Cañada, Madrid, Spain [19]Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA [20]Nordita, Stockholm University and KTH Royal Institute of Technology Hannes Alfvéns väg 12, SE-106 91 Stockholm, Sweden [21]The Oskar Klein Centre, Department of Physics, Stockholm University, AlbaNova, SE-106 91 Stockholm, Sweden [22]International Centre for Radio Astronomy Research, Curtin University, GPO Box U1987, Perth, WA 6845, Australia [23]Department of Physics and Astronomy, University of Sheffield, Sheffield, S3 7RH, United Kingdom [24]Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife, Spain [25]Department of Physics & Astronomy, Texas Tech University, Lubbock TX, 79410-1051, USA [26]Center for Interdisciplinary Exploration and Research in Astrophysics, Northwestern University, 1800 Sherman Ave., Evanston, 60208, IL, USA [27]Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, 60208-3112, IL, USA [28] Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 [29]Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM 87545 [30]Department of Astronomy, The University of Arizona, Tucson, AZ 85721 [31]Department of Physics and Astronomy, The University of New Mexico, Albuquerque, NM 87131 [32]Department of Physics, The George Washington University, Washington, DC 20052 [33]Department of Astronomy and Astrophysics, The Pennsylvania State University, 525 Davey Lab, University Park, PA 16802, USA [34]School of Physics, Kane Building, University College Cork, Cork, Ireland [35]Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, The University of Manchester, M13 9PL, UK [36]Department of Physics & Astronomy, University of Utah, Salt Lake City, UT 84112, USA [37]DTU Space, National Space Institute, Technical University of Denmark, Elektrovej 327, 2800 Kgs. 
Lyngby, Denmark [38]School of Physics and Astronomy, Monash University, Clayton, Victoria 3800, Australia [39]ARC Center of Excellence for Gravitational Wave Discovery – OzGrav [40]School of Physics and Centre for Space Research, University College Dublin, Belfield, Dublin 4, Ireland [41]Department of Physics and Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027, USA [42]Center for Computational Astrophysics, Flatiron Institute, 162 5th Ave, New York, NY 10010, USA [43]Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast, Belfast BT7 1NN, UK [44]GEPI, Observatoire de Paris, Université PSL, CNRS, 5 Place Jule Janssen, 92190 Meudon, France [45]Astronomical Institute Anton Pannekoek, University of Amsterdam, 1090 GE Amsterdam, The Netherlands [46]Department of Physics, University of Oxford, Keble Road, Oxford, OX1 3RH, UK [47]INAF-Osservatorio di Astrofisica e Scienza dello Spazio, Via Piero Gobetti 93/3, 40129 Bologna, Italy [48]Department of Physics, The University of Auckland, Private Bag 92019, Auckland, New Zealand [49]Department of Astronomy & Astrophysics, University of Toronto, Toronto, ON M5S 3H4 [50]NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771, USA [51]Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK [52]University College London, Mullard Space Science Laboratory, Holmbury St. Mary, Dorking, RH5 6NT, UK [53]ASI/SSDC, Via del Politecnico SNC, I-00133, Rome, Italy [54]INAF/OAR, Via Frascati 33, I-00040, Monteporzio Catone, Rome, Italy [55]University of Messina, Polo Papardo, Department of Mathematics, Physics, Informatics and Earth Sciences, via F.S. D'Alcontres 31, 98166 Messina, Italy [56]School of Physics and Astronomy, Tel-Aviv University, Tel-Aviv 6997801, Israel [57]Department of Physics and Astronomy & Clemson University, Clemson, SC 29634-0978 [58]Centre for Astrophysics and Cosmology, Science Institute, University of Iceland, Dunhagi 5, 107 Reykjavik, Iceland [59]Sydney Institute for Astronomy, School of Physics, The University of Sydney, NSW 2006, Australia [60]CSIRO Space and Astronomy, PO Box 76, Epping, NSW 1710, Australia [61]Isaac Newton Group of Telescopes, Apartado de Correos 368, E-38700 Santa Cruz de La Palma, Spain [62]INAF/IASF-MI, Via Alfonso Corti 12, I-20133, Milano, Italy [63]Astronomical Institute of the Czech Academy of Sciences, Fričova 298, 251 65 Ondřejov, Czech Republic [64]Artemis, Observatoire de la Côte d'Azur, Université Côte d'Azur, Boulevard de l'Observatoire, F-06304 Nice, France [65]Hessian Research Cluster ELEMENTS, Giersch Science Center, Max-von-Laue-Straße 12, Goethe University Frankfurt, Campus Riedberg, D-60438 Frankfurt am Main, Germany The mergers of binary compact objects such as neutron stars and black holes are of central interest to several areas of astrophysics, including as the progenitors of gamma-ray bursts (GRBs), sources of high-frequency gravitational waves and likely production sites for heavy element nucleosynthesis via rapid neutron capture (the r-process). These heavy elements include some of great geophysical, biological and cultural importance, such as thorium, iodine and gold. Here we present observations of the exceptionally bright gamma-ray burst GRB 230307A. We show that GRB 230307A belongs to the class of long-duration gamma-ray bursts associated with compact object mergers, and contains a kilonova similar to AT2017gfo, associated with the gravitational-wave merger GW170817. 
We obtained James Webb Space Telescope mid-infrared (mid-IR) imaging and spectroscopy 29 and 61 days after the burst. The spectroscopy shows an emission line at 2.15 microns which we interpret as tellurium (atomic mass A=130), and a very red source, emitting most of its light in the mid-IR due to the production of lanthanides. These observations demonstrate that nucleosynthesis in GRBs can create r-process elements across a broad atomic mass range and play a central role in heavy element nucleosynthesis across the Universe.

GRB 230307A was first detected by the Fermi Gamma-ray Burst Monitor (GBM) at 15:44:06 UT on 7 Mar 2023 <cit.>. It was an exceptionally bright gamma-ray burst with a duration of T_90∼ 35 s and a prompt fluence of (2.951 ± 0.004) × 10^-3 erg cm^-2 in the 10-1000 keV band <cit.> (Figure <ref>). These properties place this event at the peak of the distribution of the class of “long-soft" GRBs. The measured fluence makes it the second-brightest GRB ever detected <cit.>. In addition to Fermi, the burst was also detected by several other high-energy instruments (see Methods). The multiple detections enabled source triangulation by the InterPlanetary Network (IPN). The Neil Gehrels Swift Observatory (hereafter Swift) tiled the initial ∼2 sq. deg IPN region <cit.>, which revealed one candidate X-ray afterglow <cit.> consistent with the final 8 sq. arcmin IPN localization <cit.>. We obtained optical observations of the field of GRB 230307A with the ULTRACAM instrument, mounted on the 3.5m New Technology Telescope (NTT) at La Silla, Chile. Visual inspection of the images compared to those obtained with the Legacy Survey <cit.> revealed a new source coincident with the Swift X-ray source, and we identified it as the optical afterglow of GRB 230307A <cit.>. This identification was subsequently confirmed via imaging by several additional observatories <cit.>. The location was also serendipitously imaged by the Transiting Exoplanet Survey Satellite (TESS) from 3 days before to 3 days after the GRB <cit.>. Following the precise localisation, we obtained extensive follow-up observations, in the optical and near infrared with the Gemini South Telescope and the Very Large Telescope (VLT); in the X-ray with the Swift/XRT and the Chandra X-ray observatory; and in the radio with the Australia Telescope Compact Array (ATCA) and MeerKAT in South Africa. This campaign included spectroscopy with the VLT X-shooter instrument, as well as the Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph. The latter provides redshift information for many galaxies in the field, including, in particular, a bright spiral galaxy at z=0.0646 ± 0.0001 offset 30.2 arcseconds (38.9 kiloparsec in projection) from the burst position (Figure <ref>). Of the optically detected galaxies in the field, this is the one with the lowest probability of being an unrelated chance alignment, and hence is most likely to be the host (see also <cit.>). Our ground-based campaign included imaging from 1.4 to 41 days after the burst (see Supplementary Information Table <ref>). At 11 days, infrared observations demonstrated a transition from an early blue spectral slope to a much redder one, with i-K > 2.9 (AB). This extremely red colour appeared similar to the expectations for a kilonova, powered by the decay of unstable isotopes of heavy elements synthesised by rapid neutron capture within the ejecta produced during the merger of a neutron star and another compact object <cit.>.
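Taken together, the prompt fluence and the redshift of the candidate host quoted above already fix the energy scale of the event. The short astropy sketch below is a back-of-the-envelope estimate only (it neglects any k-correction and assumes the Planck cosmology adopted throughout).

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18

fluence = 2.951e-3 * u.erg / u.cm**2        # 10-1000 keV prompt fluence (GBM)
z = 0.0646                                   # redshift of the candidate host

d_L = Planck18.luminosity_distance(z)
E_iso = (4 * np.pi * d_L.to(u.cm) ** 2 * fluence / (1 + z)).to(u.erg)
print(f"E_iso ~ {E_iso:.1e}")                # of order a few x 10^52 erg
```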
Based on this detection, we requested James Webb Space Telescope (JWST) observations (GO 4434, 4445, PI Levan), which were initiated on 5 April 2023. At the first epoch (+28.9 days after GRB), we took 6-colour observations with the Near Infrared Camera (NIRCam) (Figure <ref>), as well as a spectrum with the Near Infrared Spectrograph (NIRSpec) covering 0.5-5.5 microns (Figure <ref>). The NIRCam observations reveal an extremely red source that is only weakly detected in the bluer bands, where F150W(AB) = 28.11 ± 0.12 mag, but rises sharply through the mid-IR to F444W(AB) = 24.4 ± 0.01 mag. The NIRSpec observations also exhibit this steep rise. A faint galaxy is detected in these data at z=3.87, offset approximately 0.3 arcseconds from the burst position. However, the burst's properties are inconsistent with an origin at this redshift, in particular because the implied isotropic equivalent energy release would exceed all known GRBs by an order of magnitude or more (see Supplementary Information). A second epoch of JWST observations was obtained approximately 61 days after the burst. These observations showed that the source had faded by 2.4 magnitudes in F444W, demonstrating a rapid decay expected in the kilonova scenario and effectively ruling out alternatives (see Supplementary Information). Some of the burst properties are remarkably similar to those of the bright GRB 211211A, which was also shown to be accompanied by a kilonova <cit.>. In particular, the prompt emission consists of a hard pulse lasting for ∼ 19 seconds, followed by much softer emission (Figure <ref>). The prompt emission spectrum is well modelled by a double broken power-law with two spectral breaks moving rapidly through the gamma-ray band (see Methods), suggesting a synchrotron origin of the emission <cit.>. The X-ray afterglow is exceptionally faint, much fainter than most bursts when scaled by the prompt GRB fluence (see Figure <ref> and Supplementary Information). The development of the optical and IR counterpart is also similar to GRB 211211A, with an early blue colour and a subsequent transition to red on a timescale of a few days. In Figure <ref>, we plot the evolution of the counterpart compared with the kilonova AT2017gfo <cit.>, identified in association with the gravitational-wave detected binary neutron star merger, GW170817 <cit.>. AT2017gfo is the most rapidly evolving thermal transient ever observed; much more rapid than supernovae or even, for example, fast blue optical transients <cit.>. The counterpart of GRB 230307A appears to show near identical decline rates to AT2017gfo both at early times in the optical and IR, and later in the mid-IR <cit.>. These similarities are confirmed by a joint fit of afterglow and kilonova models to our multi-wavelength data (see Supplementary Information). Our JWST observations rule out supernovae: for any redshift z<1, a supernova would need to be more than 100 times fainter than the canonical GRB-supernova, SN 1998bw, to be compatible with our observations. Therefore, we conclude that GRB 230307A is a long-duration GRB formed from a compact object merger. This falls into a class that includes GRB 211211A <cit.>, GRB 060614 <cit.>, GRB 111005A <cit.> and GRB 191019A <cit.>, among others. The JWST observations provide a view of a kilonova in the mid-IR with high spatial resolution and sensitivity. 
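For reference, the NIRCam magnitudes quoted above translate into flux densities as follows (a convenience sketch; the magnitudes are the values quoted in the text and the conversion assumes the standard AB zero point).

```python
def ab_to_ujy(m_ab):
    """Flux density in microjansky for an AB magnitude (3631 Jy at m_AB = 0)."""
    return 10 ** ((23.9 - m_ab) / 2.5)

f150w, f444w = 28.11, 24.4                     # AB magnitudes at +28.9 d
print(f"F150W = {ab_to_ujy(f150w):.3f} uJy, F444W = {ab_to_ujy(f444w):.2f} uJy")
print(f"F150W - F444W colour = {f150w - f444w:.2f} mag")

# The 2.4 mag fade in F444W between the two epochs corresponds to a flux ratio of
print(f"fade factor = {10 ** (0.4 * 2.4):.1f}")
```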
On timescales of ∼ 30 days, it is apparent that the kilonova emits almost all of its light in the mid-infrared, beyond the limits of sensitive ground-based observations (effectively limited to below 2.5 microns). This is consistent with previous model predictions <cit.>, but has not previously been observationally confirmed. Late-time studies of such emission in the nebular phase must therefore be conducted in the mid-IR. Strikingly, despite its powerful and long-lived prompt emission that stands in stark contrast to GRB 170817A, the GRB 230307A kilonova is remarkably similar to AT 2017gfo. This was also the case for the long-lived merger GRB 211211A <cit.>, and suggests the kilonova signal, particularly the red component, is relatively insensitive to the GRB. Our NIRSpec spectrum shows a broad emission feature with a central wavelength of 2.15 microns, visible in both epochs of JWST spectroscopy. At longer wavelengths, the spectrum displays a slowly rising continuum up to 4.5 microns followed by either an additional feature or change of spectral slope. The colours of the counterpart at this time can be readily explained by kilonova models (see Supplementary Information Section <ref>). A similar emission-like feature is also visible in the later epochs of X-shooter observations of AT2017gfo <cit.>, measured at 2.1 microns by <cit.>. This further strengthens both the kilonova interpretation and the redshift measurement of GRB 230307A (Figure <ref>). We interpret this feature as arising from the forbidden [Te III] transition between the ground level and the first fine structure level of tellurium, with an experimentally-determined wavelength of 2.1044 microns <cit.>. The presence of tellurium is plausible, as it lies at the second peak in the r-process abundance pattern, which occurs at atomic masses around A ≈ 130 <cit.>. It should therefore be abundantly produced in kilonovae, as is seen in hydrodynamical simulations of binary neutron star mergers with nucleosynthetic compositions similar to those favoured for AT2017gfo <cit.>. Furthermore, the typical ionization state of Te in kilonova ejecta is expected to be Te III at this epoch because of the efficient radioactive ionization <cit.>. Tellurium has recently been suggested as the origin of the same feature in the spectrum of AT2017gfo <cit.>. A previous study <cit.> also identified this tellurium transition and noted that the observed feature is most likely two blended emission lines. However, alternative transitions from heavy r-process elements have been considered for this feature <cit.>. Tellurium can also be produced via the slower capture of neutrons in the s-process. Indeed, this line is also seen in planetary nebulae <cit.>. The detection of [Te III] 2.1 μ m extends on the earlier detection of strontium, a first r-process peak element, in the early time photospheric emission of AT2017gfo <cit.>. The mass of Te III estimated from the observed line flux is ∼ 10^-3M_⊙ (see Supplementary Information <ref>). Although weaker, we also note that the spectral feature visible at 4.5 microns is approximately consistent with the expected location of the first peak element selenium and the near-third peak element tungsten <cit.>. 
Detailed spectral fitting at late epochs is challenging because of the breakdown of the assumptions regarding local thermodynamic equilibrium (LTE) which are used to predict kilonova spectra at earlier ages, as well as fundamental uncertainties in the atomic physics and electron transitions in the highly complex electron shells of r-process elements. However, these observations provide a calibration sample for informing future models. The red continuum emission indicates a large opacity in the mid-IR at low temperatures, e.g., ∼ 10 cm^2g^-1 at ∼ 700 K. Since the Planck mean opacity under LTE is expected to be less than 1 cm^2g^-1 at ≲ 1000 K <cit.>, this large opacity may suggest that lanthanides are abundant in the ejecta. Indeed, it has been shown that non-LTE effects can increase the lanthanide opacity in mid-IR at late times <cit.>. Therefore, systematic studies of non-LTE opacity are necessary to answer the question whether lanthanides are the origin of the red emission at this epoch. A fit to our combined MUSE + JWST data for the host galaxy (Supplementary Information) suggests a relatively low mass (∼ 2.5 × 10^9 M_⊙) dominated by an older ∼ 10 Gyr population, but with a second more recent burst of star formation. These properties are entirely consistent with the population of short GRB host galaxies <cit.>. The host normalised offset of the burst from the host galaxy places it in the top 10% of those seen for short GRBs <cit.>. The offset could readily be achieved by a binary with a velocity of a few hundred km s^-1 and a merger time of hundreds of millions of years. Alternatively, in the second epoch of JWST observations, a faint source is detected in the F150W images at the location of the transient. This may represent continued emission from the transient. However, its absolute magnitude of M_F150W ∼ -8.5 is comparable to the absolute magnitude of globular clusters in which dynamical interactions could be at play to create merging systems at enhanced rates <cit.>. Future observations should readily distinguish between a fading afterglow or underlying cluster. It is striking that GRB 230307A is an extremely bright GRB, with only the exceptional GRB 221009A being brighter <cit.>. Of the ten most fluent Fermi/GBM GRBs, two are now associated with kilonovae (230307A and 211211A), three are associated with supernovae, and the nature of the remaining five appears unclear (see Table <ref>). For bright GRBs, there may be a significant contribution from mergers. Indeed, such a conclusion can also be reached by considering the energetics. Both GRB 230307A and GRB 211211A have isotropic equivalent energies of E_iso > 10^51 erg. The majority of local GRBs for which the connection between GRBs and core-collapse supernovae has been established are much less energetic (typically E_iso > 10^49-50 erg) and it has been suggested that they represent a separate population powered via different processes <cit.>. For more energetic bursts in the local Universe (where supernovae can still readily be discovered) the fraction of long GRBs with and without supernovae appear similar (see Supplementary Information). If a substantial number of long GRBs are associated with compact object mergers, they provide an essential complement to gravitational-wave (GW) detections. Firstly, joint GW-GRB detections, including long GRBs, can push the effective horizons of GW detectors to greater distances and provide much smaller localisations. 
In the case of GRB 230307A, the distance of 300 Mpc could have provided a robust detection in gravitational waves for the relevant O4 sensitivity <cit.>. Secondly, long GRBs can be detected without GW detectors, providing a valuable route to enhancing the number of kilonova detections. Thirdly, JWST can detect kilonova emission at redshifts substantially beyond the horizons of the current generation of GW detectors, enabling the study of kilonovae across a greater volume of the Universe. The duration of the prompt γ-ray emission in these mergers remains a challenge to explain. In particular, the natural timescales for emission in compact object mergers are much shorter than the measured duration of GRB 230307A. Previously suggested models that may also explain GRB 230307A include magnetars <cit.>, black hole - neutron star mergers <cit.>, or even neutron star - white dwarf systems <cit.>. Recent results have also shown that the jet timescale does not directly track the accretion timescale in compact object mergers, and that long GRBs may be created from very short lived engines <cit.>, and hence from binary neutron star mergers without magnetars. There is evidence that the kilonova in GRB 230307A produced elements across a wide range of atomic mass. The detection of second peak elements in the spectrum of a kilonova demonstrates that nuclei with atomic masses around A ∼ 130 are being created in the mergers of compact objects. Many second peak elements have important biological roles. For example iodine is essential for mammals and may have been used by the single cell Last Universal Common Ancestor <cit.>. The creation of these elements in compact object mergers, which can have long delay times, may have important consequences for the time at which certain evolutionary channels become plausible. § METHODS § OBSERVATIONS Below we outline the observational data that were used in this paper. Magnitudes are given in the AB system unless stated otherwise. We utilize cosmology resulting from the Planck observations <cit.>. All uncertainties are given at the 1σ level unless explicitly stated. §.§ gamma-ray observations GRB 230307A was first detected by Fermi/GBM and GECAM at 15:44:06 UT on 7 Mar 2023 <cit.>. It had a duration of T_90∼ 35s and an exceptionally bright prompt fluence of (2.951 ± 0.004) × 10^-3 erg cm^-2 <cit.>. The burst fell outside of the coded field-of-view of the Swift Burst Alert Telescope (BAT), and so did not receive a sub-degree localisation despite a strong detection. However, detections by Swift, GECAM <cit.>, STIX on the Solar Orbiter <cit.>, AGILE <cit.>, ASTROSAT <cit.>, GRBalpha <cit.>, VZLUSAT <cit.>, Konus-WIND <cit.> and ASO-HXI <cit.> enabled an enhanced position via the InterPlanetary Network to increasingly precise localisations of 1.948 deg^2 <cit.>, 30 arcmin^2 <cit.>, and ultimately to 8 arcmin^2 <cit.>. This was sufficiently small to enable tiling with Swift and ground-based telescopes. §.§.§ Fermi/GBM data analysis In Figure <ref>, we plot the light curve of GRB 230307A as seen by the Fermi/GBM in several bands, built by selecting Time Tagged Event (TTE) data, binned with a time resolution of 64 ms. The highlighted time interval of 3–7 s after trigger are affected by data loss due to the bandwidth limit for TTE data <cit.>. For the spectral analysis, we made use of the CSPEC data, which have 1024 ms time resolution. Data files were obtained from the online archive[<https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermigbrst.html>]. 
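The light-curve construction described above amounts to histogramming the photon arrival times; a minimal sketch (with a random placeholder event list standing in for the actual TTE data) is shown below.

```python
import numpy as np

bin_width = 0.064                                        # 64 ms bins, as in the text
rng = np.random.default_rng(2)
tte_times = np.sort(rng.uniform(-5.0, 45.0, 200_000))    # placeholder arrival times (s)

edges = np.arange(tte_times[0], tte_times[-1] + bin_width, bin_width)
counts, _ = np.histogram(tte_times, bins=edges)
rate = counts / bin_width                                # counts per second in each bin
t_mid = 0.5 * (edges[:-1] + edges[1:])
print(t_mid[:3], rate[:3])
```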
Following the suggestion reported by the Fermi Collaboration <cit.>, we analysed the data detected by NaI 10 and BGO 1, which had a source viewing angle less than 60^∘, and excluded the time intervals affected by pulse pile-up issues (from 2.5 s to 7.5 s). The data extraction was performed with the public software gtburst, while data were analysed with Xspec. The background, whose time intervals have been selected before and after the source, was modelled with a polynomial function whose order is automatically found by gtburst and manually checked. In the fitting procedure, we used inter–calibration factors among the detectors, scaled to the only NaI analysed and free to vary within 30%. We used the PG-Statistic, valid for Poisson data with a Gaussian background. The best-fit parameters and their uncertainties were estimated through a Markov Chain Monte Carlo (MCMC) approach. We selected the time intervals before and after the excluded period of 2.5-7.5 s due to instrumental effects. In particular, we extracted 2 time intervals from 0 to 2.5 s (1.25 s each) and 14 time intervals from 7.5 s to 40.5 (bin width of 2 s, except the last two with integration of 5 s to increase the signal-to-noise ratio), for a total of 16 time intervals. We fitted the corresponding spectra with the two smoothly broken power-law (2SBPL) function <cit.>, which has been shown to successfully model the synchrotron-like spectral shape of bright long GRBs, including the merger-driven GRB 211211A <cit.>. From our spectral analysis we found that all spectra up to ∼ 20 s are well modelled by the 2SBPL function, namely they are described by the presence of two spectral breaks inside the GBM band (8 keV–40 MeV). In particular, in the time intervals between 7.5 and 19.5 s, the low-energy break E_break is coherently decreasing from 304.3_-2.6^+5.2 keV down to 52.1_-5.1^+4.3 keV, and the typical ν F_ν peak energy E_peak is also becoming softer, moving from ∼ 1 MeV to 450 keV. The spectral indices of the two power-laws below and above the low-energy break are distributed around the values of -0.82 and -1.72, which are similar to the predictions for synchrotron emission in marginally fast-cooling regime (i.e. -2/3 and -3/2). This is consistent with what has been found in GRB 211211A <cit.>. We notice, however, that in all spectra the high-energy power-law above E_peak is characterised by a much softer index (with a mean value of -4.10 ± 0.24) with respect to the value of ∼ -2.5 typically found in Fermi GRBs. This suggests that the spectral data might require a cut-off at high energy, although further investigations are needed to support this. From 19.5 s until 40.5 s (the last time interval analysed), all the break energies are found to be below 20 keV, close to the GBM low energy threshold. In the same time intervals, the peak energy E_peak decreases from 682.4_-6.1^+3.2 to 123.1_-4.9^+5.4 keV, and the index of the power-law below the peak energy is fully consistent (mean value of -1.45 ± 0.06) with the synchrotron predicted value of -1.5. §.§ Optical observations §.§.§ NTT - Afterglow discovery Following the refinement of the IPN error box to an area of 30 arcmin^2 <cit.>, we obtained observations of the field of GRB 230307A with the ULTRACAM instrument <cit.>, mounted on the 3.5m New Technology Telescope (NTT) at La Silla, Chile. The instrument obtains images in 3 simultaneous bands, and is optimised for short exposure, low dead-time observations <cit.>. 
We obtained 10 × 20 s exposures in two pointings in each of the Super SDSS u, g and r, bands (where the Super SDSS bands match the wavelength range of the traditional SDSS filters but with a higher throughput; ). The observations began at 01:53:21 UT on 2023-03-09, approximately 34 hr after the GRB. The images were reduced via the HIPERCAM pipeline <cit.> using bias and flat frames taken on the same night. Visual inspection of the images compared to those obtained with the Legacy Survey <cit.> revealed a new source coincident with an X-ray source identified via Swift/XRT observations <cit.>, and we identified it as the likely optical afterglow of GRB 230307A <cit.>. The best available optical position of this source (ultimately measured from our JWST observations, see below) is RA(J2000) = 04:03:26.02, Dec(J2000) = -75:22:42.76, with an uncertainty of 0.05 arcseconds in each axis. The IPN error box and the footprint of the ULTRACAM observations are shown in Figure <ref>. This identification was subsequently confirmed via observations from a number of additional observatories, including <cit.>. We acquired two further epochs of observations with ULTRACAM on the following nights with 10 × 20s exposures in the Super SDSS u, g and i, bands. Aperture photometry of the source is reported in Table <ref>, and is reported relative to the Legacy survey for the gri bands, and to SkyMapper for the u-band. §.§.§ TESS The prompt and afterglow emission of GRB 230307A was detected by TESS, which observed the field continuously from 3 days before the Fermi trigger to 3 days after at a cadence of 200 s <cit.>. A reference image was subtracted from the observations to obtain GRB-only flux over this period. The measured flux in the broad TESS filter (600nm – 1000nm) is corrected for Galactic extinction and converted to the I_ c band assuming a power-law spectrum with F ∝ν^-0.8. We then bin the light curve logarithmically, taking the mean flux of the observations in each bin and converting to AB magnitudes. A systematic error of 0.1 magnitudes was added in quadrature to the measured statistical errors to account for the uncertainties in the data processing. These data are presented in Table <ref>. §.§.§ Swift/UVOT The Swift Ultra-violet/Optical Telescope <cit.> began observing the field of GRB 230307A ∼84.6 ks after the Fermi/GBM trigger <cit.>. The source counts were extracted using a source region of 5 arcsec radius. Background counts were extracted using a circular region of 20 arcsec radius located in a source-free part of the sky. The count rates were obtained from the image lists using the Swift tool uvotsource. A faint catalogued unrelated source also falls within the 5 arcsec radius, this will affect the photometry, particularly at late times. We, therefore, requested a deep template image in white in order to estimate the level of contamination. We extracted the count rate in the template image using the same 5 arcsec radii aperture. This was subtracted from the source count rates to obtain the afterglow count rates. The afterglow count rates were converted to magnitudes using the UVOT photometric zero points <cit.>. §.§.§ Gemini We obtained three epochs of K-band observations using the FLAMINGOS-2 instrument on the Gemini-South telescope. These observations were reduced through the DRAGONS pipeline to produce dark and sky-subtracted and flat-fielded images <cit.>. 
At the location of the optical counterpart to GRB 230307A, we identify a relatively bright K-band source in the first and second epochs, with only a upper limit in epoch 3. We report our photometry, performed relative to secondary standards in the VISTA hemisphere survey <cit.>, in Table <ref>. §.§.§ VLT imaging We carried out observations of the GRB 230307A field with the 8.2-m VLT telescopes located in Cerro Paranal, Chile. The observations were obtained with the FORS2 camera (mounted on the Unit Telescope 1, UT1, ANTU) in B, R, I and z bands at multiple epochs, and with the HAWK-I instrument (mounted on the Unit Telescope 4, UT4, Yepun) in the K band at one epoch. All images were reduced using the standard ESO (European Southern Observatory) Reflex pipeline <cit.>. The source was detected in the FORS2 z-band image at ∼6.4 days after the Fermi/GBM detection. A single r'-band observation of the GRB 230307A was also executed with the 2.6m VLT Survey Telescope (VST) after 2.37 days from the GRB discovery. In later observations the source was not detected (see bottom right panels of Fig. <ref>) and the upper limit values at 3σ level are reported in Table <ref>. §.§.§ VLT spectroscopy To attempt to measure the redshift of GRB 230307A and the nearby candidate host galaxies, we obtained spectroscopy with the VLT utilising both the X-shooter and MUSE instruments, mounted respectively on the Unit Telescope 2 (UT2, Kueyen) and on UT4 (Yepun). X-shooter spectroscopy, covering the wavelength range 3000–22000 Å was undertaken on 2023-03-15. Observations were taken at a fixed position angle, with the slit centred on a nearby bright star. X-shooter data have been reduced with standard esorex recipes. Given that only two of the four nod exposures were covering the GRB position, resulting in a total exposure time of 2400s on-source, we have reduced each single exposure using the stare mode data reduction. Then, we have stacked the two 2D frames covering the GRB position using dedicated post-processing tools developed in a python framework <cit.>. We further obtained observations with the MUSE integral field unit on 2023-03-23. The MUSE observations cover multiple galaxies within the field, as well as the GRB position, and cover the wavelength range 4750-9350 Å. MUSE data have been reduced using standard esorex recipes embedded within a single python script that performs the entire data reduction procedure. Later, the resulting datacube has been corrected for sky emission residuals using ZAP <cit.>. The MUSE observations reveal the redshifts for a large number of galaxies in the field, including a prominent spiral G1 at z=0.0646 <cit.> and a group of galaxies, G2, G3 and G4 at z=0.263. §.§ X-ray afterglow Swift began tiled observations of the IPN localisation region with its X-ray Telescope <cit.> at 12:56:42 on 8 Mar 2023[https://www.swift.ac.uk/xrt_products/TILED_GRB00110/https://www.swift.ac.uk/xrt_products/TILED_GRB00110/] <cit.>. XRT made the first reported detection of the afterglow (initially identified as `Source 2') with a count rate of 0.019 ± 0.004 cts ^-1 <cit.> and later confirmed it to be fading with a temporal power-law index of 1.1^+0.6_-0.5 <cit.>. XRT data were downloaded from the UK Swift Science Data Centre <cit.>. We further obtained observations with the Chandra X-ray observatory (programme ID 402458: PI Fong/Gompertz). A total of 50.26 ks (49.67 ks of effective exposure) of data were obtained in three visits between 31 March 2023 and 2 April 2023. 
The source was placed at the default aim point on the S3 chip of the ACIS detector. At the location of the optical and X-ray afterglow of GRB 230307A, we detect a total of 12 counts, with an expected background of ∼ 1, corresponding to a detection of the afterglow at >5 σ based on the photon statistics of <cit.>. To obtain fluxes, we performed a joint spectral fit of the Chandra and Swift/XRT data. The best fitting spectrum, adopting uniform priors on all parameters, is a power law with a photon index of Γ=2.50^+0.30_-0.29 when fitting with a Galactic N_H = 1.26 × 10^21 cm^-2 <cit.> and zero intrinsic absorption (neither XRT nor Chandra spectra have sufficient signal to noise to constrain any intrinsic absorption component). The resultant flux in the 0.3 – 10 keV band is F_X(1.7 d) = 4.91_-0.79^+0.89× 10^-13 erg cm^-2 s^-1 during the XRT observation and F_X(24.8 d) = 1.19_-0.62^+0.87× 10^-14 erg cm^-2 s^-1 during the Chandra observation. Due to the low count number, the Chandra flux posterior support extends to considerably below the reported median, with the 5th percentile being as low as F_X,5th = 3× 10^-15 erg cm^-2 s^-1. If a uniform-in-the-logarithm prior on the flux were adopted, this would extend to even lower values. Chandra and XRT fluxes are converted to 1 keV flux densities using the best fit spectrum (Table <ref>). §.§ ATCA Following the identification of the optical afterglow <cit.>, we requested target of opportunity observations of GRB 230307A (proposal identification CX529) with the Australia Telescope Compact Array (ATCA) to search for a radio counterpart. These data were processed using Miriad <cit.>, which is the native reduction software package for ATCA data using standard techniques. Flux and bandpass calibration were performed using PKS 1934–638, with phase calibration using interleaved observations of 0454-810. The first observation took place on 2023-03-12 at 4.46 d post-burst, which was conducted using the 4 cm dual receiver with frequencies centered at 5.5 and 9 GHz, each with a 2 GHz bandwidth. The array was in the 750C configuration[https://www.narrabri.atnf.csiro.au/operations/array_configurations/configurations.html] with a maximum baseline of 6 km. A radio source was detected at the position of the optical afterglow at 9 GHz with a flux density of 92 ± 22μJy but went undetected at 5.5 GHz (3σ upper limit of 84μJy). A further two follow-up observations were also obtained swapping between the 4 cm and 15 mm dual receivers (the latter with central frequencies of 16.7 and 21.2 GHz, each with a 2 GHz bandwidth). During our second epoch at 10.66 d we detected the radio counterpart again, having become detectable at 5.5 GHz with marginal fading at 9 GHz. By the third epoch, the radio afterglow had faded below detectability. We did not detect the radio transient at 16.7 or 21.2 GHz in either epoch. All ATCA flux densities are listed in Table <ref>. §.§ MeerKAT We were awarded time to observe the position of GRB 230307A with the MeerKAT radio telescope via a successful Director’s Discretionary Time proposal (PI: Rhodes, DDT-20230313-LR-01). The MeerKAT radio telescope is a 64-dish interferometer based in the Karoo Desert, Northern Cape, South Africa <cit.>. Each dish is 12 m in diameter and the longest baseline is ∼8 km allowing for an angular resolution of ∼7 arccsec and a field of view of 1 deg^2. The observations we were awarded were made at both L and S-band. GRB 230307A was observed over three separate epochs between seven and 41 days post-burst. 
The first two observations were made at both L and S4-band (the highest frequency of the five S-band sub-bands), centred at 1.28 and 3.06 GHz with a bandwidth of 0.856 and 0.875 GHz, respectively. Each observation spent two hours at L-band and 20 minutes at S4-band. The final observation was made only at S4-band with one hour on target. Further details on the new MeerKAT S-band receiver are provided in the paper by MPIfR. Each observation was processed using oxkat, a series of semi-automated Python scripts designed specifically to deal with MeerKAT imaging data <cit.>. The scripts average the data and perform flagging on the calibrators from which delay, bandpass and gain corrections are calculated and then applied to the target. The sources J0408-6545 and J0252-7104 were used as the flux and complex gain calibrators, respectively. Flagging and imaging of the target field are then performed. We also perform a single round of phase-only self-calibration. We do not detect a radio counterpart in any epoch in either band. The rms noise in the field was measured using an empty region of the sky and used to calculate 3σ upper limits which are given in Table <ref>. §.§ JWST observations We obtained two epochs of observations of the location of GRB 230307A with JWST. The first on 5 April 2023, with observations beginning at 00:16 UT (MJD=60039.01), 28.4 days after the burst (under programme GO 4434, PI Levan), and the second on 8 May 2023, 61.5 days after the burst (programme 4445, PI Levan). The observations were at a post-peak epoch because the source was not in the JWST field of regard at the time of the burst and only entered it on 2 April 2023. At the first epoch, we obtained observations in the F070W, F115W, F150W, F277W, F356W and F444W filters of NIRCam <cit.>, as well as a prism spectrum with NIRSpec <cit.>. In the second epoch we obtained NIRCam observations in F115W, F150W, F277W and F444W and a further NIRSpec prism observation. However, in the second epoch the prism observation is contaminated by light from the diffraction spike of a nearby star and is of limited use, in particular at the blue end of the spectrum. We therefore use only light redward of 1.8 microns. However, even here we should be cautious in interpreting the overall spectral shape. Nevertheless, the feature at 2.15 microns is visible in both the 29 and 61 day spectra. We reprocessed and re-drizzled the NIRCam data products to remove 1/f striping and aid point spread function recovery, with the final images having plate scales of 0.02 arcsec/pixel (blue channel) and 0.04 arcsec/pixel (red channel). In the NIRCam imaging we detect a source at the location of the optical counterpart of GRB 230307A. This source is weakly detected in all three bluer filters (F070W, F115W and F150W), but is at high signal-to-noise ratio in the redder channels (see Figure <ref>). The source is compact. We also identify a second source (H1), offset approximately 0.3 arcseconds from the burst location. This source is also only weakly detected, or undetected, in the bluer bands, and is brightest in the F277W filter. Because of the proximity of the nearby star and a contribution from diffraction spikes close to the afterglow position we model point spread functions for the appropriate bands using WebbPSF <cit.>, and then scale and subtract these from the star position. Photometry is measured in small (0.05 arcsec (blue) and 0.1 arcsec (red)) apertures and then corrected using tabulated encircled energy corrections.
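As an illustration of this final photometry step, a minimal sketch using photutils is given below; the cutout, source position, aperture radius and encircled-energy fraction are placeholders for illustration only, and the actual measurements were made on the re-drizzled NIRCam mosaics described above.

import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

# Placeholder PSF-subtracted cutout containing a single synthetic point source
rng = np.random.default_rng(0)
data = rng.normal(0.0, 0.01, size=(101, 101))
data[50, 50] += 5.0

aperture = CircularAperture([(50.0, 50.0)], r=2.5)   # small aperture (~0.1 arcsec at 0.04"/pix)
phot = aperture_photometry(data, aperture)
ee_fraction = 0.65                                   # placeholder tabulated encircled-energy fraction
total_flux = phot['aperture_sum'][0] / ee_fraction   # aperture flux corrected to "total"
print(total_flux)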
In addition to the direct photometry of the NIRCam images we also report a K-band point based on folding the NIRSpec spectrum (see below) through a 2MASS Ks filter. This both provides a better broadband SED and a direct comparison with ground-based K-band observations. Details of photometric measurements are shown in Table <ref>. For NIRSpec, we utilise the available archive-processed level 3 two-dimensional spectrum (Figure <ref>). In this spectrum we clearly identify the trace of the optical counterpart, which appears effectively undetected until 2 microns and then rises rapidly. We also identify two likely emission lines which are offset from the burst position. These are consistent with identification as Hα and [O iii] (4959/5007) at a redshift of z=3.87. Both of these lines lie within the F277W filter in NIRCam and support the identification of the nearby source as the origin of these lines. We extract the spectrum in two small (2 pixel) apertures. One of these is centred on the transient position, while the second is centred on the location of the emission lines. Since the offset between these two locations is only ∼ 0.2 arcseconds there is naturally some contamination of each spectrum with light from both sources, but this is minimised by the use of small extraction apertures. The resulting 1D spectra are shown in Figure <ref>. The counterpart is very red, with a sharp break at 2 microns and an apparent emission feature at 2.15 microns. The spectrum then continues to rise to a possible second feature (or a change in the associated spectral slope) at around 4.5 microns. § SUPPLEMENTARY INFORMATION § GRB 230307A IN CONTEXT §.§ Prompt emission GRB 230307A is an exceptionally bright GRB. It has the second highest fluence of any GRB observed in more than 50 years of GRB observations <cit.>. While it remains a factor of 50 less fluent than GRB 221009A, it is still a factor ∼ 2 brighter than GRB 130427A, the third brightest burst. Bursts with these extreme fluences are rare. In Figure <ref>, we plot the distribution of observed fluence for Fermi/GBM detected bursts. At the brighter end, the slope of the distribution is consistent with the expected -3/2 slope for a uniform distribution of sources. The extrapolation of this relation suggests that bursts like GRB 230307A should occur once every several decades. Notably, three bursts well above the extrapolation (GRB 130427A, GRB 230307A, GRB 221009A) may indicate that bright bursts arise more frequently than expected. However, observationally it is clear that GRB 230307A is, at least, a once-per-decade event. The prompt light curve of GRB 230307A (Figure <ref>) shows two distinct emission features: an initial episode of hard emission from the trigger until ≈ 18 s, then a softer episode from ≈ 19 s onwards. These distinct episodes of hard and soft emission are strongly reminiscent of the long-duration merger GRB 211211A, but the initial pulse complex is ∼ 50 per cent longer in GRB 230307A when compared to the ∼ 12 s duration seen in GRB 211211A <cit.>. The relative durations of the initial pulse complex in the two GRBs bear a striking resemblance to their relative time-averaged peak energies <cit.>. In GRB 211211A, substantial spectral evolution was seen to drive the light curve, and the underlying radiation mechanism was identified as fast-cooling synchrotron emission <cit.>.
The coherent development of the hardness ratio (Figure <ref>, lower) indicated similar spectral evolution in GRB 230307A, which the spectral analysis confirmed. Indeed, as described in Section <ref>, the time-resolved spectral analysis of the prompt emission revealed the presence of two spectral breaks in the GBM band, E_break and E_peak, coherently becoming softer from 7.5 s up to 19.5 s. Also, in this case, the spectral indices indicate synchrotron emission in the marginally fast-cooling regime. From 19.5 s onwards (approximately when the softer and dimmer emission episode starts), the low-energy break E_break is continuously approaching the lower limit of the GBM band (8 keV), presumably crossing it to enter the X-ray regime. Unfortunately, the lack of simultaneous observations in X-rays with another telescope, e.g. Swift/XRT, prevents us from fully tracing the evolution of the spectral break down to X-rays at later times, as was done for GRB 211211A. The time-averaged Fermi/GBM spectrum of GRB 230307A across the T_90 interval is best fit with a cutoff power-law with α = 1.07 ± 0.01 and cutoff energy 936 ± 3 keV <cit.>. From this, we calculate a hardness ratio (the ratio of the 50 - 300 keV photon flux to the 10 - 50 keV photon flux) of 0.88^+0.01_-0.02. This is higher than the value for 211211A (0.57) but comfortably within the 1σ distribution of hardness ratios for canonical long GRBs (i.e. with T_90 > 2 s) in the Fermi catalogue, which we calculate to be 0.66^+0.51_-0.29 from the data in <cit.>. Like GRB 211211A before it, GRB 230307A appears to have `typical' long GRB properties in terms of its time-averaged hardness ratio and its T_90. This strengthens the case for a significant number <cit.> of long-duration GRBs having been mistakenly identified as stellar collapse events. However, in some ways, GRB 230307A differs significantly from several of the other brightest GRBs. For example, the afterglow was relatively faint, while the burst was very bright. In Figure <ref>, we plot the prompt fluence in the 15-150 keV band against the X-ray afterglow brightness at 11 hours <cit.>. The general trend between the afterglow brightness and fluence is seen; the best-fit slope to this relation is approximately one. So, while there is substantial scatter, there is a direct proportionality between the fluence and the afterglow brightness. Notably, while the afterglow and prompt emission of GRB 221009A were exceptionally bright (after correcting for the heavy foreground extinction), they were in keeping with this relatively broad relationship. GRB 230307A is different. Here we extrapolate the X-ray flux to 11 hours based on the measured X-ray flux at ∼ 1 day and the decay slope. We also re-calculate the GRB 230307A fluence in the relevant 15-150 keV energy band for comparison to Swift/BAT. This burst is a notable outlier in the relation, with a faint X-ray flux for its extraordinary prompt brightness. The afterglow brightness depends both on the energy of the burst and the density of the interstellar medium; it is, therefore, possible that the location in this fluence – afterglow brightness plane is indicative of a low-density medium, which would be consistent with expectations for such a large GRB - host offset. It is also of interest that another burst in a similar location is GRB 211211A. This long burst has a clear signature of kilonova emission within its light curve. 
If GRB 230307A is a similar event, faint afterglows (relative to the prompt emission) may be an effective route for disentangling mergers from collapsars. To further compare the ratio between the X-ray brightness and the γ-ray fluence, we retrieve the X-ray light curve of all Swift-detected GRBs from the Swift Burst Analyser <cit.> and limit the sample to 985 long GRBs and 55 short GRBs with at least two XRT detections and measured BAT fluence. The fluences are taken from the Swift/BAT Gamma-Ray Burst Catalog[https://swift.gsfc.nasa.gov/results/batgrbcat/index_tables.htmlhttps://swift.gsfc.nasa.gov/results/batgrbcat/index_tables.html] <cit.> and represent the measurements from 15 to 150 keV integrated over the total burst duration. We add to this sample the GRBs 170817A <cit.> and 221009A <cit.>. For the former, we retrieve the X-ray light curve from <cit.> and use a γ-ray fluence of 2.4×10^-7  erg cm^-2 <cit.>. For the latter, we take the X-ray light curve from the Swift Burst Analyser and assume a fluence of 0.007  erg cm^-2 (corrected from the 1-10000 keV fluence in <cit.> to the 15-150 keV band). Following <cit.>, we resample the X-ray light curves and normalise them by the γ-ray fluence on a grid defined by the observed F_ X / Fluence ratios and the time-span probed by the data. If no data are available at a specific time of the grid, we linearly interpolate between adjacent observations but do not extrapolate any data. Hence the paucity of observations at later times reflects the last time at which sources were detected by the Swift/XRT. Short and long GRBs occupy the same part of the F_ X / Fluence vs time parameter space (Figure <ref>). In contrast, GRB 230307A has an unprecedentedly low F_ X / Fluence ratio that is almost 10-fold lower than the faintest GRBs at the same time. To emphasise the uniqueness of GRB 230307A, we also show in the same figure the Swift/BAT-detected GRBs 050925, 051105A, 070209, 070810B, 100628A, 130313A, 170112A that evaded detection with Swift/XRT. The limits on their F_ X / Fluence ratio (shown by downward pointing triangles in that figure) are consistent with the observed range of F_ X / Fluence ratios, ruling out a selection bias against GRBs with lower than usual F_ X / Fluence. Intriguingly, GRBs 080503, 191019A and 211211A had markedly low F_ X / Fluence ratios during the shallow decline phase of their X-ray light curves. Furthermore, GRB 211211A reached a value of 1.2×10^-9  s^-1 at 120 ks, comparable to GRB 230307A. §.§ Counterpart Evolution Although the afterglow of GRB 230307A was promptly detected thanks to TESS, this data was not available to the community for several days. Further follow-up was, therefore, much slower, and the counterpart was not discovered until the localisation was narrowed down to several sq. arcminutes, approximately 24 hours after the burst. The result is that the counterpart is poorly sampled (particularly in colour) during the early phases, while later observations suffer from typically modest signal-to-noise. The TESS observations detected a relatively bright (though not exceptional given the fluence of the burst) outburst, coincident with the prompt emission, likely peaking at I<15 <cit.>. The afterglow was much fainter, apparently no brighter than I=18 in the minutes to hours after the burst was detected. It was relatively flat during this period, with a power-law through the first to last TESS observations decaying as F(t) ∝ t^-0.2. 
The TESS and ground-based observations can be consistently modelled with a forward shock afterglow + kilonova (see Section <ref>). There are no simultaneous colours at the time of the first ground-based afterglow detections (1.4 days), although extrapolation of the r-band detection with ULTRACAM to the WHITE detection with the Swift/UVOT suggests a relatively red colour (WHITE-r = 1.6 ± 0.4) . However, such an interpretation is difficult due both to the large photometric errors and the width of the WHITE filter on the Swift/UVOT. Optical observations obtained multiple colours at an epoch ∼ 2.4 days post-burst. These show the afterglow to have a blue colour with g=22.35 ± 0.26, i=21.68 ± 0.09 and z=21.8 <cit.>. This is consistent with GRB afterglows in general(i.e F_ν∝ν^-β gives β≈ 1). Observations in the near infrared (NIR) were not undertaken until ∼ 10.4 days post-burst. However, these reveal a relatively bright K-band source. The inferred i-K(AB) > 2.9 at this epoch is very red. Interpreted as a change in the spectral slope, it is β≈ 2.5. The K-band light hence appears to be in significant excess with respect to the afterglow expectations based both on optical data and on the X-ray light curve. It is relevant to consider if such an excess could arise via extinction. However, this is not straightforward to explain. For a generic β=1 slope we expect i-K(AB) ≈ 1.1. At z=0, to obtain i-K(AB) = 2.9 would correspond to a foreground extinction of A_V ≈ 4. However, this would also predict g-i ≈ 3, which is entirely inconsistent with the earlier observations. This problem becomes more acute for higher redshifts, where the bluer bands probe increasingly into the UV. The IR excess becomes extremely prominent by the time of the JWST observations. At 28.5 days, the source is detected in all bands but is very faint in the NIRCam blue channel (F070W, F115W, F150W) and rises rapidly (in F_ν) through the redder bands (F277W, F356W, F444W). Expressed as a power-law, this is β≈ 3.1 in the 2-5 micron region, and β≈ 1 between 0.7-1.5 microns. This does not match the expectations for any plausible spectral break in a GRB afterglow or any plausible extinction (where one would expect the slope to steepen towards the blue). This strongly implies that the red excess seen in the K-band at ten days and with JWST at 28.5 days is some additional component. Indeed, in the JWST observations, the other component, beginning at around 2 microns, is very clearly visible in both photometry and spectroscopy. This component evolves exceptionally rapidly. In the K-band, the inferred decay rate from 11.5 to 28.5 days is ∼ t^-3.5 expressed as a power-law or ∼ 0.25 mag per day, if exponential. This is much faster than observed in GRB afterglows or supernovae. It is, however, consistent with the expectations for kilonovae. As shown in Figure 4, the overall evolution shows substantial similarity with AT2017gfo. To constrain the temporal and spectral evolution within a plausible physical model more accurately, we fit the multi-band photometry with afterglow and kilonova models. The outputs of these models are described in detail in section <ref>. §.§ Identification of the host galaxy Deep optical imaging of the field identifies several relatively bright galaxies in the vicinity of the sky position of GRB 230307A. Our preferred host galaxy is the brightest of these, which we denote as G1. It lies at z=0.065 and is offset 30 arcseconds (40 kpc in projection) from the location of the afterglow. 
Following the method of <cit.> this galaxy has a probability of chance alignment of P_ chance∼ 0.09 (see also <cit.>). Although this is not extremely low, and so is only suggestive of a connection to the transient, we note that i) the luminosity of the late time counterpart at this redshift is very similar to AT2017gfo and ii) the spectral feature seen at 2.1 microns in AT2017gfo matches with the emission feature seen in the JWST spectroscopy of GRB 230307A. This is a broad line, but assuming they have the same physical origin, they fix the redshift to the range 0.04 < z < 0.08. G1 is the only galaxy within this range in the field. The physical properties of this galaxy are outlined in section <ref>. Our MUSE observations provide redshifts for this galaxy and several others, also identifying a small group of galaxies (G2, G3, G4) at a common redshift of z=0.263. All of these galaxies have P_ chance values substantially greater than our preferred host. Furthermore, because of the larger redshifts, the implied offsets from GRB 230307A are ≫100kpc. This is larger than seen for any short GRB with a firmly identified host. We, therefore, disfavour these as plausible host galaxies for GRB 230307A. Deep JWST observations reveal no evidence of a directly underlying host galaxy for GRB 230307A, as would be expected if it had a collapsar origin. In particular, at late times, the faint source at the counterpart's location is consistent with a point-source (i.e. a subtraction of the PSF constructed by WebbPSF yields no significant residuals). However, we identify a faint galaxy, undetected in the blue and with F277W =27.9 ± 0.1, offset only 0.3 arcseconds from the burst position. We designate this galaxy H1. Our NIRSpec observations provide a redshift of z=3.87 for H1 based on the detection of [O III] (5007) and Hα. At this redshift, the offset is only ∼ 1.3 kpc. Although many z ∼ 4 galaxies are extremely compact <cit.>, it seems likely that some stellar population from this galaxy does extend under the burst position, and there may be marginal evidence for extension in this direction in the F444W image. However, this region is neither UV-bright nor an emission line region where one may expect to observe massive stars. The galaxy photometry, performed in 0.1-arcsecond apertures and subsequently corrected for encircled energy assuming point-source curves is F070W>29.0, F115W=28.4 ± 0.3, F150W=28.6 ± 0.4, F277W=27.9 ± 0.1, F444W=28.3 ± 0.1, and the galaxy is only robustly detected in the redder bands (see Figure 2). We note that because of the proximity of the afterglow, we use a smaller aperture than may be optimal, although the galaxy is also compact. We can estimate the probability of chance alignment of this source with the GRB position via various routes. In principle, one can use number counts of galaxies on the sky in the multiple bands. These have recently been updated based on the first observations with JWST to provide number counts in appropriate bands <cit.>. We find that P_ chance, following the approach of <cit.> to be in the range ∼ 3-6 % for F277W and F444W (with no bound in the filters where the galaxy is undetected). Alternatively, we also estimate the probability directly from the data. We extract sources within the field via Source Extractor to create a mask of objects within the field. In the brightest detection (F277W) approximately 5% of the image is covered with objects of equal or brighter magnitude to H1, and we note that the burst position is not contained within this mask. 
This suggests that in this particular field, P_ chance > 5%. The absolute magnitude of H1 is M_i ∼ -17.7, and the Hα star formation rate is approximately 1 M_⊙ yr^-1. The half-light radius of the galaxy is approximately 0.1 arcseconds (700 pc). Although limited information is available, these values are generally consistent with those of the long GRB population. The burst offset from its host galaxy is ∼ 2.5 half-light radii. This is large but within the range seen for long-GRBs <cit.>. In our X-shooter and MUSE observations there is no trace visible in 1D or 2D extractions at the source position, although a weak continuum is seen in the X-shooter spectrum when heavily binned. This is consistent with its faint magnitude at the time of the observations. At the location of Lyα at z=3.87 we place limits of F < 2.5 × 10^-17 erg s^-1 cm^-2 assuming an unresolved line. We also examined both spectra for any emission lines at other redshifts. This is worthwhile given the strong emission lines often seen in long GRB hosts <cit.>, which may make emission line redshifts possible, even if the host itself is undetected. However, despite deep observations, there are no visible emission lines consistent with no directly underlying host galaxy, consistent with a compact object merger, but not a collapsar. Unsurprisingly, there are also numerous faint galaxies in the JWST images. However, all of these have large P_ chance values, and we do not consider them plausible host galaxies. Taken a face value, the probability of chance alignment for G1 (our preferred host) and H1 (z=3.87) is similar. However, the luminosity, lightcurve evolution and spectroscopic feature at the redshift of G1 offer strong support for it as the host galaxy of GRB 230307A. Furthermore, there is no straightforward, reasonably viable physical model that could explain the burst's extreme properties at z=3.87. This scenario would require extreme energetics, exceptionally rapid evolution and yields unphysical outcomes in standard GRB or supernovae scenarios. We outline this in detail in section <ref>. §.§ Host galaxy properties To better understand the properties of G1, the likely GRB host galaxy, we performed a fit to both the MUSE spectrum and photometric measurements from the far-UV to the mid-IR. For the photometric measurements, we retrieved science-ready coadded images from the Galaxy Evolution Explorer (GALEX) general release 6/7 <cit.>, DESI Legacy Imaging Surveys (LS; <cit.>) data release 9, and re-processed WISE images <cit.> from the unWISE archive <cit.>[http://unwise.mehttp://unwise.me]. The unWISE images are based on the public WISE data and include images from the ongoing NEOWISE-Reactivation mission R7 <cit.>. We measured the brightness of the galaxy G1 using the Lambda Adaptive Multi-Band Deblending Algorithm in R (LAMBDAR <cit.>) and the methods described in <cit.>. We augment the SED with Swift/UVOT photometry in the u band and our 6-band JWST/NIRCAM photometry. The photometry on the UVOT images was done with uvotsource in HEASoft and an aperture encircling the entire galaxy. For JWST photometry, we used a 6-arcsec circular aperture, which allows us to gather all the observed light observed in JWST filters from the host galaxy. All measurements are summarised in Table <ref>. 
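For the model fits described below, the tabulated AB magnitudes are converted to flux densities; a minimal sketch of this (standard) conversion is shown here, with the quoted F277W magnitude of the neighbouring source H1 used purely as an example.

def ab_to_microjansky(m_ab):
    # AB system definition: m_AB = -2.5 log10(f_nu / 3631 Jy), i.e. f_nu[uJy] = 10^(-0.4 (m_AB - 23.9))
    return 10.0 ** (-0.4 * (m_ab - 23.9))

print(f"{ab_to_microjansky(27.9):.3f} uJy")   # F277W ~ 27.9 mag corresponds to ~0.025 uJy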
To derive the main physical properties of the host galaxy, such as its stellar mass, we employ two separate methodologies based on the photometric and spectroscopic data available for the host, and finally compare the results to assess the robustness of our conclusions. We first fit the multi-wavelength (0.1–4.4 μm) dataset using the python package <cit.>, which allows us to model the host galaxy spectrum starting from its main constituents, namely a set of stellar population base spectra, built from the Flexible Stellar Population Synthesis (FSPS) package <cit.>, and combined with a specific star-formation history (SFH) model. Moreover, we have also considered a fixed attenuation model based on the Calzetti <cit.> attenuation curve, and an additional nebular model originating from the gas component, which is built using the photo-ionization code <cit.>, and considering the FSPS stellar population as ionising sources. We have adopted a parametric SFH model, which is described by a delayed-exponential model where the star-formation rate varies as a function of time t = t_age - t_lt, with t_lt being the lookback time <cit.>, as ∝ (t/τ) exp(-t/τ), with τ being the e-folding time. Finally, we used the <cit.> ensemble sampler to reconstruct the posterior distribution. The results of the prospector analysis are shown in Fig. <ref>. We obtain a mass in living stars of M_* = 2.37 (+0.24,-0.35) × 10^9 M_⊙. The mass of all stars ever formed is 0.20 (+0.02,-0.04) dex larger. The light-weighted stellar age resulting from the fit is 1.13 (+1.49,-0.36) Gyr. An alternative to parametric SED fits is to use synthetic stellar population SEDs as templates and combine them to fit the galactic spectra (the underlying assumption being instantaneous star formation rather than continuous functions of time). We can use the spectral synthesis from the BPASSv2.2.2 <cit.> binary populations and create templates with hoki <cit.> that are compatible with the ppxf fitting package <cit.>, as described in <cit.>. Because SED fitting has a high level of degeneracies (see <cit.>), at first we do not fit all 13 BPASS metallicities at once with ppxf, as this can result in unphysical results (see discussion in ); instead we fit the metallicities individually to find which ones result in the best fits on their own. We find that a low Z (0.001) population and solar metallicity population (Z=0.014) result in decent fits, but the low metallicity population fails to predict a young stellar component that is seen in the images, whilst the solar metallicity fit fails to accurately match the Hβ and neighbouring absorption features in the blue part of the spectrum. So we then fit the galaxy simultaneously with Z=0.001 and Z=0.014 templates, and retrieve a good fit shown in Figure <ref> alongside the recovered SFH. We find evidence of three main stellar populations: >95% of the mass is found in lower metallicity (Z=0.001) stars with ages ranging from a few Gyr to 10 Gyr, with a peak of star formation around 5 Gyr; >4.7% of the mass originates from a solar metallicity population (Z=0.014) that formed around 400 Myr ago; finally a small fraction (<0.05%) of the stellar mass in the host originates from the star-forming regions with ages of a few Myr.
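For reference, a minimal sketch of the delayed-exponential SFH adopted in the prospector fit above is given below; the e-folding time used here is illustrative only (it is a fitted parameter, not a value quoted in this work), while the total formed mass follows roughly from the numbers above.

import numpy as np

def delayed_tau_sfr(t_gyr, tau_gyr, m_formed_msun):
    """SFR(t) ~ (t/tau) exp(-t/tau), normalised so the time integral equals m_formed_msun; returns Msun/yr."""
    return m_formed_msun * t_gyr / tau_gyr**2 * np.exp(-t_gyr / tau_gyr) / 1e9

t = np.linspace(0.1, 13.0, 6)             # time since the onset of star formation [Gyr]
print(delayed_tau_sfr(t, 1.0, 3.8e9))     # ~3.8e9 Msun formed in total, with an assumed tau = 1 Gyr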
The details of the age distributions and exact metallicity values can be model dependent, so we also fit the integrated galaxy spectrum with the single stellar population synthesis code STARLIGHT <cit.>, which uses stellar populations based on 25 different ages and six metallicity values <cit.>, and a Chabrier IMF <cit.>. The SFH retrieved by this method is more complex and would require odd configurations (including some high metallicities at old ages and low metallicities around 100 Myr, which is counter-intuitive, unless an inflow of pristine gas were to trigger a burst of star formation), but it also finds that overall the galaxy is dominated by an old population with lower metallicity and has a younger component at higher metallicity. In Figure <ref> we show a comparison of the STARLIGHT and BPASS fits in the bottom left panel and see that they are very similar, despite STARLIGHT containing 6 different metallicities and assuming solely single star populations. This highlights the level of degeneracy we face when performing galaxy SED fits. We leave further comparisons to a follow-up study dedicated to the host and the progenitor populations of GRB 230307A, where we will also present detailed, spatially resolved, fits to the datacube including its kinematics. For now we use the BPASS integrated fits to infer the stellar mass and the star formation rate of the host of GRB 230307A, as the fit and SFH are more convincing than those obtained with STARLIGHT. We find that there are currently M_*=1.65×10^9 M_⊙ of living stars (corresponding to 3.1 ×10^9 M_⊙ at ZAMS) in G1. Using the nebular component retrieved from subtracting the fit of the stellar component from the observed data, we can also estimate the star formation rate and metallicity. From the Hα feature we estimate that the SFR is 5.47± 0.30 × 10^-1 M_⊙ yr^-1 using the Kennicutt formulation <cit.>, and using the N2 index, in the CALIFA formulation <cit.>, we infer an oxygen abundance of 8.20 ± 0.16 (12 + log(O/H)). There are qualitative similarities between the host of GRB 230307A and NGC 4993 <cit.>, the host of the first confirmed kilonova (they are both dominated by an older stellar population and include a younger, more metal-rich component), but there are some key differences: NGC 4993 was a lenticular galaxy without a clear young component, whereas the host of GRB 230307A shows clear spiral arms and star-forming regions. Another major difference is that the metallicity of the old population in this galaxy is 10 times lower than that of NGC 4993 (Z=0.001 compared to Z=0.010), which will influence the stellar evolution of potential progenitors. Finally, NGC 4993 had a large stellar (and presumably dark halo) mass M_* ≈ 10^11 M_⊙ <cit.>, a factor of >50 larger than the host of GRB 230307A. The location of GRB 230307A relative to its host galaxy is consistent with these properties. In particular, the low mass of the galaxy suggests a modest gravitational potential such that binaries with velocities of a few hundred km s^-1 can readily escape. The large offset also suggests that the binary formed from the older stellar population. §.§ Properties of the brightest GRBs GRB 230307A is the second brightest[We use brightest here as an indicator of the total fluence in the prompt emission] burst observed in over 50 years of observations <cit.>. If it arises from a compact object merger, this implies that such bright bursts can be created in mergers. Indeed, such a picture appears likely based on GRB 211211A <cit.>, the sixth brightest burst.
Of the ten brightest bursts observed by the Fermi/GBM, and subsequently localised at the arcsecond level, three have apparently secure associations with supernovae (GRB 130427A, GRB 171010A, GRB 190114C), and two (GRB 211211A, GRB 230307A) are associated with kilonovae, and hence mergers. Of the remaining five, one lies at z=1.4 and has energetics which suggest a collapsar; three have no redshift information, although one of these (GRB 160821A) lies in proximity to several galaxies at z=0.19; and one is GRB 221009A, whose association with a supernova remains unclear <cit.>, although recent observations suggest a collapsar with an associated supernova is most likely <cit.>. Within this very bright population, collapsars are likely as common as mergers. § EVENT RATES One key question of interest is the likely event rate for such merger GRBs. A simple estimate of the event rate associated with a single event is given by R = 1/(Ω t V_max). Here Ω reflects the fraction of the sky covered by the detection mission, t the effective mission duration (accounting for the duty cycle) and V_max the maximum co-moving volume within which a burst with the same properties could be identified. For GRB 230307A, Ω=0.65 (average for the Fermi/GBM) and t ≈ 15 years. V_max is more complicated: as shown in Figure <ref>, the fluence distribution for GBM bursts extends to ∼ 10^-8 erg cm^-2 and is likely complete to around 10^-6 erg cm^-2. Given the extreme brightness of GRB 230307A, it would likely have been recovered to a distance ∼ 50 times greater than its observed distance. If at z=0.065, the inferred z_max = 2.03, corresponding to V_max = 630 Gpc^3. In this case, the inferred rate of such bursts becomes extremely small, R ≈ 1.6 × 10^-4 Gpc^-3 yr^-1. However, in practice, such bursts would not readily be identified at such redshifts since neither supernova nor kilonova signatures could be observed. A more realistic estimate would correspond to the distance at which associated supernovae can be either identified or ruled out with moderate confidence. In this case z_max = 0.5 (also adopted by <cit.>), V_max = 29 Gpc^3, and R ≈ 3.5 × 10^-3 Gpc^-3 yr^-1. These rate estimates also assume that GRB 230307A is the only merger-GRB to have occurred within the 15-year lifetime of the Fermi/GBM. This is almost certainly not the case. Indeed, GRB 211211A was also identified by Fermi/GBM and has rather similar estimates of the intrinsic rate <cit.>. However, even the interpretation of ∼ 2 events is problematic. In particular, the V/V_max for GRB 230307A is 0.004, and for GRB 211211A it is 0.005 (again assuming z_max = 0.5). For a sample average of uniformly distributed sources of comparable energy or luminosity, we expect V/V_max ∼ 0.5. That the initial identification of such a population should arise from bursts with such extreme V/V_max values is surprising, but may reflect that these bursts are the brightest, which likely encouraged a detailed follow-up. However, it is improbable that they represent the only such bursts observed, and we should expect a much larger population. To better quantify this, we extend our analysis to the Swift bursts and utilise the fluence of GRB 230307A converted to a 15–150 keV equivalent fluence using the observed spectral parameters. At z=0.065, E_iso∼ 7 × 10^51 erg, and for GRB 211211A E_iso = 2 × 10^51 erg. As expected, low-energy events dominate the low-redshift GRB population. However, at z<0.5, there are 12 (out of 42) bursts with E_iso≳ 10^51 erg.
This includes some further supernova-less GRBs, in particular GRB 060614 (E_iso = 9 × 10^50 erg), GRB 191019A (E_iso = 2.0 × 10^51 erg), and some bursts for which supernova searches have not been reported (e.g. GRB 150727A, GRB 061021, and the `ultra-long' GRB 130925A). This sets an upper limit on the number of bursts at low redshift that may be associated with mergers. In practice, selection effects would support a scenario where mergers generate a larger fraction of these bursts. In particular, the afterglows of GRB 230307A and GRB 211211A appear to be faint, despite the bright prompt emission. Such afterglows are difficult to find and may evade detection. In these cases, redshifts may only be obtained from host galaxies. The associations may not be obvious if the bursts are offset from host galaxies at moderate redshifts. Such follow-up may occur late after the burst, or optical afterglow non-detections may lead to a lack of optical/IR follow-up because of uncertainty regarding the optical brightness of the event or suggestions it may be optically dark because of host galaxy extinction. Finally, given the afterglow brightness issues, it is possible that the small fraction of bright GRBs without redshift measurements may arise from a similar channel. These observations would imply that between 30 and 70% of bursts at z<0.5 with E_iso≳ 10^51 erg could arise from mergers, although the true fraction is likely lower. A modest number of events at higher redshift is consistent with the observations, and would alleviate concerns regarding V/V_max for GRB 211211A and GRB 230307A. This fraction is surprisingly high given the strong evidence that long GRBs arise from broad-lined type Ic supernovae and short GRBs from compact object mergers. However, the dominant contributors to the long-GRB supernova connection occur at low energy, and belong to a population of low luminosity GRBs (LLGRBs) <cit.>. In a significant number of these, we may observe an energy source in the prompt emission separate from the highly relativistic jet seen in on-axis, energetic bursts. For example, the long-lived, soft nature of some bursts suggests a contribution from shock breakout or cocoon emission. If, for this reason, the luminosity function of collapsar GRBs is steeper at low luminosity than that of merger-GRBs, it is possible that at low luminosity the long GRB population is dominated by collapsars, while at high luminosity the contribution of mergers is significant. Such an interpretation is not without problems, given the star-forming nature of long-GRB hosts and their typically small offsets from their host galaxies. However, it is a logical investigation for future work. § MODELLING §.§ Light curve modelling In order to shed light onto the properties of the jet and, even more importantly, to separate the contribution of the kilonova from that of the jet afterglow in the UVOIR bands, we modelled the multi-wavelength light curves from radio to X-rays as a superposition of synchrotron emission from the forward shock driven by the jet into the interstellar medium (ISM), following <cit.>, and blackbody emission from the photosphere of a kilonova, using the simple single-component model of <cit.>.
The forward shock synchrotron emission model has eight parameters, namely the isotropic-equivalent kinetic energy in the jet E_K, its initial bulk Lorentz factor Γ_0, its half-opening angle θ_j, the ISM number density n, the fraction ξ_N of ISM electrons that undergo diffusive shock acceleration in the forward shock, the fraction ϵ_e of the shock downstream internal energy that is shared by such electrons, the slope p of the power law dN_e/dγ∝γ^-p that describes the Lorentz factor (as measured in the shock downstream comoving frame) distribution of the accelerated electrons as they leave the acceleration region, and the fraction ϵ_B of the shock downstream internal energy that is shared by a small-scale, turbulence-driven, random magnetic field. The shock hydrodynamics is computed from energy conservation and accounts for the lateral expansion of the shock <cit.>. The effective electron energy distribution is computed accounting for the cooling induced by synchrotron and synchrotron-self-Compton emission, including an approximate treatment of the Klein-Nishina suppression of the Thomson cross section <cit.>. In computing flux densities, the synchrotron surface brightness of the shock is integrated over equal-arrival-time surfaces to account for the effects of relativistic aberration and latitude-dependent retarded times on the spectral shape <cit.>. The kilonova model <cit.> assumes spherical ejecta expanding homologously, v=r/t, and featuring a power law density profile ρ(r,t) ∝ t^-3v^-δ between a minimum and a maximum velocity, v_ej≤ v ≤ v_ej,max. The density normalization is set by the total ejecta mass M_ej. In general, the model allows for the ejecta opacity (assumed grey) κ to be piecewise-constant within the profile, but here we assume a uniform opacity across the ejecta for simplicity. The model divides the ejecta into 100 small shells and computes the heating rate and thermalization efficiency within each. This allows for the derivation of the internal energy evolution in each shell and eventually the computation of the photospheric luminosity L_KN in the diffusion approximation. The fixed ejecta opacity also allows for the computation of the optical depth and hence for the identification of a photospheric radius, which then sets the effective temperature T_KN by the Stefan-Boltzmann law. In our modelling of GRB 230307A, we computed the flux density by simply assuming pure blackbody emission with the given luminosity and effective temperature at each given time. We fixed v_ej,max=0.6 c and left M_ej, v_ej, κ and δ as free parameters. To carry out the model fitting, we defined an asymmetric Gaussian log-likelihood term for the i-th datapoint, which corresponds to an observation at time t_i and in a band whose central frequency is ν_i, as lnℒ_i = -(1/2) (F_ν,m(ν_i,t_i)-F_ν,obs,i)^2/(σ_i^2 + f_sys^2 F_ν,m^2) - ln[√(2π(σ_l,i^2+f_sys^2 F_ν,m^2))+√(2π(σ_h,i^2+f_sys^2 F_ν,m^2))], where F_ν,m(ν,t) is the flux density predicted by the model and F_ν,obs,i is the measured flux density. The one-sigma error reflects the potentially asymmetric error bars, with σ_i = σ_l,i if F_ν,m(ν_i,t_i) ≤ F_ν,obs,i and σ_i = σ_h,i if F_ν,m(ν_i,t_i) > F_ν,obs,i. We also introduced a fractional systematic error contribution f_sys, which we take as an additional nuisance parameter, to account for potential inter-calibration uncertainties between different instruments and for the fact that error bars typically only account for statistical uncertainties.
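For concreteness, a minimal sketch of this per-datapoint likelihood term is given below; the function name is ours rather than that of the fitting code, and the example numbers are illustrative only.

import numpy as np

def lnlike_point(F_model, F_obs, sig_lo, sig_hi, f_sys):
    """Asymmetric-Gaussian log-likelihood for a single flux-density measurement,
    with a fractional systematic error f_sys added in quadrature."""
    var_lo = sig_lo**2 + (f_sys * F_model)**2
    var_hi = sig_hi**2 + (f_sys * F_model)**2
    var = var_lo if F_model <= F_obs else var_hi
    return (-0.5 * (F_model - F_obs)**2 / var
            - np.log(np.sqrt(2.0 * np.pi * var_lo) + np.sqrt(2.0 * np.pi * var_hi)))

# e.g. a model flux of 95 uJy against a measured 92 +/- 22 uJy point with 10% systematics
print(lnlike_point(95.0, 92.0, 22.0, 22.0, 0.10))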
For X-ray detections, we fit the integrated flux and the spectral index independently, with an analogous term for each (but with no systematic error contribution for the spectral index). Upper limits were treated simply by setting F_ν,obs,i equal to the reported upper limit, σ_h,i=F_ν,obs,i/10 and σ_l,i=10 F_ν,obs,i. The final log-likelihood was taken as the sum of these terms. In order to derive a posterior probability density on our 13-dimensional parameter space, we assumed the priors reported in Table <ref> and we sampled the posterior with a Markov Chain Monte Carlo approach using the python module <cit.>, which implements the Goodman and Weare <cit.> affine-invariant ensemble sampler. The medians and 90% credible intervals of the marginalised posteriors on each parameter obtained in this way are reported in Table <ref>. The posterior is visualised by means of corner plots in Figures <ref> (jet afterglow parameters), <ref> (kilonova parameters) and <ref> (all parameters). The left-hand panel in Figure <ref> shows the observed light curve data (markers) along with the best-fitting model (solid lines). Dashed lines single out the contribution of the kilonova. The right-hand panel in the same figure shows some selected spectra, showing in particular the good agreement of the first JWST epoch with the blackbody plus power law spectrum implied by our model at those times. While the best-fit model demonstrates a relatively good agreement with most of the measurements, some discrepancies stand out, most prominently with the 61.5 d JWST data and with the 28.5 d Chandra detection. The former is not too surprising, as the assumptions in the kilonova model (in particular that of blackbody photospheric emission, which is particularly rough in such a nebular phase, and that of constant and uniform grey opacity, due to recombination of at least some species) are expected to break down at such late epochs. The latter is linked to the steepening (`jet break') apparent at around 2 days in the model X-ray light curve, which in turn is mainly driven by the need not to exceed the optical and near-infrared fluxes implied by observations at around one week and beyond. In the absence of these constraints, the fit would have accommodated a larger jet half-opening angle, postponing the jet break and hence allowing for a better match with the best-fit Chandra flux. On the other hand, as noted in Methods, this flux is rather uncertain, with the low-end uncertainty possibly extending to fluxes lower by one order of magnitude or more, depending on the adopted prior in the spectral analysis (see Methods). Still, such a discrepancy might indicate the presence of additional X-ray emission that is not accounted for by the model, as has been seen previously in e.g. <cit.>. §.§ Spectral analysis modelling The JWST/NIRSpec spectrum taken on 5 April 2023 exhibits a red continuum component with emission line features. The most distinctive feature is a broad emission line at 2.15 microns (in the rest frame, assuming z=0.065). This may be a blend (visibly split in Figure <ref>) and a simultaneous fit of two Gaussians provides measured centroids of 20285 ± 10 Å and 22062 ± 10 Å. The line widths are both consistent at v_FWHM = 19100 km s^-1 (0.064c). This 2.1 micron feature is quite similar in strength and width to the 2.07 micron feature in AT2017gfo at 10.5 days after merger <cit.>. The AT2017gfo line also appears to be better fit as a blend of two features rather than a single transition, with line velocities of v_FWHM = 38900 km s^-1.
While the average line centre is reasonably consistent between the two, the components inferred for AT2017gfo and the kilonova of GRB 230307A are each quite different. Reference <cit.> finds them at 20590Å and 21350Å and there is no consistent velocity shift that could be applied to match AT2017gfo with our JWST spectrum. Nevertheless, the similarity in their average line centroids, velocities and equivalent widths is striking, as demonstrated in Figure <ref>. With a Doppler broadening parameter of ≲ 0.1c, it is unlikely that the continuum component is formed as a result of the superposition of emission lines. Because kilonova radiation transfer at such late times is not yet fully understood, here we attempt to model the spectrum with the assumption that the emission consists of blackbody radiation from the photosphere and forbidden emission lines of heavy elements formed outside the photosphere. If the continuum is described with blackbody radiation, the temperature and photospheric velocity are ≈ 670 K and ≈ 0.08c, respectively. The continuum luminosity is estimated as ∼ 2× 10^39 erg/s in the NIRSpec band and ∼ 5× 10^39 erg/s if the blackbody emission extends to much longer wavelengths. Assuming this emission is entirely powered by radioactivity of r-process nuclei, these correspond to an ejecta mass of ∼ 0.03–0.07M_⊙ <cit.>. With the ejecta mass and velocity, the opacity is required to be ≳ 5 cm^2/g in order to keep the ejecta optically thick at 30 day. It is worth noting that such a high opacity in the mid-IR indicates that the inner part of the ejecta is lanthanide rich <cit.>. Forbidden emission lines in the infrared are expected to arise from fine structure transitions of low-lying energy levels of heavy elements. Most abundant ions are expected to produce the strongest lines. We attribute the strongest observed line at 2.15 microns to tellurium (Te) III from an M1 line list of heavy elements presented in <cit.>, where the line wavelengths are experimentally calibrated according to the NIST database <cit.>. Te belongs to the second r-process peak. With the M1 line list, we model kilonova emission line spectra under the assumption that photons from forbidden lines produced outside the photosphere freely escape from the ejecta. The collision strengths of Te III are taken from an R-matrix calculation <cit.> and those of other ions are obtained by using an atomic structure code HULLAC<cit.>. The abundance pattern is chosen to be the solar r-process but we separate “light” and “heavy” elements at an atomic mass of 85 and introduce a parameter, the abundance ratio of the two (see figure <ref>). The ionization fractions are fixed to be (Y^+1,Y^+2,Y^+3)=(0.2,0.5,0.3) motivated by the Te ionization evolution in kilonova ejecta <cit.>. The line shape is approximated by a Gaussian with a line broadening velocity of 0.08c, which is the same as the photospheric velocity. The mass in the line forming region is estimated by assuming that the observed line luminosity, 5× 10^38 erg s^-1, is locally generated by radioactivity of r-process nuclei, corresponding to ∼ 0.02M_⊙. Given the abundance pattern and ionization state, the mass of Te III in the line forming region is ≈ 8· 10^-4M_⊙. The electron temperature of the line forming region is then determined such that the total line luminosity agrees with the observed one. The estimated electron temperature is ∼ 3000 K, which is slightly higher than that derived from the pure neodymium nebular modeling <cit.>. 
This is because the cooling by tellurium ions is more efficient than neodymium. We find that [Te III] 2.10 μ m line is indeed the most outstanding emission line around 2 microns. Several weaker lines also contribute to the flux around 3–4 microns. There is another potential line feature around 4.5 microns in the NIRSpec spectrum. The location of this feature is consistent with [Se III] 4.55 μ m and [W III] 4.43 μ m as pointed out by <cit.> for the kilonova AT2017gfo. From the spectral modeling, we obtain the total ejecta mass of ∼ 0.05–0.1M_⊙, which agrees with the one obtained from the light curve modeling ∼ 0.1M_⊙. Here we show a brief estimate of the Te III mass from the observed line at 2.15 microns (^3P_0–^3P_1). The collisional excitation rate per Te III ion from the ground level (^3P_0) to the first exited level (^3P_1) is given by k_01 = 8.63· 10^-6n_e/√(T_e)Ω_01/g_0 e^-E_01/kT_e s^-1, where n_e and T_e are the thermal electron density and temperature, Ω_01≈ 5.8 is the collision strength <cit.>, E_01≈ 0.6 eV is the excitation energy, g_0 is the statistical weight of the ground level. Assuming that the ejecta mass in the line forming region is 0.02M_⊙ expanding with 0.08c and the ions are typically doubly ionised, we estimate n_e∼ 3· 10^5 cm^-3, and thus, the line emissivity per Te III ion is ϵ_10≈ 2.5· 10^-14(n_e/3· 10^5 cm^-3) erg/s, where T_e=3000 K is used. Combining the line emissivity with the observed line luminosity in 2.25± 0.23 μ m, L_ line≈ 3· 10^38 erg/s, we obtain M( Te III) ≈ 10^-3M_⊙(n_e/3· 10^5 cm^-3)^-1(L_ line/3· 10^38 erg/s). The mass estimated from the line is somewhat dependent on T_e and n_e. However, we emphasise that, with T_e≈3000 K and n_e≈ 3· 10^5 cm^-3, the line luminosity is consistent with the radioactive power in the line forming region. It is also interesting to note that the Te III mass of 10^-3M_⊙ is in good agreement with the one obtained based on the same line seen in AT2017gfo at 10.5 day <cit.>. While we conclude that the observed line feature at 2.1 microns is most likely attributed to Te III, it is important to note that there are caveats associated with our modeling. One obvious caveat is that the model does not include E1 lines. Lanthanides and actinides have E1 transitions between low-lying levels in the mid-IR <cit.>. Due to their lower abundances, these lines are expected to be weaker compared to the Te line if collisional excitation dominates the excitation processes. However, as we make an implicit assumption that their E1 lines contribute to the opacity in the mid-IR, they may produce P-Cygni like features, see, e.g., <cit.>. For example, Ce III has a strong line at 2.07 μ m with log gf=-1.67 <cit.>. We estimate that its line optical depth is ≲ 0.1 at 700 K with ∼ 0.05M_⊙ and ∼ 0.1c even if Ce is purely in Ce III. However, more careful analyses including non-LTE effects are needed to quantify it. Another caveat is that the opacity of lanthanides is expected to have some wavelength dependence. Including this effect may also affect the spectral modelings. § ALTERNATIVE PROGENITOR POSSIBILITIES Our interpretation of GRB 230307A provides a self-consistent model for the source in which the temporal and spectral evolution, as well as the source location, can be readily explained. The kilonova has marked similarities with AT2017gfo providing a robust indication of its origin, and we do not need to postulate new and unseen phenomena to explain it. However, it is also relevant to consider alternative possibilities. 
In particular, given the location of the galaxy at z=3.87, it is important to consider if the burst could originate at that redshift. §.§ GRB 230307A as a high redshift, highly energetic GRB The nearby galaxy H1 (F277W(AB) = 27.5 ± 0.1, r_proj = 0.3 arcsec) with a spectroscopic redshift of z=3.87 has a relatively low probability of occurring by chance (∼ 5-10%, see section <ref>). This galaxy has a comparable P_chance to G1 (the z=0.065 galaxy). The host-normalised offset for H1 is ∼ 2.5, which is large but not unprecedented for long GRBs <cit.>. However, assuming the late time light at the GRB position is all from the transient, it does not lie on the stellar field of this galaxy, which is unusual: for example, in the samples of <cit.>, there is only one (of >100) sub-arcsecond localised GRB not on the stellar field of its host. At z=3.87, the inferred isotropic energy release and luminosity of GRB 230307A would be E_iso = 1.2 × 10^56 erg and L_iso = 1.7 × 10^56 erg s^-1 (using a 64 ms peak flux). This is approximately an order of magnitude more energetic and two orders of magnitude more luminous than any other previously identified GRB <cit.>. If at z=3.87, we can have some confidence that GRB 230307A would be the most energetic burst ever detected by Swift or Fermi, including those without redshift or even afterglow identifications. In Table <ref>, we tabulate the most fluent GRBs observed by Fermi. Most of these have either redshifts or optical detections, which constrain z<6 via the detection of the source in the optical band. This leads to a set of measured or maximum E_iso values. For events without any redshift information, we can place a conservative upper redshift limit of z = 16. No GRBs detected by Fermi without a redshift can have energy over 10^56 erg unless they lie beyond z ∼ 20. Hence, GRB 230307A is sufficiently rare, if at z=3.87, that events like it occur less frequently than once per decade across the Universe (i.e. no more than one in the combined lifetimes of Swift and Fermi). The energetics of the burst at this redshift would lie far beyond those of the general GRB population and beyond those suggested as the upper limit for GRB energetics <cit.>. The only population of core-collapse GRBs whose energetics have been suggested to approach this value are those from first-generation population III stars <cit.>. It is not expected that such stars should exist at z ∼ 4. However, while a pop-III origin may alleviate energy concerns, the properties of the GRB and its optical/IR counterpart do not resemble the predictions for pop-III stars. In particular, pop III GRBs are suggested to have particularly long durations given the mass and radii of their progenitors <cit.>, and so require extremely long durations of engine activity to enable jet breakout. However, the ∼ 35 s duration of GRB 230307A and its rapid variability do not readily fit this expectation. If one ascribes the GRB to a stellar collapse event, considering the afterglow's properties and associated supernovae is also relevant. Firstly, at 28.5 days, the JWST spectral observations are inconsistent with synchrotron emission, suggesting that the counterpart must be dominated by another source in the mid-IR and in the earlier K-band points. This excess, which in our preferred model is explained by a kilonova, would have to be due to the supernova or shock breakout if at z=3.87.
The K-band (rest frame B-band) light would reach a peak of M_B(AB) < -23.5 on a timeframe of <2-days (rest-frame) before decaying at a rate of > 1 mag day^-1 for the next four days, or as a power-law decay, a rate of approximately t^-4. This appears too rapid for radioactively powered transients, at least based on the sample to date (note that at z=3.87, the timescales are a factor of ∼ 5 faster than in the z=0.065 scenario due to cosmological time dilation). The most likely option for such emission would be shock-breakout, which may begin blue but rapidly cool. There are simulations for the shock breakout associated with pop-III supernovae, which show an early peak <cit.>. However, this emission peaks in the UV to soft X-ray regime and at luminosities below that seen in GRB 230307A. Indeed, taking 28.5 days as a baseline; for a plausible maximum Pop III radius (e.g. 2000 R_⊙, <cit.>) and a luminosity of L_bol∼ 10^43 erg s^-1 the inferred temperature is T ∼ 300,000 K. This is incompatible with the spectral shape seen in the counterpart to GRB 230307A which peaks at >4.5 microns (T< 3000K at z=3.87). Dust or metal line blanketing could alleviate this discrepancy to some degree, but it would be extreme to explain the observations. It would also come at the cost of an even higher intrinsic luminosity. Conversely, the radius at which the luminosity and temperature would be consistent is extremely large (∼ 0.3 pc) and indeed would require super-luminal expansion to reach from a single explosion within the time since burst. These constraints become even more extreme for the K-band observations at 11.5 days, where the luminosity is >50 times higher. However, we lack detailed information regarding the spectral shape at this time. We can conclude that a thermal transient launched at the time of GRB 230307A cannot explain the observed source at z=3.87. We should consider if GRB 230307A could be related to an explosion which bears little to no similarity to long-GRB progenitors. Given the inferred energetics, this is not an unreasonable proposition. However, the emission is too bright and too fast for, for example, the fast blue optical transients (e.g. <cit.>), the fastest of which have half-times of ∼ 4 days <cit.>. A further alternative may be a relativistic tidal disruption event. This would face significant challenges with the rapid variability timescales seen in the prompt emission and the non-nuclearity of the source within the galaxy at z=3.87. Putting aside these concerns, the peak optical/IR luminosity is comparable to AT2022cmc <cit.>, but the evolution is too rapid and the dynamic range too large. Finally, it is possible that the red excess seen at later times is not directly related to the progenitor or the transient but is a result of the re-processing of the GRB radiation by material within the host galaxy. In particular, for GRB 211211A <cit.> suggest that an alternative explanation for the emission could be the heating of dust. However, this model also encounters significant issues at z = 3.87. In particular, the observed K-band excess is a rest-frame B-band excess, much bluer than expected for dust heating. If this represented the peak of the thermal spectrum, it would be above the sublimation temperature of the dust. Alternatively, if it were the blue tail of a much cooler black body, the luminosity would be extremely high. A final challenge to the high-z scenario is that the afterglow is detected in the UVOT-white and ground-based g-bands. 
These observations all have substantial sensitivity blue-ward of Lyα at z=3.87. A typical column from the intergalactic medium should attenuate ∼ 50% of the light in these bands, inconsistent with observations. Indeed, for a typical β = 1 spectrum, we expect to observe white-i = 2.8 and g-i = 1.9, approximately 3 and 4 σ away from the observed colours. There is significant variation in the absorption strength as a function of the line of sight, so a low (or near zero) absorption column would alleviate this tension. The sample of <cit.> implies that at z ∼ 4, perhaps 10% of galaxies have such low absorption sight lines. Hence, while the proximate galaxy could indicate a high redshift, there are few other indications in the transient properties that would support this interpretation. In particular, neither standard thermal nor non-thermal emission can explain the observed counterpart properties. If the burst does arise from z=3.87, it requires a new kind of explosion, unlike any seen until now. In practice, such explosions could be extremely rare: the volumetric rate of GRBs with E_iso > 10^56 erg is minimal, but postulating them is unnecessary when a robust, physically motivated explanation can be obtained for a lower redshift solution. §.§ Other cosmological scenarios We should also consider a further option: that GRB 230307A does not reside at either z=0.065 or z=3.87 but is a chance superposition with both galaxies. In this scenario, the actual host is undetected or is one of the other galaxies within the field. The absence of direct redshift measurements makes placing constraints on this scenario challenging. However, we can use the non-detection of the late-time JWST magnitudes to limit the brightness of a supernova component at any redshift. To quantify the exclusion of "normal" long GRBs at intermediate redshift we utilize model light curves for SN 1998bw from MOSFiT <cit.> calculated over a range of redshifts spanning 0.05 < z < 4.0 (we take z = 4 as an upper limit for the redshift of the GRB based on the observed g-band detections together with the detection of continuum emission to 5300 Å <cit.> and a similar faint trace seen to ∼5100 Å in our X-shooter spectrum). At each redshift, we compare our observed JWST photometry in each band to the model predictions at that time and report the most constraining limit (e.g. the lowest ratio of F_obs / F_98bw). This is shown in Figure <ref>. At all plausible redshifts, any supernova must be at least a factor ∼ 3 fainter than SN 1998bw at similar times. For any redshift where the burst energetics fall within the range seen in the bulk GRB population (z<1.2), any supernova must be a factor > 100 fainter than SN 1998bw. Hence, there is no route in which GRB 230307A can arise from a classical long GRB (E_iso < 10^55 erg) with an associated broad-lined Ic supernova. The strength of this rejection is predominantly based on the faintness of the source in the bluer bands, whereas at the first epoch, the redder bands are substantially brighter than SN 1998bw at z=3.87. If the burst lies at an intermediate redshift, dust models may become more appealing. In particular, for a low to moderate redshift, the luminosity and timescales may be a suitable match (e.g. for GRB 211211A, <cit.> find plausible explanations at z ∼ 0.5). However, in this case, the lack of a supernova to extremely deep limits would be surprising, as would the non-detection of the host galaxy. 
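To make the band-by-band exclusion explicit, a minimal sketch of the comparison described above is given below. The function sn1998bw_model_flux is only a stand-in for whatever routine evaluates the MOSFiT SN 1998bw template at a trial redshift; the names and grid are illustrative and do not reproduce the actual pipeline behind Figure <ref>.

import numpy as np

def most_constraining_ratio(z_grid, photometry, sn1998bw_model_flux):
    """For each trial redshift, return the smallest F_obs / F_98bw over all
    bands and epochs, i.e. the strongest limit on an SN 1998bw-like component.

    photometry                  : list of (band, t_obs_days, f_obs) tuples
    sn1998bw_model_flux(b, t, z): template flux of SN 1998bw placed at redshift z
                                  (placeholder for the MOSFiT light-curve output)
    """
    limits = []
    for z in z_grid:
        ratios = [f_obs / sn1998bw_model_flux(band, t_obs, z)
                  for band, t_obs, f_obs in photometry]
        limits.append(min(ratios))   # most constraining band/epoch at this z
    return np.array(limits)

# Trial redshifts spanning the range quoted in the text
z_grid = np.linspace(0.05, 4.0, 80)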
§.§ Galactic objects In the absence of a robust absorption redshift, it is also necessary to consider if GRB 230307A could arise from a Galactic system. The very faint magnitudes and extreme red colours observed at late times can effectively rule out X-ray binary outbursts. For example, with an M-dwarf companion (absolute magnitude 9), the distance to the source would be ∼ 100 kpc, and larger for any more massive star, while the late-time colours of the source are not stellar. In practice, given that the source is transient, and we may also expect some contribution from an accretion disc, the overall properties cannot be reconciled with an accreting binary framework. The inferred energetics for Galactic systems are E_iso∼ 5 × 10^43 (d/10 kpc)^2 erg. This energetic output is within the bounds of giant outbursts from magnetars. For example, the giant flare of SGR 1806-20 had an inferred isotropic energy of E_iso∼ 2 × 10^46 erg (see <cit.>, although subsequent downward revisions in its distance lower this somewhat <cit.>). However, GRB 230307A does not appear to meet the requirements for a magnetar origin. In particular, it is at high Galactic latitude (-36 degrees) and far from any plausible star formation that could give rise to a young neutron star. Furthermore, the emission in all magnetar outbursts is dominated by a very short pulse followed by decaying emission in which the pulse period of the neutron star is visible. This is not the case for GRB 230307A. GRBs have previously been suggested to arise from the tidal disruption and accretion of rocky material onto a neutron star <cit.>, and such events are seen in the case of white dwarfs <cit.>. However, for accretion onto a neutron star, we would usually expect to observe a relatively soft outburst (e.g. in the model of <cit.> the temperature is ∼ 10 keV). The spectrum we observe for GRB 230307A consists of evolving synchrotron emission which does not contain a thermal component and is inconsistent with direct observation of heated, accreting material. Indeed, even for near direct accretion, it is not clear how such a spectrum would be formed in an accreting neutron star scenario. Moreover, in this case, the evolution to very low temperatures on a timescale of ∼ 60 days would also not be natural. Hence, we conclude that no known Galactic systems could explain the observed properties of GRB 230307A. §.§ GRB 230307A as a white-dwarf – neutron star merger A final alternative is that GRB 230307A is related to the merger of a white dwarf and a neutron star. Although this is still a "compact object merger", such mergers are very different from those of neutron stars with another neutron star or a black hole. In particular, simulations show that no r-process material is produced <cit.>, and so we should not expect the very red emission. Although there are suggestions that GRB 211211A could have been produced by a WD-NS merger <cit.>, it is unclear if these could readily explain the detailed spectrophotometric evolution of GRB 230307A. White dwarf neutron star mergers are appealing because the long duration of the gamma-ray emission could suggest a merger event involving a less compact stellar remnant. The wider separation of the binary at the disruption of the white dwarf produces disc accretion times from 100 to 1000 s, matching the long duration of these bursts <cit.>. However, we expect the accretion rate in the mergers to be low, producing less powerful and hence less luminous GRBs. 
Current WD/NS merger simulations predict a range of light curves that span the emission from GRB 230307A. A Ca feature does exist that is close to the observed line feature, but the models are too blue to explain the shape of the spectra <cit.>. This is because WD/NS mergers do not produce elements much heavier than the iron peak elements. As we found when matching kilonova models, these elements do not have strong lines beyond ∼ 20,000 Å. The subsequent emission above these wavelengths is very weak in these models. In addition, WD/NS mergers are expected to have fairly weak kicks (to ensure that the binary remains bound), and so these mergers are expected to have much lower offsets than neutron star mergers. However, the mass of the best-candidate host galaxy is sufficiently low that the observed offset for this burst can be attained <cit.>. § AUTHOR CONTRIBUTION STATEMENTS AJL led the project, including the location of the afterglow and kilonova and the JWST observations. BPG first identified the source as a likely compact object merger, was co-PI of the Chandra observations, and contributed to analysis and writing. OS contributed to afterglow and kilonova modelling and led the writing of these sections. MB was involved in kilonova modelling. EB contributed to interpretation, placing the burst in context and high energy properties. KH was involved in kilonova spectral modelling and identified the 2.15 micron feature. LI reduced VLT/MUSE and X-shooter observations and led the host analysis. GPL contributed to afterglow and kilonova modelling. ARE analysed the Chandra observations. BS reduced and analysed VLT observations. NS contributed to afterglow and kilonova modelling. SS was responsible for placing the burst afterglow in context and demonstrating its faintness. NRT contributed to analysis, interpretation and writing. KA was involved in the ULTRACAM observations and interpretation. GA led the ATCA observations. GB reduced the JWST NIRCAM data. LC processed and analysed the MUSE observations. VSD is the ULTRACAM PI. JPUF contributed to the interpretation. WF was the PI on the Chandra observations and contributed to discussion. CF contributed to the theoretical interpretation. NG was involved in host analysis. KEH, GP, AR, SDV, SC, PDA, DH, MDP, CCT, AdUP and DA contributed to ESO observations and discussion. DW contributed to spectral and progenitor modelling and discussion. MJD, PK, SP, JM, SGP, IP and DIS contributed to the ULTRACAM observations. GL investigated potential similarities with other transients. AT, PAE, BS and JAK contributed to the Swift observations. MF extracted and flux-calibrated the TESS light curve. SJS analysed the JWST spectral lines and contributed to interpretation and writing. HFS performed the BPASS-hoki-ppxf fits to the integrated MUSE flux and contributed the associated figure and text. All authors contributed to manuscript preparation through contributions to concept development, discussion and text. § ACKNOWLEDGEMENTS We dedicate this paper to David Alexander Kann, who passed on March 10. The final messages he sent were regarding follow-up of GRB 230307A, and we hope it would satisfy his curiosity to know the final conclusions. This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. 
These observations are associated with program #4434 and 4445. Support for Program numbers 4434 and 4445 was provided through grants from the STScI under NASA contract NAS5- 03127. This paper is partly based on observations collected at the European Southern Observatory under ESO programme 110.24CF (PI Tanvir), and on observations obtained at the international Gemini Observatory (program IDs GS-2023A-DD-106), a program of NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigación y Desarrollo (Chile), Ministerio de Ciencia, Tecnología e Innovación (Argentina), Ministério da Ciência, Tecnologia, Inovações e Comunicações (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). Processed using the Gemini package and (Data Reduction for Astronomy from Gemini Observatory North and South). AJL, DBM and NRT were supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 725246). N. Sarin is supported by a Nordita fellowship. Nordita is supported in part by NordForsk. B. Metzger is supported in part by the NSF (grant AST-2002577). J.H. and D.L. were supported by a VILLUM FONDEN Investigator grant (project number 16599). G.P.L. is supported by a Royal Society Dorothy Hodgkin Fellowship (grant Nos. DHF-R1-221175 and DHF-ERE-221005). G.L. was supported by a research grant (19054) from VILLUM FONDEN. K. H. is supported by JST FOREST Program (JPMJFR2136) and the JSPS Grant-in-Aid for Scientific Research (20H05639, 20H00158, 23H01169, 20K14513). KEH acknowledges support from the Carlsberg Foundation Reintegration Fellowship Grant CF21-0103. S. Schulze acknowledges support from the G.R.E.A.T. research environment, funded by Vetenskapsrådet, the Swedish Research Council, project number 2016-06012. The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant No. 140. JPUF is supported by the Independent Research Fund Denmark (DFF–4090-00079) and thanks the Carlsberg Foundation for support. SJS acknowledges funding from STFC Grant ST/X006506/1 and ST/T000198/1. VSD and ULTRACAM are funded by STFC grant ST/V000853/1 AAB acknowledges funding from the UK Space Agency. MN is supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 948381) and by UK Space Agency Grant No. ST/Y000692/1. H.F.S is supported by the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a Schmidt Futures program. DS acknowledges funding from STFC grants ST/T000406/1, ST/T003103/1, ST/X001121/1. MER acknowledges support from the research programme Athena with project number 184.034.002, which is financed by the Dutch Research Council (NWO). POB acknowledges funding from STFC grant ST/W000857/1. D.K.G acknowledges support from the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav), through project number CE170100004. § DATA AVAILABILITY JWST data are directly available from the MAST archive. Chandra and Swift data are also in the public domain. ESO and Gemini data are stored in their respective archives and will be available to all once the proprietary period expires. 
Data can be obtained from the corresponding author between the date of publication and the end of the proprietary period. This research has made use of Fermi data, which are publicly available and can be obtained through the High Energy Astrophysics Science Archive Research Center (HEASARC) website at <https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermigbrst.html> § CODE AVAILABILITY Much of the analysis for this paper has been undertaken with publicly available codes, and the details required to reproduce the analysis are contained within the manuscript.
http://arxiv.org/abs/2307.02831v2
20230706075351
Joint moments of higher order derivatives of CUE characteristic polynomials II: Structures, recursive relations, and applications
[ "Jonathan P. Keating", "Fei Wei" ]
math-ph
[ "math-ph", "math.MP", "math.NT" ]
http://arxiv.org/abs/2307.00361v1
20230701151800
A Comparative Study of Machine Learning Algorithms for Anomaly Detection in Industrial Environments: Performance and Environmental Impact
[ "Álvaro Huertas-García", "Carlos Martí-González", "Rubén García Maezo", "Alejandro Echeverría Rey" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Kernelization for Finding Lineal Topologies (Depth-First Spanning Trees) with Many or Few Leaves [The research leading to these results has received funding from the Research Council of Norway via the projects PCPC (grant no. 274526) and BWCA (grant no. 314528).] Emmanuel Sam^1 (0000-0001-7756-0901), Benjamin Bergougnoux^2 (0000-0002-6270-3663), Petr A. Golovach^1 (0000-0002-2619-2990), Nello Blaser^1 (0000-0001-9489-1657) =========================================================== In the context of Industry 4.0, the use of artificial intelligence (AI) and machine learning for anomaly detection is being hampered by high computational requirements and associated environmental effects. This study seeks to balance the demands of high-performance machine learning models with environmental sustainability, contributing to the emerging discourse on 'Green AI.' An extensive variety of machine learning algorithms, coupled with various Multilayer Perceptron (MLP) configurations, were meticulously evaluated. Our investigation encapsulated a comprehensive suite of evaluation metrics, comprising Accuracy, Area Under the Curve (AUC), Recall, Precision, F1 Score, Kappa Statistic, Matthews Correlation Coefficient (MCC), and F1 Macro. Simultaneously, the environmental footprint of these models was gauged through considerations of time duration, CO2 equivalent, and energy consumption during the training, cross-validation, and inference phases. Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance. However, superior outcomes were obtained with optimised MLP configurations, albeit with a commensurate increase in resource consumption. The study incorporated a multi-objective optimisation approach, invoking Pareto optimality principles, to highlight the trade-offs between a model's performance and its environmental impact. The insights derived underscore the imperative of striking a balance between model performance, complexity, and environmental implications, thus offering valuable directions for future work in the development of environmentally conscious machine learning models for industrial applications. § INTRODUCTION The ongoing digital transformation has been revolutionizing various sectors of the economy, including the industrial sector. This transformation, often referred to as Industry 4.0, has engendered increasingly dynamic, interconnected, and inherently complex manufacturing environments <cit.>. The key to this transformation is the generation of large volumes of data collected through numerous sensors embedded in industrial processes. Data collected from these sources, when used appropriately, can lead to remarkable improvements in process monitoring, optimization, equipment integrity, and worker safety, while simultaneously reducing operational costs <cit.>. However, the sheer volume and complexity of the data produced lead to significant challenges in identifying unusual patterns or anomalies, which could potentially signal substantial problems or inefficiencies <cit.>. Machine learning, a subset of artificial intelligence (AI), has demonstrated its capability to effectively detect such anomalies, thus offering the potential for automated and intelligent anomaly detection systems <cit.>. 
Yet, the broad implementation of AI and machine learning in real manufacturing environments still faces significant hurdles, limiting its transition beyond experimental pilot stages <cit.>. One of these challenges, and the focus of this study, relates to the environmental impact of AI operations. The computational requirements of machine learning can be substantial, leading to significant energy consumption and consequent environmental implications <cit.>. This has given rise to the field of `Green AI', which emphasizes developing AI solutions that are not only effective but also environmentally friendly. In light of this, it is crucial to consider the environmental impact of the AI systems developed to manage and optimize industrial operations alongside the environmental footprint of the industrial operations themselves. As industries worldwide strive to reduce their carbon footprint, it becomes increasingly important to develop AI-driven strategies that are both efficient and environmentally conscious. To achieve this overarching goal, the following specific objectives are addressed: * To process and extract useful data for the formulation of anomaly detection criteria, validated in collaboration with subject-matter experts. * To use these criteria to generate a labeled dataset that allows the development of supervised machine learning models. * To develop and evaluate machine learning models for anomaly detection and classification. * To explore the trade-offs and synergies between algorithmic performance and environmental impact, contributing to the broader discourse on Green AI and its role in sustainable industrial practices. * To provide insights into how machine learning can be leveraged responsibly and energy-efficiently in the industrial sector. By addressing these objectives, this study aims to bridge the gap between the field of AI and the manufacturing industry, facilitating the transition towards Industry 4.0 supported by AI <cit.>. The findings of this study are expected to assist researchers and manufacturers in understanding the requirements and steps necessary for this transition, as well as the challenges that may arise during this process. § RELATED WORKS The recent increase in the utilization and development of Artificial Intelligence (AI) has come with an increased understanding of the environmental footprint these technologies leave behind <cit.>. Historically, AI research and development has been predominantly focused on enhancing accuracy and performance, often overlooking energy efficiency <cit.>. However, the scenario is now changing, with the realization that energy efficiency is vital not only for environmental sustainability but also for the scalability and practical implementation of AI technologies <cit.>. In fact, the computations required for AI research have been doubling every few months, resulting in a staggering 300,000x increase from 2012 to 2018 <cit.>. This paradigm shift has given rise to the concept of Green AI, an initiative that advocates for the development of AI technologies that are environmentally friendly and sustainable <cit.>. In this context, several strategies have been proposed to make AI research and development more energy-efficient. A pragmatic solution to this issue has been proposed by Schwartz et al. <cit.>: introducing efficiency as an evaluation criterion for research, alongside accuracy and other similar measures. 
By including the financial cost or "price tag" of developing, training, and running models, researchers can establish baselines for investigating increasingly efficient methods. Patterson et al. <cit.> projected that the total carbon emissions from AI model training could decline by 2030 if the entire field embraced best practices, proposing a set of practices that could potentially reduce CO2 emissions by up to a thousand times. Firstly, utilizing the most efficient processors hosted in the most environmentally-friendly datacenters, often available in the cloud. Secondly, developing more efficient models by harnessing sparsity or integrating retrieval into a smaller model. Thirdly, encouraging transparency by disclosing energy consumption and carbon footprint to stimulate competition based on parameters beyond just model quality. Lastly, employing renewable energy sources to power AI training whenever feasible. Another noteworthy approach centers on modifying datasets. A recent exploratory study <cit.> discovered that by altering datasets, energy consumption could be substantially reduced, in some cases by up to 92.16%, often with little or no decline in model accuracy. This strategy suggests that careful and thoughtful data preprocessing and management can play a crucial role in promoting energy efficiency in machine learning. The profiling of energy consumption for inference tasks is another viable strategy for enhancing energy efficiency. An empirical model has been developed to estimate the energy consumption of specific inference tasks on edge computing devices <cit.>. This model can guide the search for efficient neural network architectures, serve as a heuristic in neural network pruning, or assist in evaluating and comparing the energy performance of various deep neural network architectures. Consider the case of Large Language Models (LLMs), such as GPT-3 <cit.>, which have enabled breakthroughs in Natural Language Processing (NLP); however, they also present a significant computational and energy challenge. In response, model quantization has also been explored; it reduces both the memory footprint and the computational resources required and can contribute to lower energy consumption during model inference <cit.>. Lastly, comparative analyses of machine learning models also hold promise for improving energy efficiency. For instance, a study <cit.> that compared the use of feedforward neural networks, random forests, and recurrent neural networks for predicting the energy data of a chiller system found that best practices can enhance model performance. Despite the recent surge in Green AI, there remain substantial gaps in the literature. The current discourse predominantly focuses on Deep Learning models and emerging trends like model quantization, often neglecting the potential of simpler, less resource-intensive tools, like traditional Machine Learning algorithms <cit.>. This oversight is particularly evident in the context of anomaly detection in industrial environments, an area that stands to benefit significantly from energy-efficient algorithms. Furthermore, the environmental impact of various Machine Learning and Deep Learning algorithms in such settings remains underexplored. This study aims to address these gaps by presenting a comparative analysis of different Machine Learning, Deep Learning, and quantized versions of these algorithms. 
We evaluate these algorithms not only on performance metrics but also on their environmental footprint. This approach extends the principles of Green AI into a practical industrial scenario, specifically in the context of anomaly detection in an environmental industry. By doing so, we aim to highlight more sustainable and efficient methodologies that could revolutionize production practices <cit.>. In conclusion, the importance of energy efficiency in machine learning is now being recognized, and several strategies are being explored to reduce its energy consumption and environmental impact. By adopting these strategies, the machine learning field can align more closely with the principles of Green AI, promoting both environmental sustainability and practical scalability. § METHODOLOGY §.§ Data This research aims to detect and predict anomalies within an industrial milling machine using a dataset that has been carefully curated and labelled under the supervision of experts. The dataset contains instances representing 30-second windows of measurement from sensors monitoring temperature, vibrations, and current. In total, 308,772 instances were collected and presented as a three-class imbalanced problem resulting from meticulous criteria establishment and label propagation. The instances are classified according to whether they represent a non-anomalous state (99.86%), a single sensor anomaly (0.01%), or a multiple sensor anomaly (0.13%). As part of the data engineering phase, measurements were captured every second and grouped into 30-second intervals. These grouped data points were then transformed into a new set of features using descriptive statistical methods. These methods include calculations of mean, maximum, minimum, kurtosis, skewness, and the number of current peaks. This process effectively expanded the initial 7-feature dataset into one comprising 38 features, providing a more nuanced view of the machinery's functioning over each time interval. Any missing values in numerical data were managed by mean imputation, preserving valuable information. To limit multicollinearity and its potential negative impact on our machine learning models, we applied a correlation threshold of 0.9. Features with correlations above this threshold were removed, which primarily affected current and vibration sensor readings. Our feature set was reduced from 38 to 25, resulting in a leaner and more efficient dataset for modeling. During the development phase of the machine learning models, we focused on performance and reproducibility, maintaining a random seed value of 1794 throughout all experiments. To guard against overfitting and underfitting, we used the stratified K-fold method with ten folds for cross-validation. Our data was divided so that 70% was utilized for training and validation, with the remaining 30% reserved as a holdout test set for final model evaluation. §.§ Machine Learning Algorithms To address anomaly detection in the context of Green AI, we have sought to include a broad spectrum of machine learning algorithms, each bringing a unique perspective and capability to our investigation. The selected models span various categories, namely linear models, non-linear models, single tree-based methods, ensemble decision trees, boosting decision trees, and deep learning models. This range allows us to glean a more holistic understanding of the data and identify different types of anomalies efficiently <cit.>. 
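As a concrete illustration of the evaluation protocol described in the Data subsection above (70/30 stratified hold-out split, a fixed seed of 1794 and stratified ten-fold cross-validation scored with F1 Macro), a minimal scikit-learn sketch is given below. The synthetic data call is only a stand-in for the real milling-machine feature matrix, and the short model roster is a subset of the algorithms compared in the study.

from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

SEED = 1794

# Placeholder for the engineered 25-feature, three-class dataset
X, y = make_classification(n_samples=5000, n_features=25, n_informative=10,
                           n_classes=3, random_state=SEED)

# 70/30 stratified hold-out split, as described above
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=SEED)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=SEED),
    "Random Forest": RandomForestClassifier(random_state=SEED),
    "Extra Trees": ExtraTreesClassifier(random_state=SEED),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=SEED)
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train, cv=cv, scoring="f1_macro")
    print(f"{name}: F1 Macro = {scores.mean():.4f} +/- {scores.std():.4f}")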
Linear models we've implemented are Logistic Regression, Ridge Classifier, Naive Bayes, Linear Discriminant Analysis (LDA), and a Support Vector Machine (SVM) with a linear kernel. These models offer the advantages of simplicity and interpretability, laying a strong foundation for understanding the basic structure and patterns within our data <cit.>. On the non-linear front, we employ Quadratic Discriminant Analysis (QDA) and K Nearest Neighbors (KNN). These models are equipped to capture intricate, non-linear relationships in our data, thus offering the opportunity to identify less obvious but potentially significant patterns that linear models may not uncover <cit.>. As for tree-based models, we've enlisted a Decision Tree classifier. Decision trees are particularly useful for their straightforward interpretability and ability to handle both categorical and numerical data effectively <cit.>. Further, we leverage the power of ensemble decision trees, namely Random Forest and Extra Trees Classifier. These models bring together multiple decision trees to make more robust and accurate predictions. Their combined strength makes them valuable when dealing with complex and imbalanced datasets, as is often the case in anomaly detection <cit.>. In the boosting decision tree category, we have selected Gradient Boosting, AdaBoost Classifier, XGBoost Classifier, and Light Gradient Boosting Machine. These algorithms iteratively refine their decision-making process to enhance performance, making them well-suited for demanding tasks like anomaly detection <cit.>. Moreover, we have included the deep learning model, Multi-Layer Perceptron (MLP), to our repertoire of algorithms. MLP is a type of artificial neural network that consists of multiple layers of interconnected nodes or neurons <cit.>. Each neuron applies a non-linear activation function to the weighted sum of its inputs and passes the result to the next layer. MLPs can learn complex relationships between input features and output labels by adjusting the weights and biases during training, using the backpropagation algorithm. This makes them a highly versatile tool for anomaly detection <cit.>. We have explored different MLP configurations with varying depth and size of hidden linear layers and have run the models on both GPU and CPU for comparison. In alignment with our Green AI goal, we have studied the application of quantization on the MLP model, contrasting full float 32 precision with integer 8 (int8) precision <cit.>. Below is a more detailed breakdown of each machine learning model employed in our study, emphasizing their unique contributions to anomaly detection and potential significance within Green AI. The specifics of the experimental setup and individual model configurations will be provided in the forthcoming "Experimental Setup" section <ref>. §.§ Computational Resources As part of our commitment to Green AI principles, we strive to provide transparency and clarity regarding the computational resources used in this study. In addition to advocating energy-efficient AI solutions, these principles also emphasize the importance of reporting the resources necessary to reproduce the results of a study. The information is divided into three categories: Software, Hardware, and Optimization Strategies. The following software is used in our computational setup: * Operating System: The experiments were conducted on Linux, specifically version 5.15.107+ with an x86_64 architecture and glibc version 2.31. 
* Python: Version 3.10.11 <cit.>. This was the primary programming language used to conduct the research. * Scikit-learn <cit.>: Version 1.2.2. This Python library was employed to load the algorithms and implement the machine learning models considered in the study. * PyCaret <cit.>: Version 3.0.2. This open-source machine learning library in Python was used to compare and train different models from scikit-learn efficiently. * CodeCarbon[https://mlco2.github.io/codecarbon/]: Version 2.2.1. This Python package was used for tracking emissions and energy consumption during model execution, in line with Green AI principles. * PyTorch <cit.> (version 2.0.1) and PyTorch Lightning [https://zenodo.org/record/3828935] (version 2.0.2), an extension of PyTorch, were employed to promote cleaner and more modular code for the MLP deep learning models, particularly for applying dynamic quantization. The following hardware components are equally important to the operation of our system: * CPU: Our setup was equipped with an Intel(R) Xeon(R) CPU @ 2.20GHz, featuring 2 cores, thereby allowing simultaneous execution of multiple threads or processes. * RAM: We utilized a total of 12GB of RAM, a crucial resource for handling large datasets, training machine learning models, and sustaining multiple tasks or services in memory. * GPU: Our system was outfitted with a Tesla T4 GPU. Given its ability to concurrently manage thousands of threads, the GPU was employed primarily for tasks necessitating high parallelism, such as the training of deep learning models. All machine learning models were executed on the CPU, with the exception of the Multi-Layer Perceptron (MLP). In our study, the MLP was unique in its use of both the CPU and GPU. This utilization of the GPU allowed us to explore its potential for accelerating deep learning tasks, demonstrating the resource efficiencies achievable in this field. Such details support the transparency advocated by Green AI and provide a comprehensive picture of the resources necessary to replicate our study. §.§ Experimental Setup This section provides a detailed overview of the experimental parameters and procedures employed in our study, encompassing both traditional machine learning models and deep learning counterparts. The objective is to maintain rigorous and comprehensive experimentation, upholding the principles of reproducibility and Green AI. Our experimental setup includes a variety of machine learning models, as well as the Multi-Layer Perceptron (MLP), a deep learning model. We adhered to the default parameters for each model as specified by the Scikit-learn library, and we set the seed for the random state to 1794 to ensure consistency and reproducibility. Part of our investigation involved studying different configurations of the MLP, exploring both the width and depth of neural networks under the principles of Green AI <cit.>. The MLP's configuration includes only linear layers with no batch normalization or dropout layers. We used the rectified linear unit (ReLU) as the activation function. The various configurations of MLP we explored are detailed in Tables <ref>, <ref>, <ref> and <ref>. For instance, '200,100' refers to two hidden layers with 200 and 100 neurons respectively. 
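A minimal sketch of how such a plain linear-plus-ReLU MLP might be assembled in PyTorch, wrapped with CodeCarbon emissions tracking and dynamically quantized to int8, is given below. The hidden-layer widths, function names and the omitted training loop are illustrative only and do not reproduce the exact code used in the study.

import torch
import torch.nn as nn
from codecarbon import EmissionsTracker

def build_mlp(n_features=25, n_classes=3, hidden=(200, 100)):
    # Plain MLP: linear layers with ReLU activations, no dropout or batch norm,
    # mirroring configurations such as '200,100' described above.
    layers, width = [], n_features
    for h in hidden:
        layers += [nn.Linear(width, h), nn.ReLU()]
        width = h
    layers.append(nn.Linear(width, n_classes))
    return nn.Sequential(*layers)

model = build_mlp()

tracker = EmissionsTracker()   # records energy (kWh) and CO2-equivalent emissions
tracker.start()
# ... the training loop (cross-entropy loss, backpropagation) would run here ...
emissions_kg = tracker.stop()

# Post-training dynamic quantization of the linear layers to int8,
# used for the fp32 vs int8 comparison on the CPU.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)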
Our assessment matrix for model performance and environmental impact comprised CO2 equivalent (CO2eq) emissions, total energy consumption by CPU, GPU, and RAM (expressed in kWh), the F1 Macro score, and the elapsed time during both the training and inference stages. This approach ensures that we maintain a balance between resource efficiency and model performance, upholding the principles of Green AI. To ensure the robustness and reliability of our results, each experiment was conducted five times for each model. Afterwards, we calculated the average values and standard deviations for CO2eq emissions, total energy consumption, and elapsed time. These measurements provide insights into the environmental impact of each model and its variability. The MLP model with the best performance was also dynamically quantized using PyTorch and PyTorch Lightning to reduce model size and improve computation speed. Dynamic quantization involves determining the scale factor for activations based on the data range observed at runtime. This method ensures that as much signal as possible from each observed dataset is preserved. The model parameters are converted to int8 form in advance. Arithmetic operations in the quantized model are conducted using vectorized int8 instructions. Accumulation is typically done in int16 or int32 to avoid overflow. These higher precision values are then scaled back to int8 if the next layer is quantized, or converted to fp32 for output. By detailing these experimental setups and findings, we aim to further the discussion on sustainable and environmentally friendly AI research. §.§ Multi-Objective Comparison Our study aimed to identify an optimal balance between the performance (measured by the F1 Macro score) and the environmental impact (quantified through computation time, CO2 equivalent, and energy consumption) of different machine learning models. To achieve this, we employed a multi-objective optimization approach. This approach demanded a conversion of the F1 Macro score maximization problem into a minimization problem, achieved by considering its reciprocal. This conversion allowed the application of Pareto optimality principles, which facilitated the computation of the Pareto front, i.e. the set of non-dominated solutions. Each solution on this front represents an optimal trade-off between a model's performance and its environmental impact. We initially undertook a simplified, two-dimensional optimization problem, focusing on the trade-off between the CO2 equivalent and the F1 Macro performance of the models. This approach prioritized the CO2 equivalent as the main environmental metric due to its direct relationship to global warming. Energy consumption, while important, was excluded from this analysis because its environmental impact varies greatly depending on whether the energy source is renewable or non-renewable. Similarly, time consumption was not a key priority in our analysis, despite being an essential efficiency metric. This decision aligns with the principles of Green AI, which emphasize reducing environmental impact, even if it means longer computation times. The two-dimensional optimization offered a clear and simplified view of the relationship between model performance and environmental impact, but we recognized the need for a more comprehensive analysis. Therefore, we conducted a second multi-objective optimization scenario to account for the multi-faceted nature of the problem. 
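The non-dominated filter at the heart of this comparison can be written compactly. The sketch below casts every objective as a minimisation (with the F1 Macro score entering through its reciprocal, as described above) and applies unchanged whether two or four objectives are considered; the numerical values shown are placeholders, not the measured results.

import numpy as np

def pareto_mask(objectives):
    # objectives: one row per model, columns are quantities to minimise,
    # e.g. [CO2-equivalent, 1 / F1 Macro] or [time, energy, CO2eq, 1 / F1 Macro]
    pts = np.asarray(objectives, dtype=float)
    mask = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        for j in range(len(pts)):
            if i != j and np.all(pts[j] <= pts[i]) and np.any(pts[j] < pts[i]):
                mask[i] = False   # model i is dominated by model j
                break
    return mask

# Illustrative two-objective example (values are not the measured results)
example = [[0.0010, 1 / 0.91],   # a fast, accurate tree-like model
           [0.0040, 1 / 0.62],   # a slow, weaker model (dominated)
           [0.0008, 1 / 0.88]]
print(pareto_mask(example))      # -> [ True False  True]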
In this more complex scenario, all variables (computation time, energy consumption, CO2 equivalent, and performance) were considered simultaneously. By exploring the interactions of these variables together, we provided a comprehensive understanding of their combined effects and trade-offs in the context of machine learning model optimization. §.§ Limitations Despite our comprehensive analysis, certain limitations of our study should be acknowledged. The parameter values of some machine learning algorithms were not explored due to computational constraints. While we aimed to cover a wide range of machine learning algorithms suitable for industrial environments, it is possible that some algorithms were inadvertently overlooked. Furthermore, in our experimentation, we opted to use default parameter and hyperparameter values instead of performing an exhaustive search for optimal settings. Although this approach provides a baseline for comparison, it may not capture the full potential of each algorithm. Future studies could consider exploring different parameter configurations to further enhance the performance of the algorithms. Additionally, certain elements such as dropout and batch normalization layers in MLP configurations were omitted from our analysis to prioritize computational efficiency and focus on the impact of depth and width. Data limitations and inherent imbalances also presented challenges, potentially affecting the precision of our results. Although mitigation strategies were employed, they may not have completely nullified the imbalance effects. These limitations outline potential areas for further research, contributing to the ongoing discussion of Green AI. § RESULTS AND DISCUSSION This section presents an exhaustive examination of diverse machine learning algorithms and configurations. Our investigation seeks to provide an intricate understanding of their performance and environmental footprints, achieved through a multifaceted approach. The initial subsection <ref> offers an in-depth comparative analysis of various evaluation metrics implemented in our study. These metrics, namely Accuracy, Area Under the Curve (AUC), Recall, Precision, F1 score, Kappa statistic, Matthews correlation coefficient (MCC), and F1 Macro, provide a comprehensive perspective on the performance of the assessed machine learning models and the importance of selecting metrics appropriate to the anomaly detection task at hand. Subsequently, subsection <ref> analyzes the environmental consequences associated with the operation of these machine learning models. Our evaluation encompasses factors such as time duration, CO2 equivalent, and energy consumption during the training, cross-validation, and inference stages, thereby shedding light on the sustainability aspects of each model. The subsequent subsection <ref> extends this environmental impact analysis to various configurations of Multilayer Perceptrons (MLPs). The discourse elucidates the trade-offs between model complexity and performance while also evaluating their efficiency concerning computational resource requirements. Finally, subsection <ref> compares the results of traditional machine learning models with MLPs, thereby revealing important insights. This is complemented by a multi-objective comparison which unveils the complex interplay between performance, computational resource consumption, and environmental impact. 
A discussion of the computed Pareto optimal solutions concludes the section, highlighting the intricate trade-offs in the quest for high-performing yet environmentally sustainable machine learning models. This comprehensive analysis is envisaged to pave the way for future research into environmentally conscious machine learning model optimization. §.§ Metrics Evaluation Comparison In this subsection, we provide a detailed comparison of the test evaluation metrics for the different machine learning algorithms employed in our study. The objective is to assess and contrast the performance of these algorithms based on a range of metrics, including Accuracy, Area Under the Curve (AUC), Recall, Precision, F1 Score, Kappa Statistic, Matthews Correlation Coefficient (MCC) and F1 Macro. The table presented in this subsection (Table <ref>) showcases the evaluation results for each algorithm, allowing for a comprehensive analysis of their performance across various metrics. By examining these metrics, we can gain insights into the strengths and weaknesses of each algorithm, enabling a thorough understanding of their capabilities in addressing the specific task at hand. Furthermore, this comparison enables us to identify algorithms that excel in specific areas and those that provide a more balanced performance across multiple metrics. Our analysis focuses not only on the overall performance metrics but also on the execution time, as it plays a crucial role in real-world applications where efficiency is a critical consideration. Among the algorithms analyzed, the Random Forest Classifier stands out with exceptional F1 Macro, MCC and Kappa scores, making it proficient at precise predictions and effective class differentiation. It also exhibits high Recall and Precision rates, striking a balance between identifying positive instances and minimizing false positives. However, its longer execution time compared to other algorithms may be a trade-off to consider. Another powerful algorithm is Extreme Gradient Boosting (XGBoost), which excels in F1 Macro, AUC and MCC scores. Leveraging an ensemble of decision trees and gradient boosting techniques, XGBoost effectively handles complex relationships within the data. This makes it a compelling choice for various classification tasks, particularly when there is a need to deal with imbalanced datasets and produce high-quality predictions. In contrast, Logistic Regression demonstrates strong performance in Precision and AUC making it suitable for scenarios where linear decision boundaries suffice. However, its effectiveness in capturing non-linear relationships within the data may be limited, potentially affecting its performance in certain situations. The Extra Trees Classifier also exhibits high Recall and AUC scores. With its robustness against overfitting and ability to handle high-dimensional datasets, it is a viable option for complex classification tasks. Nonetheless, the random feature selection process of the Extra Trees Classifier may compromise interpretability. Additionally, the Ada Boost Classifier delivers excellent AUC, particularly excelling in handling imbalanced datasets and generating accurate predictions. However, it is important to note that this algorithm may require more computational resources compared to some other alternatives. The Decision Tree Classifier, on the other hand, performs well in all scores, owing to its proficiency in capturing complex relationships within the data. 
It is especially effective when dealing with non-linear decision boundaries. Nevertheless, decision trees are susceptible to overfitting, especially when confronted with noisy or high-dimensional datasets. Regularization techniques and ensemble methods, such as Random Forest, can be employed to mitigate this limitation and enhance the overall performance of the algorithm. In addition to the machine learning algorithms discussed, we also evaluate the performance of multi-layer perceptrons (MLPs) with various configurations (Table <ref>). MLPs, as neural network-based models, offer a different approach to classification tasks compared to traditional machine learning algorithms. While machine learning algorithms rely on statistical techniques and decision rules, MLPs utilize artificial neural networks with multiple layers of interconnected nodes. These layers allow MLPs to learn and capture intricate patterns and relationships in the data, making them suitable for complex classification problems. They exhibit strong performance across several common metrics, including Accuracy, Recall, Precision, and F1 score. These metrics indicate the models' ability to effectively capture patterns and make accurate predictions. However, a closer examination of specific metrics reveals variations among the configurations, highlighting their unique strengths and areas of specialization. MLP_5 utilizes a single hidden layer with 50 neurons and achieves remarkable AUC and MCC scores. This configuration demonstrates the ability to effectively capture and learn complex patterns in the data, leading to accurate predictions. MLP_7 stands out with its architecture consisting of a single hidden layer with 200 neurons. It showcases excellent AUC and Kappa scores. This configuration's wider hidden layer allows it to capture more intricate relationships within the data, resulting in robust classification performance. MLP_4, featuring a deeper architecture with four hidden layers (100, 70, 50, and 20 neurons), demonstrates strong AUC and also performs well in Kappa, MCC and F1 Macro. The multi-layer structure enables it to capture intricate data representations, contributing to its classification effectiveness. When compared to the machine learning algorithms, some MLPs exhibit competitive performance across multiple metrics. Their strong classification capabilities make them viable alternatives for solving complex classification tasks. However, it is important to consider the specific characteristics of the dataset, computational resources, and interpretability requirements when selecting the most appropriate algorithm or MLP configuration. Depending on the specific context and constraints of the problem, a careful assessment is necessary to determine the optimal choice. §.§ Performance and Environmental Impact of Machine Learning Models During the training and cross-validation phases, the Decision Tree and Random Forest Classifiers stood out for their exceptional performance, boasting F1 Macro scores of 0.9101 and 0.9335, respectively. These models completed these phases with high efficiency, minimizing time taken, CO2 emissions, and energy consumption. The Extreme Gradient Boosting model also achieved a notable F1 Macro score of 0.9235, albeit with greater time and energy expenditure. In contrast, the K Neighbors Classifier consumed significant resources without delivering high performance, reaching only a 0.6163 F1 Macro score. 
Naive Bayes and Quadratic Discriminant Analysis models also lagged in performance, failing to offer substantial improvements in time efficiency, CO2 equivalent, or energy consumption. In the inference and cross-validation phase, the trends generally remained consistent. The Decision Tree and Random Forest Classifiers again excelled in performance and efficiency. The K Neighbors Classifier, however, continued to struggle, consuming the most resources without substantial improvement in the F1 Macro score. From the findings, a trade-off between performance and environmental impact is apparent. The Decision Tree and Random Forest Classifiers demonstrate the possibility of high performance with reduced environmental impact. Comparing the single tree Decision Tree Classifier with the ensemble-based Random Forest Classifier reveals the balance between performance and computational cost. The latter, with its enhanced accuracy from multiple decision trees, outperforms the former but at higher computational requirements. Particularly during inference, where speed is often crucial, the Decision Tree model is inherently more efficient due to its simpler structure. Notably, its lower computational demand results in less energy use and CO2 emissions, making it a more eco-friendly choice. Future work can explore enhancing efficiency in high-performing models and improving performance in energy-efficient ones. §.§ Performance and Environmental Impact of MLPs During the training and cross-validation phase for MLPs, we observed an interesting balance between model complexity, performance, and resource efficiency. Simpler configurations, such as MLP_5 with 50 nodes in a single layer, were more time-efficient and consumed less energy across both CPU and GPU platforms. However, more complex configurations, like MLP_8 with two layers and 300 nodes, delivered higher F1 Macro scores, despite consuming more resources. This points to a clear trade-off between performance and resource efficiency during training. In the inference phase, similar trends persisted. Complex configurations like MLP_4 and MLP_7 exhibited higher F1 Macro scores, albeit at the cost of more time and energy. Conversely, simpler configurations, despite their lower F1 scores, failed to show significant improvement in time efficiency or energy consumption. These findings again highlight the trade-off between performance and complexity in MLPs, pointing towards the need for optimization strategies to balance performance and efficiency. When examining the effects of quantization, significant efficiency improvements were noted in the quantized (int8) MLP5 model on CPU, with reduced training time, CO2 emissions, and energy consumption compared to the original (fp32) model. Notably, this did not compromise performance, as F1 Macro scores and validation loss values remained consistent across all configurations. This demonstrates the potential of quantization for enhancing the efficiency of machine learning models without sacrificing performance. Future investigations could further explore the impact of quantization on various MLP configurations and tasks. §.§ Comparative Analysis and Key Observations The performance comparison between classic machine learning models and MLPs yields some interesting insights. Among the classic machine learning models, the Decision Tree and Random Forest classifiers proved to be the most effective, achieving high F1 Macro scores. 
Their superiority could be attributed to their inherent ability to handle both linear and non-linear data, making them versatile and efficient across diverse data sets. On the other hand, the MLPs showed potential for even higher performance, but the results were contingent on the configurations used. More complex configurations, despite being resource-intensive, delivered superior F1 Macro scores, indicating a trade-off between model complexity and performance. It is worth noting that while these configurations required more computational resources, they did not necessarily compromise on performance, maintaining F1 Macro scores comparable to those of simpler configurations. In terms of environmental impact, the Decision Tree and Random Forest classifiers were observed to be the most efficient, with low time, CO2 equivalent, and energy consumption values, while maintaining high performance scores. This efficiency could be linked to their inherent simplicity and lower computational complexity compared to other models like the K Neighbors Classifier and Extreme Gradient Boosting. By contrast, MLPs generally consumed more time and energy, especially when more complex configurations were used. However, the use of quantization significantly mitigated these effects, maintaining performance while reducing time, CO2 emissions, and energy consumption on the CPU platform. Interestingly, despite leveraging GPU capabilities, the MLPs did not show a significant reduction in their environmental impact, suggesting an area for future exploration in the optimization of MLP configurations for GPU use. Detailed visualizations of these trade-offs, presented in the appendix <ref>, provide further insights into how different models balance these factors. The Pareto front, computed for both the two-dimensional scenario (Figure <ref>) and the multi-objective scenario (Figure <ref>), revealed that the Decision Tree Classifier, Random Forest Classifier, Ridge Classifier, SVM Linear Kernel, and Linear Discriminant Analysis models presented the best compromise between high performance and low environmental impact. These models, found in the Pareto set, were not dominated by any other model, implying that no other model outperformed them across all objectives simultaneously. However, the "best" model choice from the Pareto set depends on specific priorities and constraints. For instance, a model that minimizes CO2 emissions might be preferred in contexts prioritizing environmental concerns, even at the cost of slight compromises on performance or longer computation time. Thus, the Pareto front presents a range of optimal solutions, each aligning with different acceptable trade-offs between performance and environmental impact. Future work could focus on further understanding these trade-offs and exploring strategies to extend the Pareto front, seeking higher performance with lower environmental impact. § CONCLUSION This research aimed to assess the performance and environmental impact of various machine learning algorithms and Multilayer Perceptron (MLP) configurations. Performance was measured through the F1 Macro score, and environmental impact was gauged through the time duration, CO2 equivalent, and energy consumed during training, cross-validation, and inference. Decision Trees and Random Forests emerged as top performers among the traditional machine learning algorithms with F1 Macro scores of 0.9101 and 0.9335, respectively. 
Additionally, these models demonstrated exceptional efficiency with regard to time, CO2 equivalent, and energy consumption. On the other hand, the K Neighbors Classifier consumed considerable resources but failed to achieve a high F1 Macro score, demonstrating the mismatch between resource consumption and performance. Although Extreme Gradient Boosting achieved a high F1 Macro score, it proved to be more resource-intensive than Decision Trees and Random Forests. As evidenced by their lower F1 Macro scores, the Naive Bayes and Quadratic Discriminant Analysis models lagged in performance. In the case of MLPs, we observed a clear trade-off between performance and model complexity. In training and cross-validation, simpler configurations, such as MLP_5, excelled in terms of time and energy efficiency. Meanwhile, more complex configurations like MLP_8, despite being resource-intensive, delivered higher F1 Macro scores, underscoring that complexity can enhance performance. Similarly, the inference phase mirrored these findings, with MLP_4 and MLP_7 outperforming others on the CPU and GPU, respectively. According to the Pareto analysis, there is a trade-off between the environmental impact and the performance of the model. It was found that the Decision Tree Classifier, Random Forest Classifier, Ridge Classifier, SVM Linear Kernel, and Linear Discriminant Analysis were the optimal solutions, striking the best balance between high performance and low environmental impact. It is important to keep in mind that the selection of the 'best' model will be determined by the specific context and priorities, as performance and environmental concerns may be weighted differently. It should be noted that while traditional algorithms such as Decision Trees and Random Forests have consistently displayed high performance, optimized MLP configurations have demonstrated even greater potential. Nevertheless, these enhanced results came at the cost of increased resource consumption, particularly during training. In contrast, leveraging GPUs over CPUs did not significantly enhance MLPs' performance, suggesting that GPU utilization can be optimized in future studies. The present study illustrates the importance of maintaining a delicate balance between model performance, complexity, and environmental impact, thus shedding light on future considerations for environmentally-conscious machine learning model optimizations. § ACKNOWLEDGMENTS The Detecta project is co-financed by the Ministry of Industry, Trade and Tourism of Spain through the line of support for Innovative Business Clusters, in its 2022 call for proposals under the Recovery, Transformation and Resilience Plan. § APPENDIX
http://arxiv.org/abs/2307.03234v1
20230706180017
Evolution and Final Fates of a Rotating 25 M$_{\odot}$ Pop III star
[ "Amar Aryan", "Shashi Bhushan Pandey", "Rahul Gupta", "Sugriva Nath Tiwari", "Amit Kumar Ror" ]
astro-ph.HE
[ "astro-ph.HE" ]
In this proceeding, we present the 1-dimensional stellar evolution of two rotating population III (Pop III) star models, each having a mass of 25 M_⊙ at the zero-age main-sequence (ZAMS). The slowly rotating model has an initial angular rotational velocity of 10 per cent of the critical angular rotational velocity. In contrast, the rapidly rotating model has an initial angular rotational velocity of 70 per cent of the critical angular rotational velocity. As an effect of rotationally enhanced mixing, we find that the rapidly rotating model suffers an enormous mass loss due to the deposition of a significant amount of CNO elements toward the surface after the main-sequence phase. We also display the simulated light curves as these models explode into core-collapse supernovae (CCSNe). § INTRODUCTION Pop III stars refer to the first generation of stars, a captivating and enigmatic class of astrophysical objects thought to have been born in the early Universe before the formation of any other stars. These primordial stars are believed to have formed from pristine gas composed almost entirely of Hydrogen and Helium, lacking any heavier elements <cit.>. Because of their unique composition and the lack of any coolant in the early Universe, Pop III stars are thought to have been much more massive than stars in the later generations <cit.>. They played a crucial role in shaping the Universe, as their intense radiation ionized the surrounding gas and initiated the process of cosmic reionization <cit.>. While no Population III stars have been directly observed yet, their existence is supported by theoretical models <cit.> and indirect evidence <cit.>. Studies of these first stellar objects are key to unveiling the mysteries of the early Universe and are also very important for understanding the origins of the later Pop II and Pop I stars. There are multiple studies aiming to understand the possible existence, evolution, and final fates of Pop III stars <cit.>. In this work, we investigate the cause of the enormous mass loss in the rapidly rotating model as it passes through various stages of its evolution. We also present the hydrodynamic simulations of synthetic explosions of the models at the onset of core collapse. We have divided this proceeding into four sections. We present a brief overview of the literature in Section <ref>. The numerical settings of the models used to perform their stellar evolution are presented in Section <ref>, while the methods used to simulate the synthetic explosions are discussed in Section <ref>. Finally, we present our results and conclusions in Section <ref>. § EVOLUTION OF THE MODELS UP TO THE PRE-SN STAGE To perform the stellar evolution of the models, we utilise the modules for experiments in stellar astrophysics (MESA) with version number mesa-r21.12.1 <cit.>. In the present work, we take two 25 M_⊙ ZAMS star models with zero metallicity and perform their 1-dimensional stellar evolution until they reach the onset of the core-collapse phase.
The models are marked as having reached the onset of the core-collapse stage if any location within the star model reaches an infall velocity of 500 km s^-1. The MESA settings for the calculations presented here are similar to the ones used in <cit.> and closely follow <cit.>. However, we list a few critical changes. We have performed the stellar evolution of two models with initial rotations (Ω/Ω_ crit) of 0.1 and 0.7, respectively. In this work, we have also investigated the effect of changing the wind scaling factor (η) from 0.5 to 1.0. The models presented in this work are named such that they encode the initial ZAMS mass, metallicity, scaling factor, and rotation. The slowly rotating model, named M25_Z0.00_η1.0_Rot0.1, indicates a star with a ZAMS mass of 25 M_⊙, zero metallicity, η equal to 1.0, and an initial rotation of 0.1. Similarly, the rapidly rotating model, named M25_Z0.00_η1.0_Rot0.7, indicates a star with a ZAMS mass of 25 M_⊙, zero metallicity, η equal to 1.0, and an initial rotation of 0.7. The left-hand panel of Figure <ref> shows the variation of the core-temperature (T_ core) vs core-density (ρ_ core) curve as the models evolve from the ZAMS to the core-collapse phase. The Pre-SN parameters are listed in Table <ref>. The right-hand panel of Figure <ref> shows the Pre-SN radii of the two models. The rapidly rotating model has undergone significant mass loss, resulting in a very small Pre-SN radius. In contrast, the slowly rotating model has retained most of its outer Hydrogen-envelope. Another effect evident as a result of increasing η from 0.5 to 1.0 is an increased amount of lost mass in the rapidly rotating model considered here. Although the M25_Z0.00_Rot0.8 model from <cit.> has a higher initial rotation than the M25_Z0.00_η1.0_Rot0.7 model here, the latter has lost much more mass than the former due to the increased η. Additionally, we find that the rapidly rotating model has suffered an enormous mass loss compared to the slowly rotating model. We have performed a diagnosis to explain this enormous mass loss. The four panels of Figure <ref> display the mass fractions of several elements after the main-sequence phase. As the model progresses on the HR diagram beyond the main sequence, the fractions of CNO elements toward the surface increase, dramatically enhancing the surface metallicity. The increased surface metallicity, in turn, enhances the mass loss <cit.>. § EXPLOSION OF THE PRE-SN MODELS Once the models reach the core-collapse stage, we simulate their synthetic explosions utilising SNEC <cit.>. Most of the SNEC settings are similar to those in <cit.>. Here, we mention the important modifications. We choose the "Piston_Explosion" option to simulate the synthetic explosion with a set of 700 grid cells using SNEC. For CCSNe, the "Piston_Explosion" option might be the more realistic one, since these SNe are thought to arise from the shock wave bouncing back from the neutron star. On the other hand, the "Thermal_Bomb" type is better suited for thermonuclear explosions, like Type Ia SNe. As we utilise the "Piston_Explosion" option in SNEC, the first two computational cells in our model's profile are subjected to an outward velocity boost (in cm s^-1) provided by the "piston_vel" control. We choose "piston_vel = 4d9" for both models. The velocity boost lasts for 0.01 s. For each model considered in this work, we first excise the mass of the final remnant (M_ c), which is nearly the mass of the inert Iron-core.
Additionally, we adopt 0.05 M_⊙ of synthesised ^56Ni for both models. This quantity of synthesised ^56Ni is distributed between the excised central remnant mass cut and the preferred mass coordinate, which lies in close proximity to the outer surface of the models. The difference between the Pre-SN mass (M_ Pre-SN) and M_ c is the corresponding ejecta mass for each model. We present the detailed explosion parameters in Table <ref>. The left panel of Figure <ref> shows the bolometric luminosity light curves for the two models. The slowly rotating model has retained most of its outer Hydrogen-envelope; thus, its explosion results in a Hydrogen-rich SN. Its bolometric light curve closely resembles the light curves of Type IIP SNe. In contrast, the rapidly rotating model has suffered extensive mass loss and thus explodes as a Hydrogen-stripped SN. The bolometric light curve from the rapidly rotating model mimics the light curves of Hydrogen-deficient Type Ib/c SNe. The right-hand panel of Figure <ref> displays the corresponding photospheric velocity evolution for the two models. The photospheric velocities of the slowly rotating model resemble those shown by Type IIP SNe, while the very high initial photospheric velocities of the rapidly rotating model resemble those of stripped-envelope SNe. § RESULTS AND CONCLUSIONS In this proceeding, we performed the 1-dimensional stellar evolution of two rotating Pop III models until they reached the onset of core collapse, utilising MESA. Further, we performed hydrodynamic simulations of their synthetic explosions by supplying the models at the onset of core collapse, in the appropriate form, as input to SNEC. We enumerate our findings below: * We explicitly explored the cause of the extensive mass loss in our rapidly rotating model by investigating the mass fraction plots at different stages after the main-sequence phase. We find that the increase in mass loss rates can be attributed to the dramatic increase in surface metallicity. * We found that increasing η from 0.5 to 1.0 also played an essential role in increasing the mass loss. * Unlike <cit.>, in this work we simulated a piston-driven explosion. However, we hardly see much difference in our results. We thank the anonymous referee for providing constructive comments. We acknowledge the Belgo-Indian Network for Astronomy and astrophysics (BINA) consortium approved by the International Division, Department of Science and Technology (DST, Govt. of India; DST/INT/BELG/P-09/2017) and the Belgian Federal Science Policy Office (BELSPO, Govt. of Belgium; BL/33/IN12), for allowing us to present our work in the form of a proceeding. We duly acknowledge the extensive utilisation of ARIES's High-Performance Computing (HPC) facility. A.A. duly acknowledges the funds and support provided by the Council of Scientific & Industrial Research (CSIR), India, under file no. 09/948(0003)/2020-EMR-I. SBP and RG duly acknowledge the funds and support furnished by the Indian Space Research Organisation (ISRO) under the AstroSat archival Data utilisation grant DS_2B-13013(2)/1/2021-Sec.2.
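As an illustration of the stopping criterion used for the stellar evolution runs, the following minimal sketch checks whether a model snapshot has reached the onset of core collapse, i.e., whether any zone infalls faster than 500 km s^-1. It is a post-processing sketch only: the synthetic velocity array is a hypothetical placeholder, and an actual MESA profile would be read with a dedicated reader.

import numpy as np

INFALL_THRESHOLD_CM_S = 5.0e7  # 500 km s^-1 expressed in cm s^-1 (cgs)

def reached_core_collapse(radial_velocity_cm_s):
    # radial_velocity_cm_s: 1-D array of zone velocities; negative values denote infall.
    return bool(np.any(radial_velocity_cm_s <= -INFALL_THRESHOLD_CM_S))

# Hypothetical usage with a synthetic velocity profile (placeholder data).
velocity = np.linspace(2.0e8, -6.0e7, 1200)
print("Onset of core collapse reached:", reached_core_collapse(velocity))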
http://arxiv.org/abs/2307.01953v1
20230704230657
Toward more frugal models for functional cerebral networks automatic recognition with resting-state fMRI
[ "Lukman Ismaila", "Pejman Rasti", "Jean-Michel Lemée", "David Rousseau" ]
cs.CV
[ "cs.CV" ]
§ INTRODUCTION Convolutional neural networks (CNNs) are powerful tools for computer vision tasks. CNNs are, however, very demanding in terms of energy, data and annotation due to the large number of parameters to be tuned during their training. These limitations are especially important in medical imaging, where the constitution of large cohorts of unhealthy patients can be a bottleneck, as frequently observed for rare diseases such as brain tumors. Recently, we have shown the possibility of circumventing this limitation by the use of transfer learning from self-supervised training on healthy data to unhealthy data <cit.>. We used small data in our experiments, and the approach opens the possibility of scaling up when a larger model is trained from additionally acquired data. This was obtained for the automatic recognition of functional cerebral networks via resting-state functional magnetic resonance imaging (rs-fMRI) <cit.> for patients with brain tumors. The CNN architecture proposed by Ismaila for the classification of functional brain networks from 3D fMRI images has a large number of trainable parameters despite the small data size <cit.>, which makes the model complex and prone to overfitting. In this work, we test possible ways to simplify deep learning models by reducing the overall number of parameters. To this purpose, we propose to compare a basic CNN method with the approach depicted in <ref>, building on a recent work by Gousia, which highlighted the benefits of graph encoding in optimizing CNN model parameters, especially in medical imaging <cit.>. We investigate various ways of encoding the rs-fMRI 3D volume data in more compact fashions and systematically compare our observations with the performance obtained in <cit.>. This effort represents only an initial attempt towards more efficient encoding of our brain volume images, and it also opens the possibility of scaling up when a larger model is trained from additionally acquired data. § DATABASE fMRI brain network activation image data of 81 healthy subjects and 55 unhealthy patients were collected. Regular volunteers provided the healthy data, while patients with brain tumors, for whom a binary mask indicates the lesion region in the brain, constitute the unhealthy data. This analysis was done on separate components, which creates brain maps of the regions with synchronous blood oxygen level dependent (BOLD) signal activity. In the data acquisition stage, we extracted the intrinsic connectivity networks (ICNs) by using methods that combine the information of both the temporal and spatial dimensions, such as independent component analysis. The extracted signals represent the neuro-anatomical basis for the functional networks in the brain <cit.>. The statistical parametric mapping (SPM) anatomy toolbox for Matlab was used to generate the 3D brain volume images from the initial spatio-temporal fMRI signals.
Among the 55 ICNs processed for each patient, 7 of these signals were recognized manually by experts to be biological networks of the brain, namely the Default Mode Network (DMN), Language Network (LANG), Right Fronto-parietal Control Network (rFPCN), Left Fronto-parietal Control Network (lFPCN), Salience Network (SAL), Dorsal Attention Network (DAN) and Ventral Attention Network (VAN). The annotated images were used in two versions: full images (connectivity maps) and corresponding thresholded images. § SPATIAL DIMENSION REDUCTION One may wonder if the entire 3D volume in gray levels is fully informative for automatic recognition of the functional cerebral networks. Several dimension reduction approaches can be envisioned. From the acquired resting-state fMRI brain volumes (42 px×51 px×34 channels), we normalized the pixel intensity range to 0-1 and computed several reduced versions of these raw data, as depicted in <ref>. First, one can reduce the number of spatial dimensions via a projection. We produced a 2D gray-level image by performing a Mean operation on pixel intensities across the axial (A) plane, as shown in <ref>. Secondly, to understand whether the intensity of the activation map holds discriminative information, we created 2D binary images by performing an OR operation with respect to the sagittal, coronal and axial (SCA) planes respectively, which were further stacked together to provide an SCA binary stack image. Also, we performed another OR operation across the axial plane to obtain a 3D binary volume image, which overall resulted in 4 variants of generated images, as illustrated in <ref>. Lastly, we tested whether the full voxel resolution is necessary for the classification of the functional networks, which are formed by large structures rather than fine details. To this purpose, segmentation of the gray-level activation map was performed using the SLIC algorithm <cit.>. We processed the 2D segmented labels to obtain a superpixel image, while the 3D segmented labels provided the supervoxel image, as shown in <ref>. Furthermore, we averaged (smoothed) the pixel intensities within each segment of our superpixel and supervoxel images. This step allows us to evaluate the integrity of the functional brain network features, which was done by training a CNN model for the classification of the 7 distinct functional brain networks using the generated superpixel/supervoxel images. When using the dimension reduction from 3D to 2D or from gray level to binary images, we observe a performance drop, as provided in <ref>. This suggests that there is information in the gray-level distribution and the 3D shape of the network which is not preserved by the simple spatial dimension reductions tested. By contrast, the values in <ref> represent the functional brain network classification results with a CNN model using pixel, superpixel and supervoxel data respectively. Interestingly, the loss of performance is very limited when one reduces the gray levels to the average value of the pixels inside a supervoxel or even a superpixel. Therefore, despite the spatial dimension reductions tested, the reduction of the number of parameters in the models is so far very limited or negligible. To produce this reduction of the model, we proposed to encode the most promising dimension reduction technique (supervoxels) in a compact way, as described in the next section.
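As a minimal sketch of the superpixel/supervoxel reduction described in this section, the snippet below segments a normalized activation volume with SLIC and replaces each segment by its mean intensity. It assumes a recent scikit-image version and uses a synthetic volume in place of the real 42 px×51 px×34 data; the number of segments and the compactness are illustrative choices, not the values used in the study.

import numpy as np
from skimage.segmentation import slic

# Synthetic stand-in for a normalized rs-fMRI activation volume (42 x 51 x 34).
volume = np.random.rand(42, 51, 34).astype(np.float32)

# SLIC segmentation of the 3D gray-level volume into supervoxels.
# channel_axis=None tells scikit-image to treat the array as single-channel 3D data.
labels = slic(volume, n_segments=500, compactness=0.1, channel_axis=None)

# Replace every voxel by the mean intensity of its supervoxel (the smoothing step).
smoothed = np.zeros_like(volume)
for region in np.unique(labels):
    mask = labels == region
    smoothed[mask] = volume[mask].mean()

print("number of supervoxels:", len(np.unique(labels)))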
§ GRAPH ENCODING To further benefit from the spatial dimension reduction of the previous section, we investigate the possibility of reducing the complexity of the associated neural network models with a limited reduction of performance on functional cerebral network recognition. To this purpose, we consider encoding our supervoxelized images into graphs. Commonly in graphs, interacting nodes are connected by edges whose weights can be defined by either temporal connections or anatomical junctions. Graphs are naturally good at capturing the relational organization between entities, which makes them a great option for representing the 3D capture of voxelwise signals mapped to a specific region of the brain <cit.>. Therefore, a possibly efficient representation of these fMRI network activations in images can be tested using a graph relation network, which connects nodes of related regions via graph edges. To obtain a graph representation of our supervoxel images, we connected neighboring segmented regions through an edge, denoted the center of each region as a graph node, and encoded segment-wise attributes as node spatial embeddings. This step was repeated until all neighboring nodes were traversed (see <ref>). We implemented this approach using the region adjacency graph technique <cit.>, which simply represents each segmented region as a graph node and the link between two touching regions as an edge, using the provided labels of the segmented regions <cit.>. From the relative spatial coordinates of each superpixel, extracted via the Cartesian function, we computed the node positions as edge attributes (pos[i] - pos[j]) via a k-NN graph transformation. The number of supervoxels was fixed empirically based on the typical size of the activation spots. The graphs resulting from the encoding stage were observed to be structurally indistinguishable from the connectivity point of view. The discriminative information is expected to reside in the distribution of edge values, which differs from one structural network map to another. We implemented our method using SplineCNN, a graph neural network which uses a novel type of spline-based convolutional layer for learning <cit.>. This state-of-the-art GNN is suitable for image-based graph classification tasks because it allows the capture of local patterns using the spatial relationships between graph nodes before performing global graph pooling. We trained a model with 2 convolutional layers and 2 fully connected output layers, with 7 classes in the output layer and a softmax activation. Best results were obtained by training with a two-step learning rate schedule of 1e-3 for epochs 0-200 and 1e-5 for epochs 200-500, with early stopping. For a fair comparison with the best result obtained with the CNN model in <cit.>, we performed transfer learning during the training of the CNN and GNN models, using an 80%-10%-10% train-validation-test data split, as well as early stopping with patience set to 10 misses. The performance reported in <ref> shows the recorded results of fMRI functional network classification using this transfer learning strategy. Brute transfer indicates the strategy of training directly on healthy data and testing on unhealthy data for both the CNN and GNN models.
In this cohort, results were compared with values from training and testing on unhealthy data using the CNN and GNN models, which provided 1^st and 2^nd baseline values of 0.75 ± 0.01 and 0.64 ± 0.03 respectively, while 0.78 ± 0.01 and 0.70 ± 0.01 were recorded in the transfer learning approach with the CNN and GNN respectively. As a consequence, we demonstrate the possibility of compressing the number of model parameters by a factor of 26 after supervoxelization and graph encoding, with a reduction in performance of only 8%. § CONCLUSION In this study, we investigated ways to reduce the complexity of end-to-end machine learning models based on convolutional neural networks for the automatic recognition of functional cerebral networks via resting-state fMRI data. A compaction of the activation maps into superpixels or supervoxels shows limited impact on the classification performance. We emphasize the anticipated influence of our 3D multi-channel images on the number of model parameters, which motivated the exploration of dimension reduction techniques before introducing the graph encoding technique. Model evaluation based on spatial dimension reduction showed that this step alone has only a minimal influence on the number of model parameters. However, this stage was important towards a more efficient data encoding (graph structure), which was later shown to reduce the number of model parameters significantly. Our initial encoding effort produces a compression by a factor of 26× with an associated reduction in performance of only 8%. The effort to reduce the complexity of the models was concentrated on the encoding approach of our fMRI data. It would naturally be interesting to couple such efforts with investigations of the architecture of the models <cit.>.
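To make the graph encoding described above concrete, the sketch below builds a region adjacency graph from supervoxel labels and passes it through a small SplineConv-based classifier. It assumes PyTorch Geometric is available; the single mean-intensity node feature, the layer sizes, and the normalisation of the relative positions are illustrative assumptions rather than the exact configuration used in this study.

import numpy as np
import torch
from torch_geometric.data import Data
from torch_geometric.nn import SplineConv, global_mean_pool

def volume_to_graph(volume, labels):
    # One node per supervoxel, edges between touching regions,
    # mean intensity as node feature, relative positions as edge attributes.
    regions = np.unique(labels)
    index = {r: i for i, r in enumerate(regions)}
    feats = np.array([[volume[labels == r].mean()] for r in regions], dtype=np.float32)
    pos = np.array([np.argwhere(labels == r).mean(axis=0) for r in regions], dtype=np.float32)

    edges = set()
    for axis in range(labels.ndim):  # find touching regions along each axis
        a = np.take(labels, range(labels.shape[axis] - 1), axis=axis)
        b = np.take(labels, range(1, labels.shape[axis]), axis=axis)
        for u, v in zip(a[a != b], b[a != b]):
            edges.add((index[u], index[v]))
            edges.add((index[v], index[u]))
    edge_index = torch.tensor(sorted(edges), dtype=torch.long).t().contiguous()

    src, dst = edge_index
    rel = torch.from_numpy(pos)[dst] - torch.from_numpy(pos)[src]   # pos[i] - pos[j]
    rel = (rel - rel.min()) / (rel.max() - rel.min() + 1e-8)        # pseudo-coordinates in [0, 1]
    return Data(x=torch.from_numpy(feats), edge_index=edge_index, edge_attr=rel)

class GraphClassifier(torch.nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.conv1 = SplineConv(1, 32, dim=3, kernel_size=5)
        self.conv2 = SplineConv(32, 64, dim=3, kernel_size=5)
        self.out = torch.nn.Linear(64, num_classes)

    def forward(self, data):
        x = torch.relu(self.conv1(data.x, data.edge_index, data.edge_attr))
        x = torch.relu(self.conv2(x, data.edge_index, data.edge_attr))
        batch = torch.zeros(x.size(0), dtype=torch.long)  # a single graph
        return self.out(global_mean_pool(x, batch))

# Example usage (with the SLIC output of the previous sketch):
# graph = volume_to_graph(smoothed, labels); logits = GraphClassifier()(graph)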
http://arxiv.org/abs/2307.02204v1
20230705110300
Does entanglement enhance single-molecule pulsed biphoton spectroscopy?
[ "Aiman Khan", "Francesco Albarelli", "Animesh Datta" ]
quant-ph
[ "quant-ph", "physics.atom-ph" ]
APS/123-QED [email protected] Department of Physics, University of Warwick, Coventry, CV4 7AL, United Kingdom Dipartimento di Fisica “Aldo Pontremoli”, Università degli Studi di Milano, via Celoria 16, 20133 Milan, Italy Istituto Nazionale di Fisica Nucleare, Sezione di Milano, via Celoria 16, 20133 Milan, Italy [email protected] Department of Physics, University of Warwick, Coventry, CV4 7AL, United Kingdom It depends. For a single molecule interacting with one mode of a biphoton probe, we show that the spectroscopic information has three contributions, only one of which is a genuine two-photon contribution. When all the scattered light can be measured, solely this contribution exists and can be fully extracted using unentangled measurements. Furthermore, this two-photon contribution can, in principle, be matched by an optimised but unentangled single-photon probe. When the matter system spontaneously emits into inaccessible modes, an advantage due to entanglement can not be ruled out. In practice, time-frequency entanglement does enhance spectroscopic performance of the oft-studied weakly-pumped spontaneous parametric down conversion (PDC) probes. For two-level systems and coupled dimers, more entangled PDC probes yield more spectroscopic information, even in the presence of emission into inaccessible modes. Moreover, simple, unentangled measurements can capture between 60% - 90% of the spectroscopic information. We thus establish that biphoton spectroscopy using source-engineered PDC probes and unentangled measurements can provide tangible quantum enhancement. Our work underscores the intricate role of entanglement in single-molecule spectroscopy using quantum light. Does entanglement enhance single-molecule pulsed biphoton spectroscopy? Animesh Datta August 1, 2023 ======================================================================= § INTRODUCTION Over the last couple of decades, entangled states of quantum light have been explored for their potential use in linear and nonlinear optical spectroscopy. Linear absorption spectroscopy in a `biphoton' setup — employing a single one-photon interaction of the sample with the signal mode <cit.> of an entangled state — has been experimentally performed on glass <cit.>, crystalline <cit.>, and nanoparticle <cit.> samples. These offer a larger coincidence signal-to-noise ratio (SNR) compared to photon-counting measurements of classical light. More recently, coherent non-linear optical spectroscopies using quantum light have been proposed and theoretically studied using a fully quantum mechanical approach <cit.>. Amongst these, two-photon absorption of entangled biphoton states has garnered much attention <cit.>. The supposed improved performance of these quantum spectroscopic methods over classical ones is often attributed to the availability of control variables such as the entanglement time that do not have classical counterparts <cit.>, or more generally to the absence of the usual Fourier limit on joint temporal and spectral resolutions of entangled photons <cit.>. However, the improved SNR in the biphoton experiments may be attributed to the use of coincidence counters with sufficiently small time windows, and not to the inherent quantum correlations of the entangled, non-classical light. 
In fact, Stefanov has mathematically shown that probability distributions of outcomes of uncorrelated measurements on the two spatially distinguishable modes of the outgoing biphoton state can always be mimicked with a `classical' incoming ensemble with identical frequency correlations between the two photons, and no entanglement <cit.>. Entanglement with an additional `idler' mode then seemingly provides no advantage, — or disadvantage, when the subsequent measurements on signal and idler modes are independent. The general question of the role of entanglement in quantum light spectroscopy is more subtle and intricate. It depends on the location—at the input or at the measurement stage, on the model of the matter sample and the light-matter interaction, and on the nature and strength of the coupling to both photonic and phononic environments. It also depends on the type of light used as probe (and consequently the type of entanglement, such as between time-frequency modes, photon numbers, or some other degrees of freedom), as well as on the time of measurement compared to the typical matter system timescale(s), its type, and the detection system used. Finally, it depends on what spectroscopy is defined to be. Mathematically, a larger probability — of absorption, emission, or general quantum measurement outcomes — is not identical to a higher precision of estimating unknown parameter(s) of the model matter system <cit.>. The latter is typically the operational goal of spectroscopy. In this paper, we elucidate the role of time-frequency quantum entanglement in biphoton spectroscopy in the precise estimation of unknown parameters of individual molecules. As illustrated in Figure <ref>, it captures spectroscopic setups in which only one of the spatially distinct travelling modes interacts with the sample, the state of light itself carrying two excitations. Our interaction model is motivated by the coherent coupling of single molecules (with strength Γ) with optical fields. We also include a photonic environment characterised by the coupling strength Γ_⊥ modelling emission into inaccessible modes. Our conclusions are based on the quantum information-theoretic methodology for quantum light spectroscopy developed in our recent paper <cit.>. Of proximate experimental relevance, we establish the functional utility of time-frequency entanglement for the spectroscopy of a two-level system (TLS) and a coupled dimer (CD) using states generated from weakly-pumped spontaneous parametric down conversion (PDC). Such states are typical of spectroscopic techniques employing entangled light <cit.>. Our choice of matter systems also has immediate practical relevance. Under certain symmetries of the dipole operator or isotropic pumping, alkali-metal atomic vapours are approximately TLS <cit.>. The CD is often employed as a minimal model <cit.> for exciton-hopping quantum dynamics in a wide class of light-harvesting complexes such as the 8-site Fenna-Matthews-Olsen (FMO) complex and the 27-site B800-B850 light-harvesting-2 complexes. It is more substantively relevant for the cyanobacterial allophycocyanin (APC) complex <cit.>, as well as conjugated polymers <cit.>. Our main results for single-molecule biphoton spectroscopy are as follows: * At long times, the spectroscopic information is bounded by a sum of three — a two-photon, a one-photon, and a classical, contributions [Eq. (<ref>)]. * Entangled measurements across the signal and idler modes are not necessary to attain the bound in Eq. 
(<ref>) [see Section (<ref>))]. * Time-frequency entanglement of the input probe provides no in-principle advantage when Γ_⊥ = 0 [see Section (<ref>)]. * A class of optimised unentangled measurements always yield more information than any measurements on the signal photon state only [Eq. (<ref>)], i.e., by tracing out the idler mode. * Time-frequency entanglement of PDC probes enhances the spectroscopy of TLS and CD systems. This can be used to engineer PDC sources for practical quantum-enhanced spectroscopy. [See Section (<ref>)]. Our results advance the understanding of single-molecule spectroscopy using entangled light. They pave the path towards capturing experimental scenarios that could be the successors of absorption-based techniques <cit.> or fluorescence-based ones such as single-molecule pump-probe (SM2P) spectroscopy <cit.> using quantum light. To the best of our knowledge, all these single-molecule techniques have employed pulses of classical light, with or without a fixed phase relation between them. Our work also stand apart from other recent ones on “quantum-enhanced” spectroscopy that model the ensemble matter system as an infinite chain of beamsplitters <cit.>. In addition to treating both the light and matter quantum mechanically, our work advances the quantum-information theoretic understanding of spectroscopy using entangled light. It does so by clarifying the spectroscopic potential of entangled measurements across signal and idler modes, and a simpler class using local operations and classical communication (LOCC). These place our work beyond that of Stefanov <cit.>. The paper is structured as follows: in Section <ref>, we recast quantum spectroscopy as a local estimation problem, which will be the basis of our evaluations of fundamental error bounds. In Section <ref>, we describe the fully quantum model of pulsed light-matter interaction in biphoton spectroscopy, as well as define the form of the most general incoming biphoton probes that can be employed in the spectroscopic setup in Figure <ref>. In Section <ref>, we calculate explicit expressions for the quantum Fisher information (QFI) of the outgoing state for arbitrary incoming biphoton states and in the light-matter interaction Hamiltonian for the biphoton setup. In Section <ref>, we will establish that unentangled measurements can attain the QFI, and also identify near-optimal measurements that should be more practical to implement. In Section <ref>, we apply the theoretical machinery of the previous sections to the experimentally viable PDC states, for which we also show that entanglement has a functional usefulness in TLS and CD spectroscopy. Finally, we conclude in Section <ref>. § QUANTUM LIGHT SPECTROSCOPY AS AN ESTIMATION PROBLEM Spectroscopy uses light — quantum or classical, to probe matter systems via travelling field states. Following the light-matter interaction, the probe(s) (P) carry away information about parameters of the matter (M) system. When both the light and the matter are described quantum mechanically, the quantum state of the probe light just before detection, can be generally represented as ρ^P_out(θ) = Tr_ME( 𝒱^θ_lm[ρ^M⊗ρ^P⊗|0^E⟩⟨0^E|] ), where ρ^P is the incoming probe state of light, ρ^M is the initial state of the matter system, 𝒱^θ_lm captures the quantum interaction between light and matter and labelled by the single, real physical parameter θ that is to be estimated. 
|0^E⟩≡⊗_l|0^l⟩ captures all environmental modes (E) of the electromagnetic field that may couple to the matter system M, unoccupied at the start of the experiment. The tracing out of the matter and the environmental modes captures the fact that these parts of the global state are inaccessible to the measurement apparatus. The parametric model corresponding to POVM measurement {M_i:  M_i>0, ∑_i M_i = I^P} on the output state ρ^P_out(θ) is given by the Born rule {p(i|θ) = Tr[ ρ^P_out(θ)M_i ] | θ∈R}. Statistical inference then involves constructing estimators θ̂ = θ̂(X_1,X_2,…,X_n), where {X_i} are random variables corresponding to each of n measured values, independent and identically distributed. The variance of the estimator V(θ|{M_i}) = 𝔼_θ[(θ̂-θ)^2] is the mean square error of the estimator statistic (𝔼_θ denotes expectation with respect to X_1,X_2,…,X_n ∼ p(i|θ)). It is lower-bounded by the Cramér-Rao bound (CRB) <cit.> V(θ|{M_i}) ≥1/n 𝒞(θ|{M_i}), and 𝒞(θ|{M_i}) is the (classical) Fisher information, defined as 𝒞(θ|{M_i}) = Var_θ[ ∂/∂θlog  p(i|θ) ] = -𝔼_θ[ ∂^2/∂θ^2log  p(i|θ) ]. The model, and therefore the optimal estimators themselves, depend on the POVM {M_i}. A stronger and more fundamental bound on the precision of estimating θ can be obtained by maximising the Fisher information 𝒞(θ|{M_i}) over all possible measurements {M_i} allowed by the laws of quantum mechanics. This is referred to as the quantum Cramér-Rao bound (QCRB) <cit.> V(θ|{M_i}) ≥1/n 𝒞(θ|{M_i})≥1/n 𝒬(θ;ρ^P_out(θ)), where 𝒬(θ;ρ^P_out(θ)) is the quantum Fisher information (QFI) corresponding to the parameter θ in the outgoing state ρ^P_out(θ), 𝒬(θ;ρ^P_out(θ)) = Tr ( ρ^P_out(θ) L_θ^2 ) ≥𝒞 (θ|{M_i}), with the self-adjoint symmetric logarithmic derivative (SLD) operators defined via L_θ ρ^P_out(θ) + ρ^P_out(θ) L_θ = 2 ∂ρ^P_out(θ)/∂θ . For the estimation of a single parameter θ, the corresponding QCRB can be saturated by the projective measurement corresponding to eigenvectors of the SLD operator L_θ <cit.>. For rank-deficient ρ^P_out(θ), however, this is only a necessary condition and eigenvectors of the SLD operator are only one of many QCRB-saturating POVMs <cit.>. Indeed, for pure states ρ^P_out(θ) = |ψ_θ⟩⟨ψ_θ|, the SLD operator has the simpler form L_θ = |∂_θψ_θ⟩⟨ψ_θ| + |ψ_θ⟩⟨∂_θψ_θ| and the QFI is 𝒬(θ;ρ^P_out(θ)) = 4 ( ⟨∂_θψ_θ|∂_θψ_θ⟩ - |⟨ψ_θ|∂_θψ_θ⟩|^2 ). For a general mixed state expressed as its spectral decomposition, the QFI is <cit.> 𝒬(θ;∑_n p_n |ψ_n⟩⟨ψ_n|) = ∑_n (∂_θ p_n)^2/p_n + ∑_n 4p_n ⟨∂_θψ_n|∂_θψ_n⟩ -∑_m,n 8p_m p_n/(p_m + p_n) |⟨∂_θψ_m|ψ_n⟩|^2 . § LIGHT-MATTER INTERACTION FOR A BIPHOTON PROBE The dynamics of a quantum matter system interacting with quantised light can be described via the Hamiltonian H = H^M + H^F + H^MSE, where H^M corresponds to matter M dynamics only, H^F is the free field Hamiltonian corresponding to the incoming signal (S) and idler (I) modes, as well as the electromagnetic environmental E modes. Each term in H^MSE is of the dipole-field coupling form -d⃗.E⃗, where d⃗ is a transition dipole moment operator for the matter system, and E⃗ is the (total) quantised electric field operator (see Figure <ref>). The dipole coupling term is appropriate for single molecules which are small compared to typical optical wavelengths, thus allowing the dipole approximation <cit.>.
While our arguments can be extended to general molecular systems, we restrict our discussion in this paper to vibrationless P-site Hamiltonians (P=1,2) of the form H^M = ∑_j=1^Pħω_j|j⟩⟨j| + ∑_i≠ j J_ij|i⟩⟨j|, where |j⟩ is the excited level corresponding to the j-th site (with frequency ω_j), and J_ij is the Coulomb dipole-dipole coupling between sites i and j. P=1 corresponds to a two-level system (TLS) Hamiltonian, whereas P=2 corresponds to the coupled dimer (CD) system, composed of two sites that are coupled to each other via the single coupling constant J. The transition dipole operator connecting the ground state of the matter system with the singly-excited manifold (SEM) is of the form d⃗ = ∑_j (μ⃗_jg|g⟩⟨j| + h.c.) where the matrix elements are μ⃗_jg = ⟨ g|μ⃗|j ⟩. The free field Hamiltonian can be decomposed into a countably infinite number of one-dimensional (1-D) electromagnetic fields <cit.>, H^F = ∑_ϵ⃗ ∫ d^3k⃗ ħ c|k⃗| a_ϵ⃗^†(k)a_ϵ⃗(k) = ∑_l  ∫_0^∞ dω ħω a_l^†(ω)a_l(ω), where k⃗ and ϵ⃗ are respectively the wavevector and polarisation indices for the electromagnetic mode, and the subsequent index l labels the resulting 1-D modes. Although the sum over the index l necessarily runs to infinity and subsumes both the incoming signal/idler modes, as well as the environmental E modes, we only need consider, in the description of the light-matter interaction, modes that couple to the matter system M due to H^MSE itself. These modes can be identified using the slowly-varying envelope approximation (SVEA) in the optical domain, where the frequency bandwidth of the incoming field is assumed to be much smaller than the carrier wave frequency B≪ω̅_S. Furthermore, the propagating, incoming beam of the signal S arm can be approximated to be paraxial. 1-D quantisation of the solutions of the classical paraxial equation for the signal arm along the direction of propagation yields <cit.> E⃗_S(t) = iϵ⃗_S 𝒜_S(ω̅_S) ∫_-∞^∞dω_S  a_S(ω_S)e^-iω_S t, where 𝒜_S(ω̅_S) = √(ω̅_S/2ϵ_0 c A ħ) is the collective pulse factor (A is the transverse quantisation area of the signal beam), and ϵ⃗_⃗S⃗ denotes the unit polarisation vector of the signal beam. Note that the emergent Fourier transform of the field operators can be notated as a_S(t) = 1/√(2π) ∫_-∞^∞dω_S  a_S(ω_S)e^-iω_S t. These are known as white-noise operators and are δ-correlated in time as [a_S(t),a_S^†(t')] = δ(t-t'), where δ(t) is the Dirac delta function. Description of the electromagnetic environment E is simplified by the practical fact that they are inaccessible to experiments, and must be considered in terms of their effects on reduced dynamics only. This effect can then be recovered by using a single bosonic degree of freedom (as opposed to the infinitude of environmental spatial modes that the matter system M may decay into), labelled by the `b' operators: E⃗_E(t) = iϵ⃗_E 𝒜_E(ω̅_S) ∫_-∞^∞dω_E  b(ω_E)e^-iω_E t , where 𝒜_E(ω̅_S) is a collective factor characterising the effect of the continuum electromagnetic environment. For a more detailed description, see Ref. <cit.>. In the interaction frame generated by H_0 = ∑_j ħω̅_S|j⟩⟨j| + H^F, where the field Hamiltonian can be taken to include only modes that participate in the interaction H^F = ∫ dω ħω a_S^†(ω)a_S(ω) + ∫ dω ħω b^†(ω)b(ω), the total Hamiltonian of the biphoton setup in Figure <ref> is H(t) = H_I^M - iħ ( √(Γ) L^†⊗ a_S(t)⊗1^I⊗1^E   + √(Γ_⊥) L^†⊗1^S⊗1^I⊗ b(t) - h.c.) 
, where H_I^M = ∑_j ħ(ω_j-ω̅_S) + ∑_j≠ k J_jk|j⟩⟨k|, and 1^S (1^I) is identity operator on the signal (idler) Hilbert space. Further, the collective dipole operators, weighted by the strength of interaction, are √(Γ) L = √(2π) 𝒜_S(ω̅_S)∑_j (ϵ⃗_S.μ⃗_jg)/ħ |g⟩⟨j|, √(Γ_⊥) L = √(2π) 𝒜_E(ω̅_S) ∑_j (ϵ⃗_E.μ⃗_jg)/ħ |g⟩⟨j|. §.§.§ Incoming Probe State of Entangled Light An arbitrary incoming biphoton state in terms of continuous frequency variables ω_S and ω_I, corresponding to signal and idler modes respectively, can be expressed as |Φ_biph⟩ = ∫ dω_S∫ dω_I Φ̃(ω_S,ω_I) a^†_S(ω_S)a_I^†(ω_I)|0⟩ where Φ̃(ω_S,ω_I) is the joint spectral amplitude (JSA) of the entangled state in frequency space, and contains all two-photon correlations. The bivariate JSA function admits a Schmidt decomposition <cit.> Φ̃(ω_S,ω_I) = ∑_n r_n ξ̃_n^S(ω_S)ξ̃_n^I(ω_I), which in turn can be used to express the biphoton state in Eq. (<ref>) in terms of discrete orthonormal Schmidt modes |Φ_biph⟩ = ∑_n r_n a^†_n,Sa^†_n,I|0⟩≡∑_n r_n |ξ_n^S⟩|ξ_n^I⟩, where a^†_n,S = ∫ dω_S ξ̃_n^S(ω_S)a_S^†(ω_S), a^†_n,I = ∫ dω_I ξ̃_n^I(ω_I)a_I^†(ω_I), are Schmidt mode creation operators, and |ξ_n^X⟩ = a^†_n,X|0⟩, X = S,I are the corresponding Schmidt basis kets. We make the additional assumption that the idler spectral amplitude is peaked around a central frequency ω̅_I, so that a_I^†(ω_I)→ a_I^†(ω_I-ω̅_I) yields idler operators peaked around ω_I = 0. This is justified if the biphoton entangled states are produced in physical processes in which (one or more) pump pulse photons, derived from a spectral amplitude distribution centred around some central frequency ω̅_P, are converted into daughter signal (centred around ω̅_S) and idler (centred around ω̅_I) photons. For biphoton states produced in the weak downconversion limit of type-II spontaneous PDC using χ^(2)-nonlinear crystals, conservation of energy dictates that ω̅_P = ω̅_S+ω̅_I. On the other hand, for biphoton states produced in four-wave mixing (FWM) schemes <cit.> that exploit χ^(3)-nonlinearities, conservation of energy dictates (for degenerate schemes) that 2ω̅_P = ω̅_S+ω̅_I. This assumption is distinct from the SVEA, and is motivated by the details of the physical process used to produce the biphoton entangled state. In terms of the re-centred signal and idler field operators, the JSA of the biphoton state in Eq. (<ref>) then correspondingly transforms as Φ̃_biph(ω_S,ω_I)→Φ̃_biph(ω_S-ω̅_S,ω_I-ω̅_I). In the rest of this paper, we assume that the JSA fucntion Φ̃_biph(ω_S,ω_I), as well as photon operators a_S^†(ω_S) and a_I^†(ω_I) to have been appropriately re-centred so that ω̅_S=ω̅_I=0. Positing corresponding white noise operators obtained as Fourier transforms of centred idler operators, a^†_I(t_I) = 1/√(2π)∫ dω_I e^-ω_I t_Ia_I^†(ω_I), we can obtain an equivalent representation of the biphoton entangled state in terms of operators a_S^†(t) and a_I^†(t), |Φ_biph⟩ = ∫ dt_S∫ dt_I Φ_biph(t_S,t_I) a_S^†(t_S) a_I^†(t_I)|0⟩, where the time-axis joint temporal amplitude (JTA) Φ_biph(t_S,t_I) is the two-dimensional Fourier transform of the centred frequency-axis JSA Φ̃(ω_S,ω_I), Φ_biph(t_S,t_I) = 1/2π ∫ dω_S∫ dω_I e^i(ω_S t_S + ω_I t_I) Φ̃_biph(ω_S,ω_I). The JTA Φ_biph(t_S,t_I) then admits an analogous Schmidt decomposition Φ_biph(t_S,t_I) = ∑_n r_n ξ^S_n(t_S)ξ^I_n(t_I), where ξ_n^X(t_X) = 1/√(2π) ∫ dω_X e^iω_X t_X ξ̃^X_n(ω_X), X=S,I, are the time-domain Schmidt basis functions. § QFI FOR A BIPHOTON PROBE We start with the molecule in its ground state |g⟩⟨g| at t=0. 
Then, the joint SI-E state at asymptotically long times t, where max[Γ,Γ_⊥]t≫1, is effected by the completely-positive, trace-preserving (CPTP) map 𝒲_g[ρ^SI⊗|0^E⟩⟨0^E|] =                                                             Tr_M[lim_ t→∞ U(t) |g⟩⟨g|⊗ρ^SI⊗|0^E⟩⟨0^E| U^†(t)], where U(t) = 𝒯[exp(-i/ħ∫_-∞^tdt'H(t')) ] is the unitary propagator corresponding to Eq. (<ref>), and ρ^SI is the signal-idler probe state. At t →∞, the molecule decays back to the ground state |g⟩. Thus, for a pure biphoton input ρ^SI = |Φ_biph⟩⟨Φ_biph|, the transformed state 𝒲_g [|Φ_biph⟩⟨Φ_biph|⊗|0^E⟩⟨0^E|] is also pure. The linearity of the CPTP map 𝒲_g can be employed to obtain the outgoing SI-E state as the piecewise transformation of the Schmidt component wavefunctions on the signal-environment SE subspace, while the idler I components remain unchanged, giving 𝒲_g[∑_n r_n |ξ_n^S⟩|ξ_n^I⟩⊗|0^E⟩] = ∑_n r_n 𝒲_g[|ξ_n^S⟩|ξ_n^I⟩⊗|0^E⟩] =∑_nr_n( |ϕ_n^S⟩|ξ_n^I⟩⊗|0^E⟩ + |0^S⟩|ξ_n^I⟩⊗|π_n^E⟩), where the first term captures the signal photon being emitted into its original mode after absorption, while the second captures the absorbed signal photon being emitted into the environment. Here, the n-th components |ϕ_n^S⟩ = |ξ_n^S⟩ - Γ |ε_n^S⟩, |π_n^E⟩ = -√(ΓΓ_⊥) |ε_n^E⟩ are obtained using the single-mode solutions <cit.> as used in Ref. <cit.>. We have abbreviated the distortion in the n-th Schmidt mode of the signal space as |ε_n^S⟩ = ∫_-∞^∞ dt_1 [ ∫_-∞^t_1dτ f_M(t_1-τ) ξ_n^S(τ)]a_S^†(t_1)|0⟩, where f_M(t) = ⟨ g| L exp[ ( -iH^M_I - Γ+Γ_⊥/2 L^†L )t] L^† |g⟩ is the characteristic response function of the molecule M. Analogous definitions can be made for |ε_n^E⟩. Partial trace over the E subspace yields the outgoing state as the mixture of single photon (in the idler mode) and biphoton states, ρ_biph,out^SI = (1-N) |Φ_biph,out⟩⟨Φ_biph,out| + N|0^S⟩⟨0^S|⊗σ^I where |Φ_biph,out⟩ = 1/√(1-N) ∑_n r_n|ϕ_n^S⟩|ξ_n^I⟩, the normalisation factor being N = ΓΓ_⊥∑_n r_n^2 ⟨ε_n^S|ε_n^S⟩, and σ^I = ΓΓ_⊥/N ∑_mn r_m r_n ⟨ε_n^E | ε_m^E⟩ |ξ_m^I⟩⟨ξ_n^I| is the conditional idler state when the excitation due to the signal is lost to E. Note that the transformed signal states are no longer orthonormal ⟨ϕ_m^S|ϕ^S_n⟩ = δ_mn - ΓΓ_⊥ ⟨ε_m^S|ε_n^S⟩. This is, however, recovered in the limit of perfect coupling so that lim_ Γ_⊥ → 0 ⟨ϕ_m^S|ϕ_n^S⟩ = δ_mn. As σ^SI = |0^S⟩⟨0^S|⊗σ^I has no excitation in the S space, whereas |Φ_biph,out⟩ does, the two contributions Eq. (<ref>) to the mixture live in mutually orthogonal subspaces. Thus, ⟨Φ_biph,out | σ^SI | Φ_biph,out⟩ = 0 yielding the form of the QFI <cit.> of the outgoing state with respect to the Hamiltonian parameter θ as 𝒬(θ; ρ_biph,out^SI) = 𝒞(N,1-N) +                             N𝒬(θ;σ^I) + (1-N) 𝒬(θ; |Φ_biph,out⟩ ), where 𝒞(N,1-N) = N_θ/N(1-N) (with N_θ≡∂_θN) is the Fisher information associated with classical mixing of the |Φ_biph,out⟩ and |0^S⟩⟨0^S|⊗σ^I quantum states. The conditional idler QFI 𝒬(θ;σ^I) can be obtained by solving Eq. (<ref>) for σ^I, and using Eq. (<ref>). The biphoton QFI term can be shown to be (for details, see Appendix <ref>) 𝒬(θ;|Φ_biph,out⟩) = 1/1-N∑_n |r_n|^2⟨∂_θϕ_n^S|∂_θϕ_n^S⟩ - 1/(1-N)^2| ∑_n |r_n|^2⟨ϕ_n^S|∂_θϕ_n^S⟩|^2. Eq. (<ref>) is one of our main results, that the spectroscopic information about the molecule M has three distinct contributions - from the biphoton state whose signal mode is modified by its interaction with M, the one-photon idler state when the absorbed photon is lost to E, and finally the classical mixture of the two. 
In the absence of entanglement, that is a product JSA where r_n=0 ∀ n>1, σ^I = 1^I, and 𝒬(θ;σ^I) = 0. Eq. (<ref>) then reduces to single-photon spectroscopy <cit.>. In the presence of entanglement, the contributions of the three terms in Eq. (<ref>) depend on the relative magnitudes of M-S and M-E coupling. These corresponds to different flavours of experimental setups — in free space scenarios where Γ_⊥≫Γ, the first two terms dominate in Eq. (<ref>) as most of the signal excitation are lost to the E space. In contrast, for geometries engineered such that Γ_⊥≪Γ, the biphoton QFI will be the major contributor as few excitations are lost to E. This is summarised in Table <ref>. §.§ Attaining the QFI in Eq. (<ref>) The three terms in Eq. (<ref>) may be successively saturated in a cascade of mutually commuting measurements on orthogonal subspaces in the SI space as illustrated schematically in Figure <ref> (a). The first term 𝒞(N,1-N) can be attained using quantum non-demolition (QND) photon counting measurement effected by the set of signal projectors {Π^S_0,Π^S_1}, where Π_0^S = |0^S⟩⟨0^S|, and Π_1^S = ∫ dω a^†_S(ω)|0^S⟩⟨0^S|a_S(ω). A QND measurement is advisable as destructive photon counting at this stage can only fetch as much as the information as the classical 𝒞(N,1-N) term, the collapsed photon states carrying no more quantum information. Practically, such QND photon counting has been achieved using either cross-Kerr mapping of photons numbers onto phase shifts of a secondary optical probe <cit.>, or by strongly coupling the photonic state to atoms in cavity electrodynamics that maps photon numbers to atomic phases, which can then be detected using interferemetric techniques <cit.>. The photon counting measurements, non-demolition or not, are effectively absorption measurements, and the magnitude of N can be estimated from these measurement outcomes The second term N𝒬(σ^I) can, in general, be attained by measuring (idler) projectors corresponding to the eigenvectors of the SLD for σ^I. A practical setup that can implement approximately optimal single-photon projectors as mode-resolved photon counting may be achieved using quantum pulse gating (QPG) techniques <cit.> for ultrafast pulses. This involves an incoherent train of pulses coupling with a sufficiently shaped gating pulse in a sum-frequency interaction inside a nonlinear crystal. The shape of the gating pulse determines the mode the incoming pulse is effectively projected on to, presenting at the output as a higher frequency signal than the incoming pulse (see Figure <ref> (b)). The third term in Eq. (<ref>) may, in general require measurements entangled across the signal and idler on the pure state |Φ_biph,out⟩ to be attained. In the next section, we show that an unentangled measurement suffices. § 1-LOCC DETECTION SCHEMES Measurement protocols for multipartite quantum systems can be divided into three classes <cit.> — (a) uncorrelated local measurements (LM) with no classical communication between individual substations, (b) correlated local operations and classical communication (LOCC) where results of local measurement operations may be conveyed back and forth between the various substations using classical bits, and (c) global measurements (GM) which are the most general class of measurements that can be performed on multipartite quantum systems. In terms of their ability to extract quantum information and resource intensiveness of practical implementation <cit.>, LM⊆LOCC⊆GM. 
In our spectroscopic setup, entangled measurements across the signal and the idler would be in GM, but not LM or LOCC. We show that such entangled measurements are not necessary to attain the third term in Eq. (<ref>). In fact, we show that a one-way idler-to-signal LOCC measurement scheme — that we will henceforth refer to as “1-LOCC”, always attains the third term in Eq. (<ref>). For the biphoton setup, the most general 1-LOCC is schematically illustrated in Figure <ref>. In such a detection scheme, the results of local measurement on the idler substation are classically communicated on to the signal substation, where then measurement operators for local detection are chosen accordingly[Signal-to-idler 1-LOCC detection schemes can be constructed in similar fashion, but with some important differences. See Appendix <ref>. Signal-to-idler schemes may be more challenging to implement practically as the preparation step must necessarily follow the light-matter interaction, outcomes of which must subsequently be communicated to the idler substation.]. This is another of our main results. Experimentally, LOCC operations on continuous-variable (CV) time-frequency entangled states have been successfully implemented as part of CV teleportation of light states <cit.>. Our 1-LOCC detection scheme is thus a potentially attractive class of measurements for biphoton spectroscopy, and must be contrasted against interferometric quantum spectroscopies that propose global measurements by bringing together the two photons in linear <cit.> or non-linear interferometers <cit.> at the detection stage. Operationally, the idler-to-signal 1-LOCC detection scheme can also be viewed as a heralding scheme where a measurement is performed on the idler photon, and outcomes communicated to the signal station independently of the light-matter interaction which has support on the MSE subspace. We construct a spectroscopically useful subclass of 1-LOCC detection schemes by optimising the CFI over POVMs on the signal mode only. We call this the “measurement-optimal” 1-LOCC detection scheme, and show that it (i) always includes a measurement whose CFI equals 𝒬(θ; |Φ_biph,out(θ)⟩) in a specific choice of the 1-LOCC scheme, and (ii) the associated CFI for all members exceeds that of any single-photon measurement on the reduced signal state only. For perfect coupling geometries (Γ_⊥ = 0), (i) implies that entanglement in the incoming biphoton state is not, in-principle, a resource. This is because the QFI of an entangled biphoton state may be attained with a suitably heralded Fock state and uncorrelated LM measurement. §.§ 1-LOCC Fisher Information in Biphoton Setup The most general idler-to-signal 1-LOCC detection scheme proceeds in the following three steps: * Projectively measure the idler photon in the basis {V|ξ_x^I⟩⟨ξ_x^I|V^†}, where V is an arbitrary unitary operator on the idler Hilbert space. This transforms the incoming entangled biphoton probe state via Kraus operators Π_x^I = 1^S⊗ V|ξ_x^I⟩⟨ξ_x^I| V^† to ρ'[V] = ∑_x ( Π_x^I)^† |Φ_biph⟩⟨Φ_biph| Π_x^I = ∑_x |ψ_x⟩⟨ψ_x|⊗ V|ξ_x^I⟩⟨ξ_x^I|V^†, where |ψ_x⟩ = ∑_n r_n V^*_nx|ξ_n^S⟩, and V_mn = ⟨ξ_m^I|V|ξ_n^I⟩ are elements of unitary matrix V in the idler Schmidt basis. * Communicate (classically) the outcome of projective measurement {Π_x^I} to the signal substation. The M-S-E interaction, given by the CPTP map 𝒲_g in Eq. 
(<ref>), transforms the signal Schmidt basis {|ξ_n^S⟩} onto the non-orthonorgonal set[The preparation step (characterised by Kraus operators {Π_x^I}) quantum operation and the M-S-E interaction (characterised by superoperator 𝒲_g) commute with each other, and can be applied in any order. If, in the final step, the signal subensembles {|ζ_x⟩} are all projected onto a common measurement basis, the LOCC scheme reduces to an LM scheme with independent measurements performed at signal and idler substations.] {|ϕ_n^S⟩}, and renormalises the outgoing state as in Eq. (<ref>). The resulting SI state is given by ρ'_out[V] = ∑_x |ζ_x⟩⟨ζ_x|⊗ V|ξ_x^I⟩⟨ξ_x^I|V^†, where |ζ_x⟩ = 1/√(1-N)∑_n r_n V^*_nx|ϕ_n^S⟩, is the unnormalised conditional signal state for the outcome x. * Measure the signal photon using operators {Π^S_y|x=x_m} depending on the communication x=x_m received from the idler. These stages of the 1-LOCC scheme are illustrated in Figure <ref>. The maximum CFI attainable using a 1-LOCC scheme is formally given by max_Π_y|x^S,Π_x^I 𝒞(θ | {Π_y,x = Π_y|x^S⊗Π_x^I})    = max_Π_x^I (max_Π_y|x^S 𝒞(θ | {Π_y,x = Π_y|x^S⊗Π_x^I})), s.t. ∑_x Π_x^I = 1^I, ∑_yΠ_y|x^S = 1^S ∀ x. The maximal 1-LOCC CFI is upper bounded as max_Π_y|x^S,Π_x^I 𝒞(θ | {Π_y,x = Π_y|x^S⊗Π_x^I}) ≤𝒬(θ;|Φ_biph,out⟩). since 1-LOCC⊆LOCC⊆GM. The maximisation of the CFI functional may now proceed in two steps, following the RHS of Eq. (<ref>): first, for a given unitary V, the CFI is maximised over all allowed {Π^S_y|x=x_m}, and second, the resulting quantity is maximised over all Π_x^I = 1^S⊗ V|ξ_x^I⟩⟨ξ_x^I| V^†, which amounts to a maximisation over all unitary operations V. All 1-LOCC for which maximisation over signal POVM {Π^S_y|x=x_m} has been performed will be termed “measurement-optimal", and the corresponding CFI quantity, now just a function of the preparation unitary V, is given as 𝒞_max(θ;V) = max_Π_y|x^S 𝒞(θ | {Π_y,x = Π_y|x^S⊗Π_x^I}). Constructing the orthogonal complement of |Φ_biph,out⟩ in the two-dimensional Span[|Φ_biph,out⟩,|∂_θΦ_biph,out⟩] as <cit.> |Φ_biph,out^⊥⟩≡(1-|Φ_biph,out⟩⟨Φ_biph,out|) |∂_θΦ_biph,out⟩, the following result holds: For a preparation step unitary V_0 that satisfies ⟨ξ_m^I|V_0^† Tr_S |Φ_biph,out⟩⟨Φ_biph,out^⊥| V_0|ξ_m^I⟩ = 0 ∀ m, 𝒞_max(θ;V_0) = 𝒬(θ;|Φ_biph,out⟩). A proof appears in Appendix <ref>. A constructive proof for the existence of a V_0 satisfying Eq. (<ref>) has been established for finite dimensions <cit.>. It can be extended to trace-class (and hence bounded and compact) operators on CV spaces, including Tr_S |Φ_biph,out⟩⟨Φ_biph,out^⊥| in Eq. (<ref>). Following the upper bound in Eq. (<ref>), the measurement-optimal 1-LOCC characterised by V_0 must correspond to the maximal CFI attainable. Thus, the biphoton component of the QFI in Eq. (<ref>) may be attained in a measurement-optimal 1-LOCC scheme with V_opt = V_0 — that is, an unentangled measurement. The unitary V_0, in general, depends on the the outgoing signal modes {ϕ_n^S}, which themselves change with the nature and strength of M-S and M-E interactions. §.§ No advantage from entangled input probe If Γ_⊥=0, only the third term in Eq. (<ref>) survives. Then there always exists a single-photon Fock state |ζ^opt,'_x_m⟩ = 1/√(⟨ζ^opt_x_m|ζ_x_m^opt⟩)|ζ_x_m^opt⟩, where |ζ^opt_x_m⟩ = 1/√(1-N) ∑_n r_n (V_opt)^*_nx_m |ϕ_n^S⟩ for some measurement outcome x=x_m, which has at least as much QFI as the entangled input in Eq. (<ref>). In principle, time-frequency entanglement of the input thus provides no advantage in this scenario. 
This follows from Theorem <ref>, whereby the biphoton QFI can be written as the convex combination of signal-only QFIs as (see Eqs. (<ref>) and (<ref>) in Appendix <ref>) 𝒬(θ;|Φ_biph,out⟩) = ∑_x ⟨ζ_x^opt|ζ_x^opt⟩ 𝒬(θ;|ζ_x^opt,'⟩) where |ζ_x^opt,'⟩ = 1/√(⟨ζ_x^opt|ζ_x^opt⟩)|ζ_x^opt⟩ are normalised conditional states. Equivalently, the biphoton QFI is always equal to the QFI of the following separable state 𝒬(θ;|Φ_biph,out⟩)   =𝒬( θ;∑_x |ζ^opt_x⟩⟨ζ_x^opt|⊗ V_opt|ξ_x^I⟩⟨ξ_x^I|V^†_opt). This shows that it is always possible to engineer the incoming state of light so as to prepare deterministically the product state component in the convex sum in Eq. (<ref>) with the maximal QFI max_x 𝒬(θ;|ζ^opt,'_x⟩) (which we will label by index x=x_m), so that the QFI of the separable state then yields at least as much precision as the entangled biphoton state. Operationally, one need then only start with the (pre-conditioned) single-photon signal state |ψ_x_m^opt⟩ = 1/√(1-N) ∑_n (V_opt)^*_nx_m |ξ_n^S⟩, which would yield the outgoing state |ζ^opt,'_x_m⟩ whose QFI is always greater than, or equal to, the biphoton QFI 𝒬(θ;|Φ_biph,out⟩). This subsection extends a similar conclusion in Ref. <cit.> for the restricted case of resonant Γ-estimation in a TLS for Γ_⊥ = 0 to an arbitrary Hamiltonian parameter θ. Our conclusion that an entangled input is not, in-principle, a resource can also be extended to scenarios with Γ_⊥≠ 0 when the biphoton state |Φ_biph,out⟩ is post-selected, because in that case the first two terms in Eq. (<ref>) drop out. The question of whether an entangled input is advantageous when all the three terms in Eq. (<ref>) contribute, however, remains open. §.§ Lower Bound on Measurement-Optimal Protocols For any measurement-optimal 1-LOCC CFI, 𝒞_max(θ;V) = 𝒬(θ;ρ_out'[V]) ≥𝒬(θ;∑_x |ζ_x⟩⟨ζ_x|) = 𝒬(θ;1/1-N∑_m |r_m|^2|ϕ_m^S⟩⟨ϕ_m^S|) = 𝒬( θ;Tr_I |Φ_biph,out⟩ ), where the first line is true because maximisation of the CFI over {Π^S_y|x=x_m} in Eq. (<ref>) is precisely the maximisation that yields the Cramér-Rao bound <cit.> for the conditional state ρ_out'[V] (see Eq. (<ref>) in Appendix <ref>). The second line is a consequence of the extended convexity of the QFI <cit.>. The inequality is saturated iff ⟨ϕ_m^S|∂_θϕ_n^S⟩ = 0 ∀ m,n. For a Schmidt basis {|ϕ_n^S⟩} that is complete on the signal Hilbert space, this is never true, and we get the stronger inequality 𝒞_max(θ;V) > 𝒬(θ;Tr_I |Φ_biph,out⟩). This shows that all measurement-optimal 1-LOCC detection schemes yield higher CFI than the QFI of the signal photon obtained by tracing out the idler. Consequently, all measurement-optimal 1-LOCC detection schemes have a guaranteed metrological advantage over signal photon-only strategies. This leads to the hierarchy 𝒬(θ;|Φ_biph,out⟩) = 𝒞_max(θ;V_opt) ≥𝒞_max(θ;V)    > 𝒬(θ;Tr_I |Φ_biph,out⟩). §.§ Parameter-Independent Unitary V = 1^I Unlike V = V_opt, V = 1^I is independent of θ and {ϕ_n^S}, and presents a simpler experimental scenario. The corresponding CFI is 𝒞_max(θ;V =1^I) = 4/1-N∑_n|r_n|^2⟨∂_θϕ_n^S|∂_θϕ_n^S⟩ -4/(1-N)^2∑_n|r_n|^2|⟨ϕ_n^S|∂_θϕ_n^S⟩|^2. For the special case of resonant Γ-estimation in a TLS with transition frequency ω_0, it has been shown that <cit.> 𝒬(Γ;|Φ_biph,out⟩)|_Δ=0 = 𝒞_max(Γ;V=1^I); Δ = ω_0-ω̅_S. This implies that a 1-LOCC detection scheme with V = 1^I attains the QFI for Γ-estimation at Δ=0 in a TLS.
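The closed forms above can be exercised on any θ-parameterised family of (unnormalised) signal modes. The sketch below uses stand-in exponential modes, not the actual solutions of the M-S-E dynamics, together with hypothetical Schmidt weights, and evaluates 𝒞_max(θ;V=1^I) against the biphoton QFI via finite differences (Python with numpy assumed):

    import numpy as np

    def modes(theta, K=4, T=4000, tmax=30.0):
        # Stand-in outgoing signal modes phi_n(t; theta) = exp(-(theta+n)t - i n t), t >= 0.
        t = np.linspace(0.0, tmax, T)
        n = np.arange(1, K + 1)[:, None]
        return np.exp(-(theta + n) * t) * np.exp(-1j * n * t), t

    def braket(f, g, t):
        return np.trapz(np.conj(f) * g, t, axis=-1)

    theta, eps, K = 1.0, 1e-5, 4
    r = np.array([0.7, 0.5, 0.4, 0.32]); r /= np.linalg.norm(r)   # hypothetical Schmidt weights
    w = np.abs(r) ** 2

    phi, t = modes(theta, K)
    dphi = (modes(theta + eps, K)[0] - modes(theta - eps, K)[0]) / (2 * eps)

    one_minus_N = np.sum(w * braket(phi, phi, t).real)
    c = braket(phi, dphi, t)                 # <phi_n | d_theta phi_n>
    dd = braket(dphi, dphi, t).real          # <d_theta phi_n | d_theta phi_n>

    Q_biph = 4 / one_minus_N * np.sum(w * dd) - 4 / one_minus_N ** 2 * abs(np.sum(w * c)) ** 2
    C_V1 = 4 / one_minus_N * np.sum(w * dd) - 4 / one_minus_N ** 2 * np.sum(w * np.abs(c) ** 2)
    print(Q_biph, C_V1)                      # Q_biph >= C_V1 for any such family

For generic modes the gap is nonzero; in the resonant TLS case quoted above (Γ-estimation at Δ=0), the two quantities coincide.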
While this conclusion no longer holds for the spectroscopy of more general systems and parameters, the measurement-optimal V = 1^I 1-LOCC detection scheme continues to be an improvement over simply tracing out the idler in the outgoing wavefunction, as per Eq. (<ref>). We study its efficacy in spectroscopy with a PDC input probe numerically in Section <ref>. § SPECTROSCOPY USING PDC LIGHT We now study pulsed quantum light spectroscopy using the experimentally ubiquitous entangled probes of weakly-downconverted PDC states <cit.> |Φ_PDC⟩≈1/N_PDC^1/2( |0⟩ + ∑_n=0^∞ r_n,PDC |h_n^S⟩|h_n^I⟩), where |h_n^S⟩ and |h_n^I⟩ are n-th Hermite-Gauss Schmidt basis modes for signal and idler spaces respectively, and r_n,PDC are corresponding Schmidt weights. The JTA function for the incoming PDC states may be obtained by inverting Eq. (<ref>) and recombining the Schmidt terms Φ_PDC(t_S,t_I) = ∑_n r_n,PDC h_n(t_S)h_n(t_I). The time-frequency entanglement of PDC states can be quantified using the entanglement entropy function S = -∑_n |r_n,PDC|^2log |r_n,PDC|^2, which is plotted as a function of PDC characteristics of entanglement time T_qent and pumpwidth σ_p (see <cit.> for definitions) in Figure  <ref>. We show that for these most practical of entangled probes, time-frequency entanglement provides a functional advantage in biphoton spectroscopy for asymptotically long detection times. We also establish that it is possible to get close to the fundamental limits set by the corresponding QCRB using unentangled measurements that are independent of the true value of the parameter. We thus provide a complete recipe for quantum-enhanced biphoton spectroscopy using PDC light probes and simple, unentangled measurements independent of the sample parameters. §.§ QFI Of Outgoing PDC State The outgoing S-I state for the PDC input in Eq. (<ref>) has a structure similar to Eq. (<ref>), but with a modified normalisation ρ_PDC,out^SI = ( 1-𝔫) |Φ_PDC,out⟩⟨Φ_PDC,out| + 𝔫 |0^S⟩⟨0^S|⊗σ^I, where σ^I is the same as in Eq. (<ref>), 𝔫 = N/N_PDC, |Φ_PDC,out⟩ = 1/(N_PDC(1-𝔫))^1/2( |0⟩ + ∑_n r_n,PDC|ϕ_n,PDC^S⟩|h_n^I⟩), with |ϕ_n,PDC^S⟩ = ∫_-∞^∞ dt_S( h_n(t_S) -Γ∫_-∞^t_Sdτ f_M(t_S-τ)h_n(t_S) ) a_S^†(t_S)|0^S⟩. The QFI of the outgoing PDC state has the familiar trinal contribution (cf. Eq. (<ref>)) 𝒬(θ;ρ_PDC,out^SI) =   𝒞(𝔫,1-𝔫) + 𝔫𝒬(θ;σ^I) + (1-𝔫) 𝒬(θ;|Φ_PDC,out⟩) where 𝒬(θ;|Φ_PDC,out⟩) = 4/N_PDC(1-𝔫)∑_n |r_n,PDC|^2 ⟨∂_θϕ_n,PDC^S|∂_θϕ_n,PDC^S⟩ - 4/(N_PDC(1-𝔫))^2|∑_n |r_n,PDC|^2 ⟨ϕ_n,PDC^S |∂_θϕ_n,PDC^S⟩|^2. The relative magnitudes of the three terms in the PDC QFI in Eq. (<ref>) admit the same pattern with respect to the ratio Γ_⊥/Γ as established for the biphoton QFI in Table <ref>, with the only modification being that all the Fisher informations in Eq. (<ref>) are scaled by the Γ_⊥/Γ-independent normalisation N_PDC. Also of interest is the CFI of the measurement-optimal 1-LOCC mediated by preparation step unitary V = 1^I for the PDC states (see Eq. (<ref>)) 𝒞_max(θ;V = 1^I) = 4/N_PDC(1-𝔫)∑_n |r_n,PDC|^2 ⟨∂_θϕ^S_n,PDC|∂_θϕ^S_n,PDC⟩ - 4/(N_PDC(1-𝔫))^2∑_n |r_n,PDC|^2 |⟨ϕ^S_n,PDC |∂_θϕ^S_n,PDC⟩|^2. Finally, some descriptions of the entangled PDC photons omit the vacuum term in Eq. (<ref>), yielding just the biphoton state |Φ_biph⟩ in Eq. (<ref>). 
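In practice, the Schmidt data (r_n,PDC, the mode functions, and hence S) can be obtained from a discretised JTA by a singular value decomposition. A minimal sketch follows; the double-Gaussian amplitude and its widths are placeholders for illustration only, since the actual JTA is fixed by σ_p and T_qent through the cited definitions (Python with numpy assumed):

    import numpy as np

    T = 301
    ts = np.linspace(-5.0, 5.0, T)
    dt = ts[1] - ts[0]
    tS, tI = np.meshgrid(ts, ts, indexing='ij')

    sigma_plus, sigma_minus = 2.0, 0.3      # hypothetical correlation widths
    Phi = np.exp(-(tS + tI) ** 2 / (4 * sigma_plus ** 2)
                 - (tS - tI) ** 2 / (4 * sigma_minus ** 2))
    Phi /= np.sqrt(np.sum(np.abs(Phi) ** 2) * dt * dt)   # unit L2 norm

    # Schmidt coefficients r_n from the singular values of the discretised JTA.
    r = np.linalg.svd(Phi * dt, compute_uv=False)
    p = r ** 2 / np.sum(r ** 2)
    S = -np.sum(p[p > 1e-14] * np.log(p[p > 1e-14]))
    print('Schmidt number:', 1 / np.sum(p ** 2), ' entanglement entropy S:', S)

The vacuum-free, post-selected counterpart of such a PDC probe is considered next.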
This corresponds to the state post-selected for only successful detection of the two photons, and can be expressed in terms of the PDC JTA Φ_PDC(t_S,t_I) as |Φ_biph⟩ = 1/√(Λ) ∫ dt_S∫ dt_I Φ_PDC(t_S,t_I) a_S^†(t_S)a_I^†(t_I)|0⟩, where Λ = ∫ dt_S∫ dt_I Φ^*_PDC(t_S,t_I)Φ_PDC(t_S,t_I) ensures unit norm. The spectroscopic informations provided by states in Eqs. (<ref>) and (<ref>) are not, in general, a simple rescaling. Rather, 𝒬(θ;|Φ_PDC,out⟩)≈Λ 𝒬(θ;|Φ_biph,out⟩)    +4/Λ |∫ dt_S∫ dt_I ∂_θΦ_PDC,out(t_S,t_I)^* Φ_PDC,out(t_S,t_I) |^2, assuming N_PDC≈ 1 and Λ≪ 1. See Appendix <ref> for details. We finally specialise our study of entangled quantum light spectroscopy to specific matter systems: in Section <ref>, we address 1-site TLS (P=1 in Eq. (<ref>)) for which we will evaluate Fisher informations corresponding to pulse-matter coupling Γ, as well as level frequency ω_0; in Section <ref>, we address the 2-site CD systems (P=2 in Eq. (<ref>)), for which we will evaluate fundamental limits of inter-site coupling J estimation. As a first foray, we focus on the vibrationless Hamiltonian that does not include couplings to phonon baths. We highlight aspects of engineering the source of PDC probes for spectroscopy while yielding tangible quantum enhancements. We find larger time-frequency entanglement — concomitantly shorter entanglement times and pump bandwidths — to be beneficial. Indeed, more entanglement in the PDC probe yields more spectroscopic information. Parameter-independent (hence non-adaptive) unentangled detection, using the most entangled of PDC probes, also meaningfully outperform single-photon spectroscopies using the reduced signal state only. Typically, these simplified measurements capture between 60% - 90% of the spectroscopic information. §.§ TLS spectroscopy For the TLS, H_I^M = ħΔ|e⟩⟨e|, Δ=ω_0-ω̅_S is the detuning between the carrier signal and TLS frequency ω_0, and the characteristic response function for TLS takes the simple form f_TLS(t) = exp(-[Γ+Γ_⊥/2 + iΔ] t ). §.§.§ No Coupling to Environment (E): Γ_⊥ = 0 In this case, only the last term in Eq. (<ref>) contributes. This QFI is plotted, for a grid of values of classical pumpwidths σ_p and entanglement times T_qent, for the estimation of Γ in Figure <ref> (a), and for the ω_0-parameter in Figure <ref> (b). For both parameters, comparing Figs. <ref> and <ref> shows that more entanglement in the incoming PDC probe, as captured by entropy S defined in Eq. (<ref>), leads to a higher value for the outgoing QFI. To uncover the role of entanglement in the incoming probe PDC state in this spectroscopy task more clearly, we also display scatter plots of the Γ- and ω_0-QFIs as functions of the entanglement entropy S in Appendix <ref> (see Figure <ref> (a)-(b)). These show that that a more entangled PDC input always yields more outgoing QFI. However, the outgoing PDC QFI is not a one-to-one function of the incoming entanglement. Specifically, for the same amount of entanglement, incoming PDC states may yield outgoing states with different QFIs, depending on the experimental values of T_qent and σ_p. The apparent advantage conferred by time-frequency entanglement here is not in contradiction with our earlier conclusion that entanglement provides no in-principle advantage in biphoton spectroscopy. In Section <ref> we showed that the outgoing QFI corresponding to any incoming entangled state may be superseded by a suitably optimised product state in the S-I space. 
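The outgoing modes ϕ_n,PDC^S(t) entering these expressions can be generated numerically from the Hermite-Gauss inputs and f_TLS. The sketch below reads the inner time integral in the definition of ϕ_n,PDC^S as a causal convolution of f_M with h_n, takes the amplitude decay exponent in f_TLS as (Γ+Γ_⊥)/2, and uses arbitrary, hypothetical parameter values (Python with numpy assumed):

    import numpy as np
    from numpy.polynomial.hermite import hermval

    Gamma, Gamma_perp, Delta = 1.0, 0.0, 0.0   # hypothetical values; Gamma_perp = 0 is perfect coupling

    T = 4000
    t = np.linspace(-10.0, 30.0, T)
    dt = t[1] - t[0]
    lags = np.arange(T) * dt

    def hermite_gauss(n, t, width=1.0):
        # n-th Hermite-Gauss temporal mode h_n(t), unit-normalised on the grid.
        x = t / width
        c = np.zeros(n + 1); c[n] = 1.0
        h = hermval(x, c) * np.exp(-x ** 2 / 2)
        return h / np.sqrt(np.sum(np.abs(h) ** 2) * dt)

    f_TLS = np.exp(-((Gamma + Gamma_perp) / 2 + 1j * Delta) * lags)

    def outgoing_mode(n):
        h = hermite_gauss(n, t)
        conv = np.convolve(f_TLS, h)[:T] * dt   # int_{-inf}^{t} f(t - tau) h_n(tau) d tau
        return h - Gamma * conv

    phi0 = outgoing_mode(0)
    print('norm of phi_0:', np.sum(np.abs(phi0) ** 2) * dt)   # ~1 here; < 1 once Gamma_perp > 0

For Γ_⊥ = 0 the mode norm is preserved, while for Γ_⊥ > 0 the norm deficit of the outgoing modes is what builds up N.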
In contrast, entanglement-enhanced sensing in Figure <ref> is relative to PDC probe states only. For reference, in Figure <ref>, the ratios of outgoing QFI corresponding to the most entangled PDC state (bottom-left edge in either plot) to that of the least entangled (almost product) PDC state (top-right edge) are 𝒬(Γ;|Φ_PDC,out⟩) |_T_qent = 0.150 ps,σ_p = 50 cm^-1/𝒬(Γ;|Φ_PDC,out⟩) |_T_qent = 1.995 ps,σ_p = 180 cm^-1≈ 14.0391, and 𝒬(ω_0;|Φ_PDC,out⟩) |_T_qent = 0.150 ps,σ_p = 50 cm^-1/𝒬(ω_0;|Φ_PDC,out⟩) |_T_qent = 1.995 ps,σ_p = 180 cm^-1≈ 15.5665, showing that for TLS spectroscopy, there is significant advantage to engineering the source to produce more entangled states — within the class of PDC states. Our result also indicates that, for the parameter regime considered, using entangled photons enables more precise spectroscopy than heralded single photon states <cit.>. To further underscore the subtle role of entanglement, we consider a family of time-frequency pulse mode (TFM) states of the form <cit.> |Φ_TFM⟩ = 1/N_TFM^1/2 ( |0⟩ + α_pump/ħ |Φ^𝔱_TFM⟩), where |Φ^𝔱_TFM⟩ = cos𝔱|h_0^S⟩|h_1^I⟩ + sin𝔱|h_1^S⟩|h_0^I⟩,  0≤𝔱≤π, and α_pump/ħ = 0.01 (same as for |Φ_PDC⟩). Parametric plots of the QFI of the outgoing TFM state against the entanglement of the incoming state in Figure <ref> in Appendix <ref> show that for both TLS parameters, the maximal QFI in outgoing state corresponds to the product input |h_0^S⟩|h_1^I⟩. Thus, for this family of biphoton states, time-frequency mode entanglement is not a useful resource. As a final caveat, recall that our conclusions are for asymptotically large times. They may not hold in general for finite-time evolutions. As a matter of fact, an opposite behaviour was shown in a similar parameter region <cit.> for short detection times, where the problem essentially reduces to absorption estimation, and the perturbation induced on the signal photon wavefunction is not relevant. This further highlights the intricacies in understanding the role of entanglement in quantum light spectroscopy. To understand the measurements that attain a large fraction of the maximal spectroscopic information, we now define two ratios to capture the efficacy of 𝒞_max(θ;V = 1^I). They are motivated by the simpler parameter-independent V=1^I measurements enabling practical entangled light spectroscopy. The first is a “degree of optimality" defined as ϰ(θ) = 𝒞_max(θ;V=1^I)/𝒬(θ;|Φ_PDC,out⟩),  0 ≤ϰ(θ) ≤ 1. A value closer to unity indicates proximity to the fundamental precision afforded by appropriately constructed estimators set by the PDC QFI, which for Γ_⊥ = 0 is the biphoton QFI 𝒬(θ;|Φ_PDC,out(θ)⟩) only. The second is an “enhancement factor" defined as ς(θ) = 𝒞_max(θ;V=1^I)/𝒬(θ;Tr_I[|Φ_PDC,out⟩]),  ς(θ) > 1, where ς(θ)>1 indicates a spectroscopic advantage offered by 1-LOCC measurement with V=1^I over all single-photon strategies on the reduced signal-photon state Tr_I|Φ_PDC,out⟩. For completeness, 𝒬(θ;Tr_I|Φ_PDC,out⟩) = 𝒞_max(θ;V=1^I) - 1/(N_PDC(1-𝔫))^2∑_n>m16 |r_m|^2 |r_n|^2/|r_m|^2 + |r_n|^2|⟨ϕ_m,PDC^S|∂_θϕ_n,PDC^S⟩|^2. Figure <ref> displays the optimality ratio ϰ(θ) for the TLS parameter ω_0 when Δ=0. It shows that more that 80% of the QFI for a PDC probe is recovered by a parameter-independent 1-LOCC measurement. For resonant Γ estimation, ϰ(Γ)|_Δ=0 = 1 for all points on the grid <cit.>. As the magnitude of the detuning |Δ| increases, ⟨Φ_PDC,out | ∂_ΓΦ_PDC,out⟩|_Δ≠ 0≠ 0, and the degree of optimality drops below unity, as can be seen in Figure <ref> in Appendix <ref>. 
Lastly, Figure <ref> displays the enhancement factor ς(θ) for TLS parameters ω_0 and Γ. From Eq. (<ref>), we always expect ς(θ) > 1, which is substantiated in these plots. We also see that spectroscopy with only the most entangled of PDC states can meaningfully outperform single-photon spectroscopy using the reduced signal state Tr_I[|Φ_PDC,out⟩]. Interestingly, for ω_0-estimation, a more entangled state yields a larger ς(ω_0) as the detuning |Δ| increases, whereas the reverse is true for ς(Γ). For the effects of non-zero |Δ| on these quantities, see Appendix <ref>. This has practical consequences for spectroscopy using PDC probes, as one may resort to the even simpler setup of single photon probes if the enhancement offered by the 1-LOCC scheme, as measured by ς(θ) is not large enough. §.§.§ Coupling to Environment (E): Γ_⊥ > 0 For non-zero coupling to the environmental modes in E so that Γ_⊥>0, the outgoing two-photon QFI must now be evaluated using Eq. (<ref>) which includes contributions from both single- and two-photon terms, in addition to the term corresponding to classical mixing. Figure <ref> displays the outgoing Γ-QFI (for the same grid of values as Figure <ref> and Δ =0) for two representative values of M-E coupling. In panel (a), Γ_⊥ = 0.5Γ corresponds to comparable M-S and M-E couplings, whereas panel (b) corresponds to Γ_⊥ = 10.0Γ such that the matter-environment coupling is much stronger than coupling to the incoming signal mode. Corresponding results are shown in Figure <ref> for ω_0-QFI. The effect of non-zero detuning on TLS parameter QFIs in the presence of an environment is studied in Appendix <ref>. Even in the presence of an environment, Figures <ref> and <ref> show that time-frequency entanglement continues to be a useful resource for spectroscopy using PDC probes. The magnitudes of the QFI values for either TLS parameter are also diminished, an expected consequence of the M-E coupling which causes the matter sample to decay into the environmental modes, thus reducing the information content in the measured signal and idler modes. §.§ CD spectroscopy: J-estimation The Hamiltonian of a coupled dimer (CD) (P=2 in Eq. (<ref>)) is given by H^CD = ∑_j=a,b ħω_j|j⟩⟨j| + ħ(ω_a +ω_b)|f⟩⟨f| + J(|a⟩⟨b| + |b⟩⟨a|), where J is the coupling strength between the two sites a and b (see Figure <ref>). Transforming to an appropriately chosen interaction frame and diagonalising the matter-only part (details in Appendix <ref>), we can express the CD Hamiltonian in the delocalised excitonic basis as H^CD_I = ∑_i=α,βħΔ_i|i⟩⟨i| + ħ(Δ_α+Δ_β)|f⟩⟨f|, where Δ_i = ω_i - ω̅_S (i=α,β) are the detunings from the central signal pulse frequency of the singly-excited manifold (SEM) excitonic levels |α⟩ and |β⟩. The explicit form for the characteristic CD function f_CD(t) appears in Appendix <ref>. It can be used to evaluate (assuming no M-E coupling so that Γ_⊥ = 0) the fundamental limits on the J coupling parameter between the sites a and b, using Eq. (<ref>). In Figure <ref>, the J-QFI are plotted as heat maps, for identical ranges of σ_p and T_qent values as for entanglement entropy S in Figure <ref>, for signal carrier frequency ω̅_S resonant withe ω_α in panel (a), and ω_β in panel (b). Akin to TLS estimation, we find that a higher value of the entanglement entropy of the incoming PDC state, for the parameter ranges considered, yields a higher J-QFI. (see Figure <ref> (c)-(d) in Appendix <ref> for parametric 𝒬(J;|Φ_PDC,out⟩)-S plots.) 
This implies again that, within the set of PDC probes, more time-frequency entanglement enhances the spectroscopic performance of J-estimation. We also see that the values of J-QFI for ω̅_S = ω_α (so that the signal beam is resonant with the g-α transition) are smaller than that for the choice ω̅_S = ω_β (signal beam resonant with g-β transition). This can be attributed to our choice of the small absolute value of |ħ(ω_a-ω_b)| relative to ħω_a and ħω_b, meaning that the particular instance of the CD system that we are studying, for which 2|ω_a-ω_b|/(ω_a+ω_b) ≈ 0.11, is quite close to a homodimer for which the g-α transition is forbidden by the structure of the GSM-SEM dipole operator. Therefore, population transfer from |g⟩→|α⟩ for ω̅_S = ω_α is much smaller than for |g⟩→|β⟩ when ω̅_S = ω_β, which accounts for the smaller values of the J-QFI. Next, Figure <ref> displays the ratio ϰ(J) that captures the degree of optimality of the V=1^I measurement-optimal idler-to-signal scheme. For the central signal frequency ω̅_S set to either ω_α or ω_β, we see that the V=1^I idler-to-signal LOCC scheme recovers between 60%-90% of the QFI, especially for highly entangled states in the the bottom right corner. Finally, Figure <ref> displays the enhancement factor ς(J). For all values on the grid except for most highly entangled states with small values of σ_p and T_qent, ς(J) is close to unity. As for TLS spectroscopy, CD spectroscopy with only the most entangled of PDC states can meaningfully outperform single-photon spectroscopy using the reduced state of the signal photon only, Tr_I[|Φ_PDC,out⟩]. § CONCLUSIONS Our quantum-information theoretic analysis of single-molecule biphoton spectroscopy of arbitrary quantum systems provides a characterisation of all the spectroscopic information that exists and can, in principle, be extracted using a biphoton probe in the long-time regime, when the excitation induced by the input pulse in the matter system has decayed. This allows the design of simple unentangled measurements that can provide tangible quantum advantage in practice. We provide a detailed analysis of the theoretical and experimental utility of time-frequency entanglement in single-molecule biphoton spectroscopy. With the latter in mind, we compare the performance of biphoton spectroscopy to those with unentangled probes, especially single photons, and unentangled measurements. This reveals the subtle and intricate role entanglement can play in enhancing spectroscopic performance. Note added: A recent work <cit.> examined the usefulness of quantum entangled light vis-à-vis classical coherent pulses in a non-linear generalisation of the biphoton setup in Figure <ref>. It focuses on the absorption signal (hence small detection times) for ensemble systems, whereas we study fundamental limits of spectroscopic information for coherent single-molecule spectroscopies and asymptotically long detection times. We thank Elnaz Darsheshdar and Sourav Das for fruitful discussions and feedback on the manuscript. This work has been funded, in part, by an EPSRC New Horizons grant (EP/V04818X/1) and the UKRI (Reference Number: 10038209) under the UK Government's Horizon Europe Guarantee for the Research and Innovation Programme under agreement 101070700. AK was supported, in part, by a Chancellor's International Scholarship from the University of Warwick. Computing facilities were provided by the Scientific Computing Research Technology Platform of the University of Warwick. 
§ EXPLICIT EXPRESSIONS FOR QFI OF |Φ_BIPH,OUT⟩ The derivative of |Φ_biph,out⟩ is |∂_θΦ_biph,out⟩ = -N_θ/2(1-N)^3/2 ∑_m r_m|ϕ_m^S⟩|ξ_m^I⟩ + 1/√(1-N) ∑_m r_m |∂_θϕ_m^S⟩|ξ_m^I⟩. Then we can get the two terms of the pure state QFI as ⟨∂_θΦ_biph,out|∂_θΦ_biph,out⟩ = N_θ^2/4(1-N)^2 + 1/1-N ∑_m |r_m|^2⟨∂_θϕ_m^S | ∂_θϕ_m^S⟩ - N_θ/(1-N)^2 ∑_m |r_m|^2 Re⟨ϕ_m^S|∂_θϕ_m^S⟩, and ⟨Φ_biph,out|∂_θΦ_biph,out⟩ = -N_θ/2(1-N)^2 ∑_m |r_m|^2 ⟨ϕ_m^S|ϕ_m^S⟩ + 1/1-N ∑_m |r_m|^2 ⟨ϕ_m^S|∂_θϕ_m^S⟩ = -N_θ/2(1-N) + 1/1-N ∑_m |r_m|^2 ⟨ϕ_m^S|∂_θϕ_m^S⟩ . The QFI, using Eq. (<ref>), is 𝒬(θ;|Φ_biph,out⟩) = N_θ^2/4(1-N)^2 + 1/1-N ∑_m |r_m|^2 ⟨∂_θϕ_m^S | ∂_θϕ_m^S⟩ - N_θ/(1-N)^2 ∑_m |r_m|^2 Re⟨ϕ_m^S|∂_θϕ_m^S⟩ - N_θ^2/4(1-N)^2 - 1/(1-N)^2| ∑_m |r_m|^2 ⟨ϕ_m^S|∂_θϕ_m^S⟩|^2 + N_θ/(1-N)^2∑_m |r_m|^2 Re⟨ϕ_m^S|∂_θϕ_m^S⟩ = 1/1-N ∑_m |r_m|^2 ⟨∂_θϕ_m^S | ∂_θϕ_m^S⟩ - 1/(1-N)^2| ∑_m |r_m|^2 ⟨ϕ_m^S|∂_θϕ_m^S⟩|^2. § RELATIVE MAGNITUDES OF QFI CONTRIBUTIONS IN TABLE <REF> §.§ Γ_⊥≪Γ We can calculate orders of the various terms in Eq. (<ref>) by letting M-E coupling strength Γ_⊥→0 while M-S coupling strength Γ remains finite in this limit. Expanding the matrix exponential in the characteristic response function of the molecule M, defined in Eq. (<ref>), we have f_M(t) ∝ O(1), in orders of Γ as well as Γ_⊥, which in turn means N∝ O(ΓΓ_⊥). The orders of the parametric N-derivative, N_θ = ∂ N/∂θ depend on the parameter of interest. For θ≡Γ, N_Γ∝ O(Γ_⊥), while for molecular Hamiltonian H_I^M parameters, it can be worked out that N_θ∝ O(ΓΓ_⊥). This then gives, for the Γ parameter, 𝒞(N,1-N)∝ O(Γ_⊥^2/ΓΓ_⊥), meaning lim_ Γ_⊥/Γ→ 0 𝒞(N,1-N) = 0. For the molecular Hamiltonian parameters, we get similarly, 𝒞(N,1-N)∝ O(Γ^2Γ_⊥^2/ΓΓ_⊥), which again yields lim_ Γ_⊥/Γ→ 0 𝒞(N,1-N) = 0. Moving on to the conditional idler state, we get from the unity orders of f_M(t) that σ^I∝ O(1), which yields, for both Γ as well as molecular Hamiltonian parameters, L_θ∝ O(1), and N𝒬(Γ;σ^I)∝ O(ΓΓ_⊥), yielding a vanishing contribution in the limit of Γ_⊥/Γ→ 0. §.§ Γ_⊥≫Γ Next, we establish the vanishingly small contribution of the biphoton QFI term (1-N) 𝒬(θ;|Φ_biph,out⟩) in Eq. (<ref>) in the limit of Γ_⊥≫Γ. This limit can be interpreted, in turn, as Γ→ 0 for finite magnitudes of Γ_⊥, which gives |ϕ_n^S⟩→|ξ_n^S⟩, so that |∂_θϕ_n^S⟩→ 0. Further, utilising again the fact that f_M(t) ∝ O(1), we have N∝ O(ΓΓ_⊥). Putting these together, we have (1-N) 𝒬(θ;|Φ_biph,out⟩)→ 0, following from Eq. (<ref>). § SIGNAL-TO-IDLER 1-LOCC SCHEME This class of signal-to-idler 1-LOCC protocol proceeds in the following three steps (illustrated schematically in Figure <ref>): * (Prepare): Projectively measure the signal photon in arbitrary basis {W|ξ_x^S⟩⟨ξ_x^S|W^†}, where W is a unitary operator on the signal Hilbert space, after the M-S interaction effected by superoperator 𝒲_g in Eq. (<ref>). This amounts to the following transformation of the entangled two-photon state, given by Kraus elements Π_x^S = W|ξ_x^S⟩⟨ξ_x^S|W^†⊗1^I, ρ_out'[W] = ∑_x Π_x^S,† |Φ_biph,out⟩⟨Φ_biph,out| Π_x^S = ∑_x W|ξ_x^S⟩⟨ξ_x^S|W^†⊗|ρ_x⟩⟨ρ_x|, |ρ_x⟩ = 1/√(1-N)∑_n r_n ⟨ξ_x^S|W^†|ϕ_n^S⟩ |ξ_n^I⟩, where |ρ_x⟩ are conditional states of the idler photon, the parametric dependence on the parameter θ entering through the coefficient ⟨ξ_x^S|W^†|ϕ_m^S⟩, as well as the θ-dependent quantity N. * The outcomes of the projective measurement {Π_x^S} are classically communicated to the idler substation.
* (Measure):The idler photon ensemble is partitioned into sub-ensembles (labelled by the signal measurement outcome x=x_m) that are in the conditional states |ρ_x=x_m⟩⟨ρ_x=x_m|. These are now detected in measurement bases that depends on the preparation step outcome {Π^I_y|x=x_m}. The joint signal-to-idler 1-LOCC projector is then Π_x,y = Π_x^S⊗Π_y|x^I. The above prepare-and-measure 1-LOCC detection scheme is now one-way going signal-to-idler because the results of the signal measurement are communicated to the idler substation. An important point of difference from the idler-to-signal LOCC scheme, detailed in Section <ref>, is that the preparation-step quantum operation given by Kraus operators {Π_x^S} does not commute with the M-S interaction 𝒲_g as they happen in the same substation of the setup. It is meaningful then only to perform the preparation-step operation {Π_x^S} after the two-photon state has been encoded with information about the parameter θ. Analogous to the idler-to-signal LOCC scheme, we can find the optimal signal-to-idler LOCC measurements by maximizing the associated detection CFI in two steps — first over all idler measurement strategies for the subensembles {|ρ_x⟩} for a given W, and subsequently over all preparation-step signal unitary operators W. Formally, we can state the maximisation analogously to idler-to-signal 1-LOCC in Eq. (<ref>) as: max_Π_y|x^I,Π_x^S 𝒞(θ | {Π_x,y = Π_x^S⊗Π_y|x^I}) = max_Π_x^S (max_Π_y|x^I 𝒞(θ | {Π_x,y = Π_x^S⊗Π_y|x^I}))  s.t. ∑_x Π_x^S = 1^I, ∑_yΠ_y|x^S = 1^I ∀ x. For a fixed preparation step unitary W (which fixes Π_x^S), the maximal CFI of all measurement-step detection measurements on the idler photon is equal to the QFI of the intermediate state ρ_out'[W] as maximisation over {Π_y|x^I} is precisely the maximisation that yields the quantum Cramér-Rao bound <cit.> for the conditional state ρ_out'[W] (in analogy with the idler-to-signal measurement-optimal 1-LOCC): 𝒞_max(θ;W) = 𝒬(θ;ρ_out'[W]) = 𝒬(θ;∑_x W|ξ_x^S⟩⟨ξ_x^S|W^†⊗|ρ_x⟩⟨ρ_x|). where 𝒞_max(θ;W) is the CFI of measurement-optimal LOCC strategy for a given W, defined as the LOCC protocol that has maximal CFI for fixed W. By renormalizing the conditional states |ρ_x'⟩ = 1/√(⟨ρ_x|ρ_x⟩)|ρ_x⟩ in order to express ρ_out'[W] in its spectral form, we can use Eq. (<ref>) to evaluate the measurement-step-maximal CFI explicitly, 𝒞_max(θ;W) = ∑_x (∂_θ⟨ρ_x|ρ_x⟩)^2/⟨ρ_x|ρ_x⟩ + 4/1-N ∑_m |r_m|^2 ⟨∂_θϕ^S_m|∂_θϕ^S_m⟩ - ∑_x ⟨ρ_x|ρ_x⟩ | ⟨ρ_x'|∂_θρ_x'⟩|^2 = 4/1-N ∑_m |r_m|^2 ⟨∂_θϕ^S_m|∂_θϕ^S_m⟩ - 4/(1-N)^2∑_x1/⟨ρ_x|ρ_x⟩ ( Im⟨ρ_x|∂_θρ_x⟩^2 - Re⟨ρ_x|∂_θρ_x⟩^2 ) where ⟨ρ_x|ρ_x⟩ = 1/1-N∑_m |r_m|^2 |⟨ξ_x^S|W^†|ϕ_m^S⟩|^2 has parametric dependence on θ, just as for the idler-to-signal scheme. The overlap can be expressed in the more succint form ⟨ρ_x|∂_θρ_x⟩ = 1/1-N⟨ξ_x^S|W^† Y W|ξ_x^S⟩ where Y(θ) = ∑_m, n |r_n|^2 ⟨ϕ_m^S|∂_θϕ_n^S⟩ |ϕ_m^S⟩⟨ϕ_n^S| is an operator on the signal Hilbert space. The measurement-optimal CFI for signal-to-idler LOCC scheme for given W then has the following form: 𝒞_max(θ;W) = 4/1-N ∑_m |r_m|^2 ⟨∂_θϕ^S_m|∂_θϕ^S_m⟩ - 4/(1-N)^2∑_x 1/∑_m |r_m|^2 |⟨ξ_x^S|W^†|ϕ_m^S⟩|^2 ( Im(⟨ξ_x^S|W^† Y W|ξ_x^S⟩)^2 - Re(⟨ξ_x^S|W^† Y W|ξ_x^S⟩)^2 ) Note both the similarity of expressions to the idler-to-signal LOCC case in Eq. (<ref>), as well as the differences – the expressions for measurement-step-maximal CFI are not the same because the biphoton setup is not symmetrical, as we have remarked before. 
We can now similarly obtain the overall optimal signal-to-idler LOCC measurement that fetches the maximal value of measurement CFI for any W. This is done by maximizing the functional 𝒞_max(θ;W) over the set of all unitary matrices W, which amounts to a minimisation of the second term in Eq. (<ref>). Defining the cost function as ϖ(θ;W) = 4/(1-N)^2∑_x 1/∑_m |r_m|^2 |⟨ξ_x^S|W^†|ϕ_m^S⟩|^2 ( Im(⟨ξ_x^S|W^† Y W|ξ_x^S⟩)^2 - Re(⟨ξ_x^S|W^† Y W|ξ_x^S⟩)^2 ) , where ϖ(θ;W)≥0 (which follows from 𝒬(θ;ρ_out'[W])≥0), and the optimal unitary transformation W_opt is defined as: W_opt : ϖ(θ;W_opt) ≤ϖ(θ;W) ∀  W s.t. W^†W = 1^S. The overall maximal CFI for signal-to-idler prepare-and-measure LOCC is then 𝒞_S→I(θ) = 4/1-N ∑_m |r_m|^2 ⟨∂_θϕ_m^S|∂_θϕ_m^S⟩ - ϖ(θ;W_opt). §.§.§ Lower Bound Just like the idler-to-signal scenario, the measurement-optimal CFI 𝒞_max(θ;W) can be lower bounded using the convexity of the quantum Fisher information: 𝒞_max(θ;W) = 𝒬(θ;ρ_out'[W]) = 𝒬(θ;∑_x W|ξ_x^S⟩⟨ξ_x^S|W^†⊗|ρ_x⟩⟨ρ_x|) ≥𝒬(θ;Tr_Iρ_out'[W]) = 𝒬(θ;∑_x ⟨ξ_x^S| W^† Tr_I|Φ_biph,out⟩⟨Φ_biph,out| W|ξ_x^S⟩ W|ξ_x^S⟩⟨ξ_x^S|W^†) = 𝒞(θ;⟨ξ_x^S|W^† Tr_I|Φ_biph,out⟩⟨Φ_biph,out|W|ξ_x^S⟩) ≥𝒬(θ;Tr_I|Φ_biph,out⟩) which is identical to the lower bound in Eq. (<ref>) for the idler-to-signal protocol QFI. The second inequality in the above chain may always be saturated for single-parameter estimation with an appropriate choice for W. By the same reasoning as the idler-to-signal case, the first inequality is saturated iff ⟨ϕ_m^S|∂_θϕ_n^S⟩ = 0 ∀ m,n (see Eq. (<ref>)). For a Schmidt basis {|ϕ_n^S⟩} that is complete on the signal Hilbert space, this is never satisfied, and we get the stronger inequality 𝒞_max(θ;W) > 𝒬(θ;Tr_I|Φ_biph,out⟩). This then leads to the following hierarchy of Fisher informations: 𝒬(θ;|Φ_biph,out⟩) ≥𝒞_S→I(θ) ≥𝒞_max(θ;W) > 𝒬(θ;Tr_I|Φ_biph,out⟩). Therefore, the reduced signal state QFI 𝒬(θ;Tr_I|Φ_biph,out⟩) serves as a useful benchmark for the effectiveness of LOCC parameter estimation using entangled light. For the signal-to-idler scheme, practical difficulties for a setup requiring the preparation step to necessarily follow the light-matter interaction make this LOCC scheme much less attractive as a means for enhanced θ-estimation compared to the idler-to-signal scheme, where the idler photon is conditioned by preparation POVMs independent of the interaction with the sample and of the subsequent measurement in the signal mode. § OPTIMAL 1-LOCC MEASUREMENT PROJECTORS FOR BIPHOTON SETUP As indicated by the RHS of Eq. (<ref>), the maximisation of the 1-LOCC CFI function can proceed in two steps: first, for a fixed unitary transformation V, we maximise over all signal POVMs {Π^S_y|x=x_m}; in the second step, the resulting quantity is then maximised over all choices of unitary preparation V. §.§ Optimisation over signal POVM {Π^S_y|x=x_m} For a fixed V, maximisation of the CFI over {Π^S_y|x=x_m} is precisely the maximisation that yields the Cramér-Rao bound <cit.> for the conditional state ρ_out'[V], meaning that 𝒞_max(θ;V) = 𝒬(θ;ρ_out'[V]). Abbreviating normalised conditional states |ζ_x'⟩ = 1/√(⟨ζ_x|ζ_x⟩)|ζ_x⟩, and using Eq. (<ref>), 𝒞_max(θ;V) = ∑_x ⟨ζ_x|ζ_x⟩ 𝒬(θ;|ζ_x'⟩⟨ζ_x'|) + 𝒞(θ|{⟨ζ_x|ζ_x⟩}), where ∑_x ⟨ζ_x|ζ_x⟩ 𝒬(θ;|ζ_x'⟩⟨ζ_x'|) = 4/1-N ∑_m |r_m|^2 ⟨∂_θϕ^S_m|∂_θϕ^S_m⟩ - 1/(1-N)^2∑_x 4/⟨ζ_x|ζ_x⟩ | ⟨ξ_x^I|V^†X(θ)V|ξ_x^I⟩|^2, and X(θ) = ∑_m,n r_m r_n^* ⟨ϕ_n^S|∂_θϕ_m^S⟩ |ξ_m^I⟩⟨ξ_n^I| is an operator on the idler I space.
The one other orthonormal basis vector (besides |Φ_biph,out⟩) in the two-dimensional Span[|Φ_biph,out⟩,|∂_θΦ_biph,out⟩] can be constructed as <cit.> |Φ_biph,out^⊥⟩≡(1-|Φ_biph,out⟩⟨Φ_biph,out|) |∂_θΦ_biph,out⟩, in terms of which X = (1-N) Tr_S |Φ_biph,out⟩⟨Φ_biph,out^⊥| + Tr(X) Tr_S |Φ_biph,out⟩⟨Φ_biph,out|, where Tr(X) = [(1-N)⟨Φ_biph,out|∂_θΦ_biph,out⟩ - N_θ/2]. The relation in Eq. (<ref>) can be obtained using Eq. (<ref>), and noting that Tr_S |∂_θΦ_biph,out⟩⟨Φ_biph,out| = N_θ/2(1-N) Tr_S |Φ_biph,out⟩⟨Φ_biph,out| + X/1-N. Finally, we can also recast Eq. (<ref>) in terms of X: 𝒬(θ;|Φ_biph,out⟩) = 4/1-N ∑_n |r_n|^2 ⟨∂_θϕ_n^S|∂_θϕ_n^S⟩ - 4/(1-N)^2|Tr(X)|^2, where we have employed the explicit relation Tr(X) = ∑_n |r_n|^2 ⟨ϕ_n^S|∂_θϕ_n^S⟩. §.§ Optimal Choice of Preparation Unitary V Eq. (<ref>) is the Fisher information corresponding to the optimal 1-LOCC θ-estimation strategy for a given unitary V, obtained by maximising the CFI functional over the set of measurement POVMs {Π_y|x^S} acting on the sub-ensemble states of the signal photon {|ζ_x'⟩}. To identify the optimal 1-LOCC detection scheme that fetches the maximum CFI, we now proceed to maximise 𝒞_max(θ;V) over all V. As only the second term on the RHS of Eq. (<ref>) depends on V, we can see that the maximisation is equivalent to minimisation of the following cost function: ϑ(θ;V) = 1/(1-N)^2∑_x4/⟨ζ_x|ζ_x⟩ | ⟨ξ_x^I|V^†X(θ)V|ξ_x^I⟩|^2 - 𝒞(θ;{⟨ζ_x|ζ_x⟩}). Using the inequality chain 𝒞_max(θ;V) ≤𝒞_max(θ;V_0) < 𝒬(θ;|Φ_biph,out⟩), we have for the cost function ϑ(θ;V) ≥ϑ(θ;V_opt) > 4/(1-N)^2 |Tr(X)|^2, where V_opt is the optimal unitary, defined formally as V_opt : ϑ(θ;V_opt) ≤ϑ(θ;V) ∀  V s.t. V^†V = 1^I, and the second inequality in Eq. (<ref>) follows from Eq. (<ref>). Our strategy in the following will be to construct a preparation unitary that saturates the latter of the inequalities in Eq. (<ref>). While the existence of such a unitary (and hence a 1-LOCC detection) is not guaranteed for general multipartite scenarios, showing that there exists such a unitary preparation for which the inequality is saturated is a sufficient condition for maximisation. §.§ Proof of Theorem <ref> For the traceless (in I space) compact bounded operator Tr_S |Φ_biph,out⟩⟨Φ_biph,out^⊥|, we can always construct <cit.> a preparation-step unitary V_0 such that ⟨ξ_m^I|V_0^† Tr_S |Φ_biph,out⟩⟨Φ_biph,out^⊥| V_0|ξ_m^I⟩ = 0 ∀ m, A short calculation then reveals that (which we will establish separately below) ϑ(θ;V_0) = 1/(1-N)^2|Tr(X)|^2, from which 𝒞_max(θ,V_0) = 𝒬(θ;|Φ_biph,out⟩) follows, establishing Theorem <ref>. Thus for all unitary preparations V_0 that admit the condition in Eq. (<ref>), the corresponding measurement-optimal 1-LOCC saturates the ultimate quantum Cramér-Rao bound. §.§.§ V=V_0 saturates cost function ϑ(θ;V) bound In order to establish the relation in Eq. (<ref>), we first note that ⟨ξ_x| V_0^† X V_0 |ξ_x ⟩ = ⟨ξ_x | V_0^† (1-N)Tr_S |Φ^⊥_biph,out⟩⟨Φ_biph,out|V_0|ξ_x⟩ + Tr(X) ⟨ξ_x|V_0^†Tr_S |Φ_biph,out⟩⟨Φ_biph,out|V_0|ξ_x⟩ =Tr(X) ⟨ζ_x|ζ_x⟩. where we have used the optimality relation for V_0 in Eq. (<ref>) in the first line, and the second line utilises the relation ⟨ξ_x|V_0^† Tr_S |Φ_biph,out⟩⟨Φ_biph,out|V_0|ξ_x⟩ = ⟨ζ_x|ζ_x⟩ which can be easily worked out explicitly. The cost function in Eq.
(<ref>) then becomes ϑ(θ;V_0) = 1/(1-N)^2 ∑_x 1/⟨ζ_x|ζ_x⟩ ⟨ζ_x|ζ_x⟩^2 |Tr(X)|^2 + 𝒞(θ;⟨ζ_x|ζ_x⟩) = 1/(1-N)^2 |Tr(X)|^2 ∑_x 1/1-N r_m^*r_n (V_0)_mx(V_0)_nx^* ⟨ϕ_m|ϕ_n⟩ + 𝒞(θ;⟨ζ_x|ζ_x⟩) = |Tr(X)|^2/(1-N)^2 + 𝒞(θ;⟨ζ_x|ζ_x⟩) where we have used the unitarity of the V_0 matrix : ∑_x (V_0)_mx(V_0)_nx^* = δ_mn. The second step will be to establish that the classical Fisher information 𝒞(θ;⟨ζ_x|ζ_x⟩) corresponding to mixing of the subensemble states {|ζ_x'⟩} vanishes for unitary V_0. In order to do this, we first look at the structure of the inner product ⟨ζ_x|∂_θζ_x⟩ = N_θ/2(1-N) ⟨ζ_x|ζ_x⟩ + 1/1-N ⟨ξ_x|V_0^†XV_0|ξ_x⟩ = (N_θ/2(1-N) + Tr(X)) ⟨ζ_x|ζ_x⟩ = ⟨Φ_biph,out|∂_θΦ_biph,out⟩ ⟨ζ_x|ζ_x⟩ where we have used the relation ⟨ξ_x|V_0^†XV_0|ξ_x⟩ = Tr(X) ⟨ζ_x|ζ_x⟩ in the first line, and Eq. (<ref>) in the second line. Now, keeping in mind that the outgoing biphoton state |Φ_biph,out⟩ is a normalised quantum state, we have ⟨Φ_biph,out|Φ_biph,out⟩ = 1, and thence Re ⟨Φ_biph,out|∂_θΦ_biph,out⟩ = 0. This then gives 𝒞(θ;{⟨ζ_x|ζ_x}) = ∑_x 4 ⟨ζ_x|∂_θζ_x⟩^2/⟨ζ_x|ζ_x⟩ = ∑_x 4 ⟨ζ_x|ζ_x⟩ Re⟨Φ_biph,out|∂_θΦ_biph,out⟩^2 = 0, which proves Eq. (<ref>). Briefly, we also note that a similar result was established recently for multipartite pure and rank 2 states in finite-dimensional Hilbert spaces <cit.>. However, the recipe can not be generalised to our infinite-dimensional CV case due to its dependence on the total system dimension. We have thus employed a different and more direct approach here that establishes existence of 1-LOCC schemes whose CFI is shown to equal the QFI for the outgoing state. § PARAMETRIC RELATION BETWEEN PDC ENTANGLEMENT AND OUTGOING QFI This appendix contains parametric plots depicting the relationship between entanglement of input time-frequency entangled states, as captured by entropy of entanglement defined in Eq. (<ref>), and outgoing biphoton QFI 𝒬(θ;|Φ_PDC,out⟩). Figure <ref> (a)-(b) shows these plots for TLS parameters, while (c)-(d) display the same relationship for CD parameter J, all for PDC input states in Eq. (<ref>). Figure <ref> shows analogous plots for TFM states defined in Eq. (<ref>). § TLS ESTIMATION USING PDC LIGHT WITH Δ>0 The following series of plots reproduce all quantities presented in Section <ref> for TLS parameter estimation using PDC light, but now for nonzero detuning between the central paraxial frequency ω̅_S, and TLS frequency ω_0. §.§ Perfect Coupling In Figures <ref> and <ref>, we see that the QFI admits the same trend with respect to σ_p and T_qent as Figure <ref> for either TLS paramters Γ or ω_0 as the magnitude of detuning |Δ| changes, with the only noticeable effect of the detuning manifesting in the diminished magnitudes for the QFIs. Next, in Figures <ref> and <ref> we plot the degree of optimality of measurement-optimal V = 1^I LOCC, ϰ(θ), for TLS parameters Γ and ω_0 respectively. Figure <ref> displays the departure from the forecasted value of unity for ϰ(Γ) when there is zero detuning between ω_0 and ω̅_S, which can be attributed to the increasing value of the quantity |⟨Φ_PDC,out|∂_ΓΦ_PDC,out⟩| as |Δ| increases. In contrast, Figure <ref> shows that for certain incoming PDC states (see upper left corner of the (σ_p,T_qent) grid) a higher magnitude of detuning can actually enhance the optimality of the measurement-optimal V = 1^I LOCC detection scheme. 
This shows that, even though the overall QFI values decrease with increasing |Δ| for either parameter (therefore rendering all possible estimators less precise), the effect of increasing detuning on the degree of optimality of measurement schemes is less certain, and may depend on the parameter of interest. A similar behaviour is observed in Figures <ref> and <ref> — for Γ-estimation using PDC photons, the enhancement ς(Γ) of V = 1^I over all single-mode schemes decreases as detuning increases (Figure <ref>), whereas for ω_0-estimation, a larger enhancement ς(ω_0) can be obtained using PDC states with the signal photon far detuned from the TLS frequency ω_0 (Figure <ref>). §.§ Free Space Γ_⊥>0 We can also study the effect of non-zero detuning on TLS estimation for the free space scenario, characterised by non-zero coupling to the E space, Γ_⊥>0. In Figures <ref> and <ref>, we plot the Γ-QFI for the same grid of PDC characteristics σ_p and T_qent as Figure <ref>. Again, the trend for QFI values against PDC characteristics σ_p and T_qent remains unchanged, with moderately diminishing values for the QFI as detuning increases. A similar effect is seen for the TLS parameter ω_0 in Figures <ref> and <ref> for free space couplings Γ_⊥/Γ = 0.5 and Γ_⊥/Γ = 10.0 respectively. § LIGHT-CD INTERACTION IN EXCITONIC BASIS The coupled dimer (CD) system is comprised of two two-level systems, coupled to each other via an attractive Coulomb interaction, so that the bare molecular Hamiltonian is H^CD = ∑_j=a,bħω_j|j⟩⟨j| + ħ(ω_a+ω_b)|f⟩⟨f| + J (|a⟩⟨b| + |b⟩⟨a|), where J is the coupling strength between the two sites labelled a and b, whose excited levels are respectively |a⟩ and |b⟩, and |f⟩ is the doubly excited state, characterised by level energy ħ(ω_a+ω_b), obtained under the assumption of zero binding energy, meaning that there is no interaction between the two excitations. This Hamiltonian is obtained by setting P=2 in the P-site Hamiltonian in Eq. (<ref>), and is a precursor to the far more complex Frenkel-Holstein Hamiltonians <cit.> that are used to model complex dynamics (including excitonic energy transport (EET) and long-lived quantum beats) in photosynthetic LHCs. In these more involved Hamiltonians, also included are site-dependent reorganisation energy terms, and interaction terms corresponding to phonon baths comprised of harmonic oscillators at each site whose displacement couples linearly with each excitation <cit.>. This is in addition to modelling each pigment site as a two-level system to account for the lowest transition, as well as interstitial coupling terms that appear in both the n-site and coupled dimer Hamiltonians in Eqs. (<ref>) and (<ref>) respectively. For an analytically tractable description of CD dynamics, we transform to the diagonal basis for the CD system, called the excitonic basis, through the eigendecomposition of the Hamiltonian in Eq. (<ref>), H^CD = ∑_j=α,βħω_j|j⟩⟨j| + ħ(ω_a+ω_b)|f⟩⟨f|, where, effectively, we have diagonalised in the singly-excited manifold (SEM) space only (|g⟩ and |f⟩ states are already eigenstates of H^CD), yielding SEM eigenstates |α⟩ and |β⟩. The delocalised excitonic states can be explicitly related to the basis kets |a⟩ and |b⟩ by the following relations <cit.>, |α⟩ = cosΘ|a⟩ + sinΘ|b⟩, |β⟩ = -sinΘ|a⟩ + cosΘ|b⟩, where Θ = 1/2arctan(2J/δ) and δ = ħ(ω_a-ω_b), and the corresponding eigenvalues are ω_α = ω̅ - (δ/2ħ)sec2Θ, ω_β = ω̅ + (δ/2ħ)sec2Θ respectively for the |α⟩ and |β⟩ state.
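The excitonic transformation above is easily cross-checked numerically; in the sketch below the site frequencies and coupling are hypothetical values in arbitrary units (Python with numpy assumed):

    import numpy as np

    hbar = 1.0
    w_a, w_b, J = 10.0, 9.0, 0.35            # hypothetical CD parameters

    # Singly-excited-manifold block of H^CD in the site basis {|a>, |b>}.
    H_sem = np.array([[hbar * w_a, J],
                      [J, hbar * w_b]])

    delta = hbar * (w_a - w_b)
    Theta = 0.5 * np.arctan(2 * J / delta)
    w_bar = 0.5 * (w_a + w_b)

    # Excitonic frequencies from the closed form above ...
    w_alpha = w_bar - (delta / (2 * hbar)) / np.cos(2 * Theta)
    w_beta = w_bar + (delta / (2 * hbar)) / np.cos(2 * Theta)

    # ... and from direct diagonalisation of the SEM block.
    print(sorted([w_alpha, w_beta]), np.linalg.eigvalsh(H_sem) / hbar)   # the two agree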
For reference, CD site and excitonic bases are visually represented as level diagrams in Figure <ref>. The transition dipole moment operator d⃗ of the CD system that couples with the incoming electric field has the following form in the site basis, d⃗ = ∑_i=a,bμ⃗_ig |i⟩⟨g| + ∑_i=a,bμ⃗_fi |f⟩⟨i| + h.c. which we can transform to the excitonic basis using Eq. (<ref>) where it has the analogous form, d⃗ = ∑_i=α,βμ⃗_ig |i⟩⟨g| + ∑_i=α,βμ⃗_fi |f⟩⟨i| + h.c. such that the various vector dipole elements transform via the following Θ-rotation matrix, [ μ⃗_α g; μ⃗_β g ] = [ cosΘ sinΘ; -sinΘ cosΘ ]  [ μ⃗_ag; μ⃗_bg ], [ μ⃗_fα; μ⃗_fβ ] = [ -sinΘ cosΘ; cosΘ sinΘ ]  [ μ⃗_fa; μ⃗_fb ]. As we will restrict our discussion to the use of pulses carrying single excitation to the CD system, and the CD system itself being in the ground state |g⟩ at t=0, the dipole operator elements μ⃗_fi that link the SEM and doubly excited level |f⟩ can be dropped altogether. We will then abbreviate the GSM-SEM transition dipoles for brevity as μ⃗_ig≡μ⃗_i = |μ⃗_i|μ̂_̂î, i=α,β for brevity. In order to describe the light-CD interaction, we transform to the interaction picture with respect to the zeroth-order Hamiltonian, H_0^CD = ∑_i=α,βħω̅_S|i⟩⟨i| + 2ħω̅_S|f⟩⟨f| + H^F, where ω̅_S is the central frequency of the signal mode, and the free field Hamiltonian is H^F = ∫ dω ħω a_S^†(ω)a_S(ω) + ∫ dω ħω b^†(ω)b(ω), where we have dropped the free field term corresponding to idler mode because of the explicit assumption that the CD system only interacts with a single mode in this input-output setup, and the operator b(t) corresponds to the environmental modes E. The interaction picture Hamiltonian is then H^CD(t) = H^CD_I - d⃗. E⃗(t) = ∑_i=α,βħΔ_i|i⟩⟨i| + ħ(Δ_α+Δ_β)|f⟩⟨f| -iħ [√(Γ) Σ^† a_S(t)⊗1^I⊗1^E + √(Γ_⊥) Σ^† 1^S⊗1^I⊗ b(t) - h.c] where Σ^† = √(ω_c μ_0^2/2ϵ_0 c A_0 ħ) (λ_ασ_+^α + λ_βσ_+^β) = √(2Γ) (λ_ασ_+^α + λ_βσ_+^β), where σ_+^i = |i⟩⟨g|,λ_i = (ê.μ̂_i) (|μ⃗_i|/μ_0) ( i=α,β) is the collective SEM raising operator, and H^CD_I = ∑_i=α,βħΔ_i|i⟩⟨i| + ħ(Δ_α+Δ_β)|f⟩⟨f| where Δ_i = ω_i - ω̅_S (i=α,β) are the detunings from the central signal pulse frequency of the excitonic levels, and a(t) are white-noise operators previously defined in Eq (<ref>) corresponding to the incoming paraxial mode. In obtaining these, we have re-centered the frequency integrals with the transformation ω→ω-ω̅_S so that the field operators a_S(ω) and b(ω) are centred around ω=0. The extension of the frequency integrals to all frequencies from -∞ to +∞ is a consequence of the fact that the CD system-quantum pulse interaction is assumed to be peaked around the central frequency of the oncoming pulse, allowing us to invoke the white noise approximation of SVEA. § CHARACTERISTIC MATTER FUNCTION FOR BARE CD HAMILTONIAN Closed expressions for the characteristic function f^CD(t_1) can be calculated explicitly for the bare CD Hamiltonian using the following formula for matrix exponential of a 2×2 matrix M = [ a b; c d ]∈𝐂^2×2 <cit.> e^M = e^a+d/2 [ coshυ + a-d/2 sinhυ/υ b sinhυ/υ; c sinhυ/υ coshυ - a-d/2 sinhυ/υ ]. where υ = (1/2)√((a-d)^2 + 4bc). 
For the bare CD Hamiltonian, the characteristic function can be expressed as the expectation value of the matrix exponential f_CD(t_1) = [ λ_α λ_β ] exp[ -i H^CD_I t_1/ħ - 1/2Σ^†Σ t_1] [ λ_α; λ_β ] which is then f_CD(t_1) = e^-iΔt_1-Γ t_1/2(λ_a^2+λ_b^2) { (λ_a^2 + λ_b^2)coshυ + sinhυ/υ[ (λ_α^2-λ_β^2)a-d/2 - 2λ_α^2λ_β^2 Γ t_1 ]} where Δ = (Δ_α+Δ_β)/2 is the averaged detuning, and the various terms (in terms of the CD parameters) are υ = t_1/2 √(-δ^2sec^2 2Θ/ħ^2 + Γ^2(λ_a^2+λ_b^2)^2 - 2iδΓ/ħ sec2Θ (λ_α^2-λ_β^2) ) and a-d/2 = iδ t_1 sec2Θ/2ħ - Γ t_1/2(λ_α^2-λ_β^2). The QFI for the outgoing single photon state that scatters off of the CD system is proportional to a convolution of the incoming field envelope, and the parametric derivative of the characteristic function f_CD(t_1), which can also be calculated explicitly for the bare CD Hamiltonian for the J parameter, ∂/∂ J f_CD(t_1) = e^-iΔt_1-Γ t_1/2(λ_a^2+λ_b^2)[ ∂υ/∂ J { (λ_a^2+λ_b^2)sinhυ + ( coshυ/υ - sinhυ/υ^2)((λ_α^2-λ_β^2)a-d/2 - 2λ_α^2λ_β^2 Γ t_1 ) } + δ/δ^2+4J^2sinhυ/υ{ 4λ_αλ_βa-d/2 + (λ_α^2-λ_β^2)iδ t_1/ħ sec2Θtan 2Θ + 2Γ t_1 λ_αλ_β(λ_α^2-λ_β^2) } ] where ∂υ/∂ J = δ/δ^2+4J^2 t_1^2/8υ { -4δ^2/ħ^2 sec^2 2Θtan 2Θ - 4iδΓ/ħ sec2Θtan 2Θ (λ_α^2-λ_β^2) - 8iδΓ/ħ sec2Θλ_αλ_β}. § RELATION BETWEEN QFI OF PDC STATE AND POST-SELECTED BIPHOTON STATE The vacuum term in the two-photon PDC state in Eq. (<ref>) does not contribute to the detected signal, and is often dropped in theoretical analyses by post-selecting for the detected two-photon states only. The post-selected biphoton state, renormalised to ensure a unit norm, can be expressed in terms of the PDC JTA as: |Φ_biph⟩ = 1/√(Λ) ∫ dt_S∫ dt_I Φ_PDC(t_S,t_I) a_S^†(t_S)a_I^†(t_I)|0⟩ where Λ = ∫ dt_S∫ dt_I Φ_PDC^*(t_S,t_I)Φ_PDC(t_S,t_I) is the normalisation factor for the post-selected state. We can then work out the outgoing state corresponding to the biphoton state in Eq. (<ref>) for arbitrary matter systems with Hamiltonian H_I^M, also in terms of the PDC JTA: |Φ_biph,out⟩ = 1/√(Λ) ∫ dt_S∫ dt_I Φ_PDC,out(t_S,t_I) a_S^†(t_S)a_I^†(t_I)|0⟩ where Φ_PDC,out(t_S,t_I) = ∑_n r_n,PDC ϕ_n,PDC^S(t_S) h_n^I(t_I). This can then be used to calculate the corresponding QFI for parameter θ, 𝒬(θ;|Φ_biph,out⟩) = 4/Λ[∫ dt_S∫ dt_I ∂Φ_PDC,out(t_S,t_I)^*/∂θ ∂Φ_PDC,out(t_S,t_I)/∂θ - 1/Λ|∫ dt_S∫ dt_I∂Φ_PDC,out(t_S,t_I)^*/∂θΦ_PDC,out(t_S,t_I) |^2 ]. The outgoing PDC state QFI and the post-selected state QFI are then related to each other as 𝒬(θ;|Φ_biph,out⟩) = N_PDC/Λ𝒬(θ;|Φ_PDC,out⟩) + 4(Λ - 1)/Λ^2 |∫ dt_S∫ dt_I∂Φ_PDC,out(t_S,t_I)^*/∂θΦ_PDC,out(t_S,t_I) |^2. It is interesting to note the contrasting nontriviality of this relation (where the transformation between the two QFIs depends on the true value of the parameter θ) vis-à-vis the transformation of the QFI function when the parameter is rescaled (in which case the QFI is rescaled by the square of the constant scaling factor, 𝒬(θ/c;|ψ⟩⟨ψ|) = (1/c^2)𝒬(θ;|ψ⟩⟨ψ|)). The biphoton normalisation factor Λ is proportional to the rate of entangled photon production given by (α_pump^2/ħ^2), which is typically a very small number given that the PDC process only converts between one in 10^6 and one in 10^11 pump photons into entangled daughter photons, depending on the particular nonlinear crystal used, and other experimental variables. In this text, for our choice of α_pump/ħ = 0.01, we can safely conclude that Λ≪1, and (by the same token) N_PDC≈1.
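As a quick cross-check of the closed form for f_CD(t_1), the expectation value of the 2×2 matrix exponential can be evaluated directly; the parameter values below are hypothetical stand-ins in arbitrary units (Python with numpy/scipy assumed):

    import numpy as np
    from scipy.linalg import expm

    Gamma = 0.2                                  # hypothetical coupling
    Delta_alpha, Delta_beta = 0.4, -0.6          # hypothetical excitonic detunings
    lam_alpha, lam_beta = 0.8, 0.5               # hypothetical excitonic dipole projections
    lam = np.array([lam_alpha, lam_beta])

    H_I = np.diag([Delta_alpha, Delta_beta])     # SEM block of H^CD_I / hbar
    SigSig = 2 * Gamma * np.outer(lam, lam)      # Sigma^dagger Sigma in the SEM block

    def f_CD_numeric(t):
        return lam @ expm(-1j * H_I * t - 0.5 * SigSig * t) @ lam

    def f_CD_closed(t):
        a = -1j * Delta_alpha * t - Gamma * lam_alpha ** 2 * t
        d = -1j * Delta_beta * t - Gamma * lam_beta ** 2 * t
        b = -Gamma * lam_alpha * lam_beta * t
        u = 0.5 * np.sqrt((a - d) ** 2 + 4 * b * b + 0j)
        pref = np.exp((a + d) / 2)
        ls2 = lam_alpha ** 2 + lam_beta ** 2
        return pref * (ls2 * np.cosh(u) + np.sinh(u) / u
                       * ((lam_alpha ** 2 - lam_beta ** 2) * (a - d) / 2
                          - 2 * lam_alpha ** 2 * lam_beta ** 2 * Gamma * t))

    print(f_CD_numeric(1.3), f_CD_closed(1.3))   # the two coincide

Returning to the comparison of the PDC and post-selected QFIs: with α_pump/ħ = 0.01 one indeed has Λ ≪ 1 and N_PDC ≈ 1, as noted above.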
This yields the simpler relation between the two QFIs, 𝒬(θ;|Φ_PDC,out⟩) ≈Λ/N_PDC𝒬(θ;|Φ_biph,out⟩) + 4/Λ N_PDC |∫ dt_S∫ dt_I∂Φ_PDC,out(t_S,t_I)^*/∂θΦ_PDC,out(t_S,t_I) |^2.
http://arxiv.org/abs/2307.00520v1
20230702090331
Discovery of a relation between the decay rate of the Sun's magnetic dipole and the growth rate of the following sunspot cycle: a new precursor for solar cycle prediction
[ "Priyansh Jaswal", "Chitradeep Saha", "Dibyendu Nandy" ]
astro-ph.SR
[ "astro-ph.SR" ]
firstpage–lastpage zkFi: Privacy-Preserving and Regulation Compliant Transactions using Zero Knowledge Proofs June 2023 Amit Chaudhary ======================================================================================================= Sunspots have been observed for over four centuries and the magnetic nature of sunspot cycles has been known for about a century: however, some of its underlying physics still remain elusive. It is known that the solar magnetic cycle involves a recycling of magnetic flux between the poloidal and toroidal components of the magnetic field, that manifests as the solar dipole and sunspots, respectively. Here we report the discovery of a new relationship between the rise rate of the sunspot cycle and the decay rate of the solar (axial) dipole moment. We argue that this points to the existence of a causal connection between the aforementioned physical quantities – providing an extension to the Waldmeier effect: namely, the decay rate of the Sun's dipole moment is related to the rate of rise and eventual amplitude of the following sunspot cycle. We demonstrate how one may take advantage of this new relationship to predict the amplitude and timing of the sunspot cycle. Our analysis indicates solar cycle 25 is going to be a weak-moderate cycle, peaking in 2024.00_-0.49^+0.68. Sun: activity – Sun: magnetic fields – Sun: interior § INTRODUCTION Our host star, the Sun, is a dynamic star whose magnetic activity varies across a wide range of timescales spanning from minutes to millennia and beyond <cit.>. The most prominent signature of this variability is captured by the waxing and waning of sunspots – dark, magnetized patches on the Sun's surface – that repeats almost every 11 years, known as the sunspot cycle. Sunspot cycles exhibit significant fluctuations in both amplitude and duration that occasionally result in extreme activity phases like solar grand minima and grand maxima <cit.>. The Sun's dynamic activity output influences the entirety of the heliosphere including our home planet, the Earth, by shaping its space environmental conditions and determining the habitability <cit.>. Therefore, developing accurate predictive capabilities pertaining to the long-term solar activity is crucial in planning future space missions and safeguarding space-reliant technologies <cit.>. Stripped down to its fundamental essence, the magnetic activities of the Sun originate in its deep interior, wherein, a magnetohydrodynamic dynamo action generates and recycles the Sun's large-scale magnetic fields <cit.>. The emergence of magnetic flux on the solar surface and its poleward migration under various flux-transport processes like supergranular diffusion, meridional circulation, etc. contribute to the gradual build up of global solar axial dipole moment (hereafter, dipole moment) <cit.>. It is evident from observations that the mean latitude of sunspot emergence drifts towards the equator with the progress of sunspot cycles <cit.>, thereby facilitating cross-equatorial diffusion of magnetic fluxes and their cancellation across the equatorial region. Recently, <cit.> demonstrated that the emergence of new sunspots during the decaying phase of a sunspot cycle do not have considerable influence on the polar field build up. In fact, earlier studies have detected plateau-like intervals in the dipole moment time series – showing no substantial changes in its magnitude for an extended duration of multiple years – during the descending phase of sunspot cycles <cit.>. 
On the other hand, meridional circulation, turbulent diffusion and turbulent magnetic pumping are believed to work in tandem to advect poloidal fields accumulated in the polar caps down into the base of solar convection zone (SCZ), where strong radial and latitudinal shear induct toroidal field that acts as a seed for the next sunspot cycle <cit.>. Generation of toroidal field in SCZ consumes the poloidal field of previous cycle. As a matter of fact, the solar dipole moment comes out of the plateau-like phase and starts decaying abruptly with almost a uniform rate. Besides, the toroidal fields produced at the base of SCZ become buoyantly unstable, rise up through the convection zone in the form of magnetic flux tubes and penetrates the solar surface – thereby producing sunspots of the new cycle. Decay and dispersal of these new sets of sunspots eventually lead to a growth in the Sun's poloidal field, but with opposite polarity as compared to the previous cycle (see Fig.<ref>, Top panel). This sequence of events indicates the existence of a causal connection between the decay of solar polar fields and dipole moment, and the rise of the following sunspot cycle. In fact it is widely known that steeply rising sunspot cycles peak to higher amplitudes and vice versa – known as the Waldmeier effect <cit.>. <cit.> found correlation between the decay rate of polar fields and the amplitude of the subsequent sunspot cycle across individual hemispheres of the Sun. However, it is to be noted that the decay of high-latitude polar field is almost concurrent with the ascent of the following sunspot cycle, leading to a narrow temporal window for solar cycle prediction. In this context, the dipole moment of the Sun has the potential to become a better precursor compared to the high-latitude polar field, where the former leads the latter by about a year as evidenced in observational data. <cit.> argued this time lag to originate from the delay induced by the poleward transport of low- and mid-latitude magnetic fields – during the formation of high-latitude polar fields. In this work, we investigate the relationship between the declining phase of the axial dipole moment associated with the solar cycle and the rise rate of the following sunspot cycle. We find a compelling relationship between the two. We argue that this is theoretically expected and points to a causal connection between the flux transport dynamics mediated dispersal of active region flux during the rise of a sunspot cycle and the cancellation of the polar field of the previous cycle. Furthermore, we demonstrate how this new relationship can be utilized to predict the future sunspot cycle, especially the timing of its peak which is a challenging task. Our results also support the Babcock-Leighton paradigm of the sunspot cycle which proposes that the decay and dispersal of the flux of tilted bipolar sunspot pairs mediated via surface flux transport processes is the primary mechanism for solar poloidal field's creation. § METHODS AND RESULTS We make use of total sunspot number database maintained by the SIDC-SILSO and the solar synoptic charts recorded at the Wilcox Solar Observatory (WSO), covering the information of photospheric solar magnetic activity since 1976 to 2023. 
For a given synoptic chart corresponding to a particular Carrington Rotation number associated with time t, global axial dipole moment of the Sun, D, at that instant can be formulated as, <cit.>, D(t) = 3/2∫^π_0B(θ, t) cosθsinθ dθ, where, B represents azimuthally averaged radial magnetic field of the Sun at colatitude θ. In the rising phase of a sunspot cycle the number of sunspots surges, accompanied by a fall in the magnitude of solar dipole moment until the latter reverses its global polarity (see Fig. <ref>, Middle and Bottom panels). This observation falls in line with the previously mentioned dynamo mechanism pertaining to the cyclic generation of poloidal and toroidal components of the Sun's large-scale magnetic field. Observations show that the polarity reversal of dipole moment precedes the occurrence of sunspot cycle peak by around a year. We hereby report the latest reversal in polarity of the solar dipole moment to have already occurred almost a year ago, during July 2022 – which anticipates an imminent cycle maximum of the ongoing sunspot cycle 25. Since, the growth of a sunspot cycle (say n) devours the precursor dipole moment of cycle (n-1), one would expect the time rates of these two physical processes to be in causal correlation with each other. To investigate this, we analyze the time series of the past four sunspot cycles (SC_21-24) and their corresponding precursor dipole moment cycles (D_20-23), by implementing linear regression over their growth and declining phases, respectively. We define, the growth phase of the sunspot cycle as the interval during which the sunspot numbers rise from the cycle minimum to the cycle maximum with the rate, r_SSN. On the other hand, we take a semi-analytical approach (prescribed in Appendix <ref>) to determine the decay intervals of individual dipole moment cycles, based on which we estimate their rate of decay, r_DM. We find these two dynamical quantities, namely r_SSN and r_DM strongly correlate with each other (Pearson's r=0.98 with 97.73% confidence level), as described in Fig. <ref>, and the correlation can be expressed as follows, r_SSN = 1.83 × r_DM - 19.17 Utilizing the observed rate of decay of dipole moment cycle D_24 (i.e., ∼26.1 gauss per year) in the empirical relationship prescribed above we estimate the rate of rise of the ongoing sunspot cycle 25 to be 28.5±4.7 sunspots per year – which is higher than that of the previous sunspot cycle 24 but lower than cycle 23 (see Table <ref>). We note that the outcome of the aforementioned regression is sensitive to the choice of initial epoch in the decay interval of dipole moment cycles and we discuss more on this in Appendix <ref>. Now we demonstrate how an amalgamation of this prior knowledge on the rise rate of a sunspot cycle, and its amplitude predicted by other independent means can be extended to forecasting the time of occurrence of its peak. Earlier studies have found that the magnitude of solar polar field and dipole moment at the sunspot cycle minimum significantly correlate with the strength of the subsequent sunspot cycle <cit.>. Fig. 
<ref> depicts that even the amplitude of the dipole moment, A_DM, has a significant correlation with the subsequent sunspot cycle amplitude, A_SSN, which can be expressed in the form of the following independent relationship: A_SSN = 2.00 × A_DM + 13.16. Substituting the observed amplitude of D_24 (=51.75 gauss) in equation (<ref>), we estimate the strength of the imminent sunspot cycle 25 maximum to be 116.91 ± 2.89, denoting a weak-moderate cycle similar to or slightly stronger than cycle 24. We mark the sunspot cycle minimum during December 2019 (say, t_25^i) with a monthly mean amplitude of 1.8 (say, A_25^i) as the beginning of the ongoing sunspot cycle 25. Ascribing a uniform average rise rate to this cycle (i.e., r_25 = 28.5±4.7 sunspots per year) as estimated from equation (<ref>), and considering its amplitude (i.e., A_25^f = 116.91 ± 2.89, predicted from equation (<ref>)), we forecast the time of occurrence of the peak of sunspot cycle 25, t_25^f, to be t_25^f = t_25^i + (A_25^f - A_25^i)/r_25 = 2024.00_-0.49^+0.68. § CONCLUSIONS Analyzing long-term observations of solar photospheric magnetic activity for the past four sunspot cycles, we discover a compelling correlation between the decay rate of the solar dipole moment and the rise rate of the following sunspot cycle. We have explained how this correlation emerges out of a causal connection between the emergence and surface flux transport of new tilted bipolar sunspot pairs (cause) and the decay and reversal of the previous cycle's poloidal field (effect). Given that this causal connection is intimately related to the Babcock-Leighton mechanism for solar polar field generation, our work provides independent confirmation that this mechanism is an integral part of the solar dynamo. The rise rate of a sunspot cycle (say, cycle n) is known to be related to the eventual peak of that sunspot cycle (n) – a relationship known as the Waldmeier effect. Our work establishes an extension of this Waldmeier effect which can be succinctly stated as: the rate of decay of the Sun's axial dipole moment of cycle (n - 1) is related to the rate of rise and the eventual strength of the following sunspot cycle (i.e., cycle n). Additionally, we formulate a semi-analytical framework to determine the decay time interval of the dipole moment. It is worth noting that the evolution of the dipole moment precedes that of the solar polar field by nearly a year, which significantly extends the prediction window for the dynamics of the upcoming sunspot cycle with improved accuracy. The existence of such a strong correlation, in fact, enables one to forecast the timing of a sunspot cycle's peak once the amplitude of that cycle is independently anticipated. For example, we show that the ongoing sunspot cycle is likely to peak during January 2024 (within the range of July 2023 to September 2024), based on its empirically estimated amplitude of 116.91±2.89. Note that this estimated amplitude matches the physical-model-based prediction of <cit.>. Predicting the time of maximum amplitude of a sunspot cycle is important for gauging when the most adverse space environmental (space weather) conditions are expected. This information is important for assessing solar radiative forcing of the Earth's upper atmosphere, for the protection of space-based technological assets, and for mission lifetime estimates. Predicting the timing of the peak of sunspot cycles has remained a challenging task for physics-based models.
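For reference, the empirical prediction chain described above can be condensed into a short script. The sketch below (in Python) uses the rounded regression coefficients and the cycle-25 inputs quoted in the text, so it reproduces the quoted predictions only approximately; the decimal-year value assigned to the December 2019 minimum is our own assumption.

# Illustrative sketch of the empirical prediction chain (not the authors' analysis code).
def rise_rate_from_dipole_decay(r_dm):
    # r_SSN = 1.83 x r_DM - 19.17 (fitted relation quoted in the text)
    return 1.83 * r_dm - 19.17

def amplitude_from_dipole_amplitude(a_dm):
    # A_SSN = 2.00 x A_DM + 13.16 (fitted relation quoted in the text)
    return 2.00 * a_dm + 13.16

def peak_epoch(t_min, a_min, a_peak, rise_rate):
    # t_peak = t_min + (A_peak - A_min) / r, assuming a uniform rise rate
    return t_min + (a_peak - a_min) / rise_rate

r_dm_24 = 26.1      # gauss per year, observed decay rate of dipole moment cycle D_24
a_dm_24 = 51.75     # gauss, observed amplitude of D_24
t_25_min = 2019.96  # December 2019 cycle minimum as a decimal year (assumed value)
a_25_min = 1.8      # monthly mean sunspot number at the cycle 25 minimum

r_25 = rise_rate_from_dipole_decay(r_dm_24)         # ~28.6 sunspots per year
a_25 = amplitude_from_dipole_amplitude(a_dm_24)     # ~116.7
print(peak_epoch(t_25_min, a_25_min, a_25, r_25))   # ~2024.0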
We have provided an alternative empirical method for predicting the timing of the sunspot cycle peak, which can be implemented only after a significant fraction of the rising phase of the sunspot cycle has occurred. The physical-model-based forecast of <cit.> predicted the peak to occur in 2024 (± 1 year). This convergence of our empirical prediction with the early, physics-based prediction augurs well for the field of solar cycle predictions. § ACKNOWLEDGEMENTS C.S. acknowledges a fellowship from CSIR through grant no. 09/921(0334)/2020-EMR-I. CESSI is funded by IISER Kolkata, Ministry of Education, Government of India. § DATA AVAILABILITY We use total sunspot number data made available by WDC-SILSO[https://www.sidc.be/SILSO/datafiles], Royal Observatory of Belgium, Brussels. We also make use of Wilcox Solar Observatory synoptic charts[http://wso.stanford.edu/synopticl.html]. Scripts of our statistical analyses will be shared on reasonable request to the corresponding author. § DETERMINATION OF DECAY TIME INTERVAL OF DIPOLE MOMENT CYCLES
http://arxiv.org/abs/2307.01398v1
20230703233318
The Structure of Coronal Mass Ejections Recorded by the K-Coronagraph at Mauna Loa Solar Observatory
[ "Hongqiang Song", "Leping Li", "Zhenjun Zhou", "Lidong Xia", "Xin Cheng", "Yao Chen" ]
astro-ph.SR
[ "astro-ph.SR", "physics.space-ph" ]
Hongqiang Song [email protected] Shandong Provincial Key Laboratory of Optical Astronomy and Solar-Terrestrial Environment, and Institute of Space Sciences, Shandong University, Weihai, Shandong 264209, China National Astronomical Observatories, Chinese Academy of Sciences, Beijing, 100101, China School of Atmospheric Sciences, Sun Yat-sen University, Zhuhai, Guangdong 519000, China Shandong Provincial Key Laboratory of Optical Astronomy and Solar-Terrestrial Environment, and Institute of Space Sciences, Shandong University, Weihai, Shandong 264209, China School of Astronomy and Space Science, Nanjing University, Nanjing, Jiangsu 210093, China Shandong Provincial Key Laboratory of Optical Astronomy and Solar-Terrestrial Environment, and Institute of Space Sciences, Shandong University, Weihai, Shandong 264209, China Institute of Frontier and Interdisciplinary Science, Shandong University, Qingdao, Shandong 266237, China Previous survey studies reported that coronal mass ejections (CMEs) can exhibit various structures in white-light coronagraphs, and ∼30% of them have the typical three-part feature in the high corona (e.g., 2–6 R_⊙), which has been taken as the prototypical structure of CMEs. It is widely accepted that CMEs result from eruption of magnetic flux ropes (MFRs), and the three-part structure can be understood easily by means of the MFR eruption. It is interesting and significant to answer why only ∼30% of CMEs have the three-part feature in previous studies. Here we conduct a synthesis of the CME structure in the field of view (FOV) of K-Coronagraph (1.05–3 R_⊙). In total, 369 CMEs are observed from 2013 September to 2022 November. After inspecting the CMEs one by one through joint observations of the AIA, K-Coronagraph and LASCO/C2, we find 71 events according to the criteria: 1) limb event; 2) normal CME, i.e., angular width ≥ 30^∘; 3) K-Coronagraph caught the early eruption stage. All (or more than 90% considering several ambiguous events) of the 71 CMEs exhibit the three-part feature in the FOV of K-Coronagraph, while only 30–40% have the feature in the C2 FOV (2–6 R_⊙). For the first time, our studies show that 90–100% and 30–40% of normal CMEs possess the three-part structure in the low and high corona, respectively, which demonstrates that many CMEs can lose the three-part feature during their early evolutions, and strongly supports that most (if not all) CMEs have the MFR structures. § INTRODUCTION On 1971 September 29, the first orbiting white-light coronagraph <cit.> was launched on Orbiting Solar Observatory Number 7 <cit.>, and the coronagraph recorded the first images of a coronal mass ejection (CME) on 1971 December 14 <cit.>. From that time onwards, CMEs have been a subject of intense investigation in solar physics <cit.>. Theoretically, CMEs originate from eruption of magnetic flux ropes (MFRs), which can form prior to <cit.> and during <cit.> solar eruptions. In general, hot channels and coronal cavities are regarded as the proxies of MFRs in active regions <cit.> and quiet-Sun regions <cit.>, respectively. The MFR can provide support to the prominence against gravity <cit.>. Therefore, CMEs are usually associated with eruptions of hot channels (high-temperature ejecta) <cit.>, coronal cavities (middle-temperature ejecta) <cit.>, and/or prominences (low-temperature ejecta) <cit.> observationally. 
The white-light coronagraph on the Solar Maximum Mission satellite recorded a CME with a three-part structure on 1980 August 5, i.e., a bright core within a dark cavity surrounded by a bright loop front <cit.>; that study also included low coronal observations of the CME from the Mauna Loa MK3 coronameter. Since then, the three-part structure has become the prototypical structure of CMEs <cit.>, though only ∼30% of CMEs exhibit the three-part feature in the high corona, e.g., 2–6 R_⊙. For several decades, it has been widely accepted that the bright front originates from the plasma pileup along the MFR boundary, the cavity represents the MFR, and the bright core corresponds to the prominence <cit.>. However, recent studies have pointed out that the traditional opinion is questionable, because some three-part CMEs are not associated with prominence eruptions at all <cit.>. Based on dual-viewpoint and seamless observations from the inner to outer corona, a new explanation of the three-part nature has been proposed, in which the bright frontal loop is formed due to compression as the magnetic loops are successively pushed to stretch up by the underlying MFR <cit.>, the core can correspond to the MFR and/or prominence, and the dark cavity between the CME front and the MFR is a low-density zone with sheared magnetic field <cit.>. Recent observations clearly demonstrated that both hot channels <cit.> and coronal cavities <cit.> evolved into the bright core of three-part CMEs. The new explanation also points out that CMEs can lose the three-part feature gradually when propagating outwards, because the dark cavity vanishes due to the MFR expansion and growth through magnetic reconnections, and/or because the bright core fades away due to the prominence expansion and drainage <cit.>. This answers why only a portion of CMEs have the prototypical structure in previous survey studies <cit.>. As mentioned, CMEs result from MFR eruption; thus the new explanation predicts that all normal CMEs possess the three-part structure in the low corona, i.e., in the early eruption stage. Here the normal CMEs do not include the narrow ones with angular width less than 30^∘, for two reasons: 1) the narrow events might be jets, instead of CMEs with small angular width; 2) to observe the structure of narrow CMEs, coronagraphs with higher spatial resolution are necessary. To examine whether all normal CMEs have the three-part structure in the early eruption stage, we conduct a survey study based on observations of the COronal Solar Magnetism Observatory (COSMO) K-coronagraph (K-COR) from 2013 September to 2022 November. The paper is organized as follows. Section 2 introduces the related instruments and methods. The observations and results are displayed in Section 3, which is followed by a summary and discussion in the final section. § INSTRUMENTS AND METHODS The Atmospheric Imaging Assembly (AIA) <cit.> on board the Solar Dynamics Observatory (SDO) <cit.> takes images of the Sun through seven EUV channels. The AIA has a field of view (FOV) of 1.3 R_⊙, a spatial resolution of 1.2″ and a cadence of 12 s. Here we use the 131 Å (Fe XXI, ∼10 MK) and 304 Å (He II, ∼0.05 MK) channels to display the hot channel and prominence, respectively. The K-COR is one of three proposed instruments in the COSMO facility suite <cit.> located at the Mauna Loa Solar Observatory (MLSO), and records the coronal polarization brightness (pB) in the passband of 7200–7500 Å, which is formed by Thomson scattering of photospheric light from free electrons <cit.>.
The FOV of K-COR is 1.05–3 R_⊙ with a pixel size of 5.5 and a nominal cadence of 15 s. The Large Angle and Spectrometric Coronagraph <cit.> on board the Solar and Heliospheric Observatory <cit.> comprises of three telescopes (C1, C2 and C3), each of which has an increasingly large FOV. Here the C2 (FOV: 2–6 R_⊙) is adopted to observe the CME structure in the outer corona. The images of space-borne LASCO/C2 have better contrast than those of the ground-based K-COR. The normalized radially graded filter (NRGF) is employed to increase the K-COR contrast, which flattens the steep brightness gradient of the corona <cit.>. In this paper, we examine the CME structure through the NRGF data of K-COR that are available online[www2.hao.ucar.edu/mlso/mlso-home-page], while for the C2 observations the original data are used. § OBSERVATIONS AND RESULTS The K-COR data are available since 2013 September 30, while the coronagraph is closed temporarily due to the volcanic eruption of Mauna Loa on 2022 November 27. Therefore, our survey covers an interval from 2013 to 2022, during which 369 CMEs (excluding the possible events) are identified. On the whole, more CMEs are recorded around solar maximum though the K-COR does not work 24 hours continuously. The basic information for each event, such as the date, time (Universal Time, UT), and location (E–east, S–south, W–west, and N–north), is listed on the MLSO website. After inspecting the 369 CMEs combining observations of the AIA, K-COR and C2, we find 71 events according to the criteria: 1) limb event, which requires that the source region centered within 30^∘ of the solar limb for the front-side events. For the far-side events, a limb event requires that the suspended prominence (or hot channel) prior to the eruption or the coronal disturbance (or post eruption arcade) during the eruption can be observed with the AIA; 2) normal CME, i.e., angular width ≥ 30^∘ in the C2 FOV; 3) K-COR caught the early eruption stage. We first scrutinize the 71 CMEs one by one through the NRGF images of K-COR, and find that all of them have the three-part feature in the low corona, irrespective of their appearance in the C2 images, agreeing with the prediction of the new explanation on the three-part structure of CMEs. However, visual identification of the three-part feature is not entirely objective as no quantitative criteria. There exist 6 events that do not have the clear three-part feature in the K-COR images (See Table 1), and several or all of them might be identified as the non-three-part CMEs. Therefore, we suggest that 90–100% of the 71 CMEs possess the three-part structure in the low corona. Table 1 lists the information of the 71 CMEs. The first column is the sequential number. Columns 2–4 give the date, time, and location of each event, which are from the MLSO website. The asterisks in Column 2 denote the 6 ambiguous events in the K-COR images mentioned above. Combining the observations of K-COR and AIA, we identify the type of source region, i.e., active region (AR) or quiet-Sun region (QS), and the ejecta, i.e., hot channel (HC) or prominence (P) for each event. The source type and ejecta are listed in Columns 5 and 6, respectively, where the “?” denotes that the source-region type or the ejecta are unsure, mainly because the events are located on the far side of the Sun. The subsequent four columns present CME information observed with LASCO, which are provided by the coordinated data analysis workshops (CDAW[https://cdaw.gsfc.nasa.gov]). 
Column 7 is the time of the CME's first appearance in the C2 FOV, and Columns 8–10 are the central position angle (PA), linear velocity (LV), as well as angular width (AW) correspondingly. The last column tells whether the CME exhibits the three-part feature in the C2 FOV, with Y/N denoting yes/no. Note that the asterisks in the last column indicate the ambiguous events in the C2 images. ccccccccccc 0pt The information of 71 limb CMEs in the K-COR and LASCO/C2 observations. Universal Time is used. No. Date K-Cor Time Location Source Ejecta First in C2 PA LV AW Three Part (yyyymmdd) (hhmm–hhmm) (hh:mm:ss) (∘) (km s^-1) (∘) in C2? 1 20140211 1845–1930 W-SW AR P 19:24:05 248 613 271 Y 2 20140220 2224–2250 W AR P 23:12:11 282 198 45 N 3 20140429 1940–0046 SW QS P 20:57:25 229 232 71 N 4 20140524 2108–2200 NE AR P 22:00:05 66 377 180 N 5 20140528 1714–2118 W-NW QS P 20:36:05 297 296 84 Y 6 20140614 1926–1940 E-SE AR HC 19:48:28 89 732 139 N 7 20140626 2114–0000 NE AR HC 21:48:57 41 497 231 N^* 8 20140630 1733–1848 SW QS P 18:36:05 261 262 72 N 9 20140923 2336–2349 NE AR HC 00:48:05 68 311 52 N 10 20141014 1836–2002 E-SE AR HC 18:48:06 90 848 360 Y 11 20141105 1923–1950 E-NE AR HC 19:48:05 76 608 203 N 12 20141210^* 1749–1958 SW AR P 18:00:06 322 1086 228 N 13 20141221 0048–0150 E-NE AR P 01:25:53 60 283 116 N 14 20141221 0152–0219 E-NE AR P 02:36:05 60 283 116 Y 15 20150208^* 2219–2250 E AR P 22:36:06 100 315 132 N 16 20150425 1803–1900 W-NW AR P 18:48:05 304 493 50 N 17 20150501^* 2156–2228 SW AR? P 22:12:05 264 253 83 N 18 20150505 2209–2300 NE AR P 22:24:05 41 715 360 N 19 20150516 0101–0200 NE QS P 00:12:06 42 600 177 Y 20 20150525 2110–0045 E QS P 23:12:11 81 374 120 N^* 21 20150702 1712–1904 NE AR? P 17:48:04 58 629 161 Y 22 20150801 1704–2200 NE QS P 17:36:04 65 472 67 N 23 20150923 1800–2000 SE QS P 18:36:04 106 565 99 Y 24 20151217 1922–2300 NE QS P 20:57:28 74 137 83 N 25 20160101 2256–2350 SW AR P 23:24:04 227 1730 360 Y 26 20160115 1940–2158 SW AR HC? 20:36:04 247 292 95 N 27 20160208 2220–0032 NE QS P 22:00:06 18 311 164 Y 28 20160209 1758–2050 E QS P 19:23:30 72 358 76 N 29 20160611 2206–2238 E-NE AR P 23:24:05 68 95 32 N 30 20160808 2022–2100 W AR? HC? 20:48:06 260 674 84 Y 31 20160808 1900–0130 W-NW QS? P? 01:25:43 311 128 66 N^* 32 20160929 1718–2050 SW AR HC 20:12:05 254 447 125 Y 33 20170128^* 1928–0216 W AR HC 21:48:05 280 562 53 N 34 20170327 1732–2010 E AR HC 18:12:05 89 230 46 N 35 20170402 1844–2020 W-NW AR HC 19:24:05 290 500 88 N 36 20170713 2008–2026 W AR P 20:36:05 260 290 61 N 37 20170720 1700–1902 W AR? P? 18:12:05 265 590 95 Y 38 20170820 1934–2104 E AR HC 20:24:05 88 207 43 N 39 20170912 1858–1941 W-SW AR? HC? 19:12:05 271 476 113 Y 40 20171020 2328–0014 SE AR P 00:00:05 98 331 109 N 41 20190422^* 0254–0422 W-NW QS? P 03:24:05 269 422 55 N 42 20201101 1900–2020 SW AR HC 19:48:05 266 289 36 N 43 20201126 2020–2105 NE AR HC 21:12:10 99 572 92 Y 44 20210429 1701–2146 NE QS P 20:01:34 75 189 129 N^* 45 20210507 1852–2005 E AR HC 19:24:05 76 754 114 N 46 20210610 1746–1904 E-NE AR? 
P 18:24:05 83 833 133 Y 47 20210625 2017–0145 SW QS P 00:48:05 234 101 30 N 48 20210626 2132–0210 W-NW AR P 03:48:05 247 223 115 N 49 20210715 2110–2344 SE QS P 21:36:05 166 1476 360 Y 50 20210719^* 2022–2122 E AR HC 20:57:05 69 401 133 N^* 51 20210829 1956–2040 SW AR P 20:24:05 259 1060 87 N 52 20211009 2030–0125 W-NW AR HC 22:36:05 275 433 110 Y 53 20211010 2235–0150 NW AR P 23:24:05 293 299 71 N 54 20211102 2130–2353 SE AR HC 22:24:05 113 474 98 N 55 20211103 2050–2216 W-SW AR P 21:36:05 260 510 360 N 56 20220131 2328–0001 SW QS P 00:12:05 245 469 52 N 57 20220201 2300–0213 SW QS P 01:25:48 243 467 43 N^* 58 20220419 2043–2204 W-SW AR HC 21:24:05 227 247 53 N 59 20220425 1723–1854 SE AR P 18:00:05 85 319 120 N 60 20220425 1836–2131 SE AR P 20:24:05 90 498 125 Y 61 20220508 2124–2250 SE AR HC 22:24:05 94 602 175 Y 62 20220511 1820–1950 W AR P 18:36:05 237 1100 194 N 63 20220514 1653–2106 SE QS P 18:24:05 115 843 52 N 64 20220524 2219–2324 NE AR P 23:12:11 71 569 211 Y 65 20220710 1658–1931 SW QS P 17:48:05 198 1241 43 N 66 20220731 2236–0010 E AR P 23:12:10 82 1122 192 N 67 20220830 1744–1927 S-SW AR HC 18:12:05 268 1247 360 Y 68 20220924 1957–2339 SE QS P 20:24:05 132 337 103 N^* 69 20220928 1718–1818 E AR P 17:36:05 87 256 115 N 70 20221026 1900–2231 SW QS P 21:12:09 207 506 167 N 71 20221125 2128–2356 NW QS P 22:24:05 312 620 106 N Table 1 shows that 49 and 22 CMEs originate from active regions (AR) and quiet-Sun regions (QS), respectively. For the ejecta type, there are 49 events associated with a prominence (P) eruption, and the rest are correlated with a hot-channel (HC) eruption. In the LASCO FOV, the linear velocities range from 95 to 1730 km s^-1, and their angular widths, from 30 to 360^∘. After scrutinizing the 71 CMEs one by one through the C2 images, we find that only 21 events (∼30%) possess the three-part structure, agreeing with previous statistical results based on C2 observations <cit.>. For the 50 non-three-part CMEs in the C2 images, 7 ones are ambiguous and might be identified as the three-part events, which are denoted with asterisks in the last column as mentioned. Therefore, we suggest that 30–40% of the 71 CMEs possess the three-part structure in the high corona. To demonstrate that a hot channel or prominence can lead to a three-part CME in its early eruption stage, and the three-part feature can sustain or disappear in the outer corona, we select four representative events and display them in Figures 1–4 sequentially. Figure 1 displays the event occurring on 2014 October 14 (Event 10 in Table 1), which resulted from a hot-channel eruption in an active region located at the SE limb. Panel (a) shows the hot channel with the AIA 131 Å observation at 18:45:32 UT. Panel (b) presents the K-COR observation at 18:58:51 UT, and the three-part CME can be identified. The bright core is very obvious, and the bright front is delineated with the red-dashed line as we can not discern the front clearly in the static image. The bright front and three-part structure can be distinguished clearly through the animation accompanying with Figure 1. The C2 image at 20:00:05 UT is presented in Panel (c), and we can see that the CME keeps the typical three-part feature there. Note that the white circles in both Panels (b) and (c) denote the solar limb. Figure 2 presents the CME occurring on 2016 January 1 (Event 25 in Table 1). This event is associated with a prominence eruption that can be observed in the AIA 304 Å image as shown in Panel (a). 
Panel (b) displays the K-COR observation at 23:23:42 UT, in which the bright front and core are delineated with the red- and yellow-dashed lines, respectively, to display the three-part structure clearly. Please see the accompanying animation to examine the three-part feature continuously. This CME also exhibits the three-part feature in the C2 FOV as shown in Panel (c), where the red-dashed line denotes the leading front. The above two CMEs exhibit the three-part structure in both the K-COR and C2 images. Next we show two events that do not have the three-part feature in the C2 FOV. Figure 3 displays a CME induced by a hot-channel eruption as revealed with the AIA 131 Å observation in Panel (a). This event occurred on 2021 May 7 (Event 45 in Table 1), and a previous study <cit.> has demonstrated that the hot channel evolved into the bright core in the K-COR image as shown in Panel (b). The three-part feature is distinguishable directly in the static image, thus no dashed lines are drawn to guide the eye in this panel. Panel (c) shows the observation of C2 at 19:48:05 UT with the red-dashed line delineating the CME front. The CME lost its three-part feature due to the MFR expansion and growth as mentioned <cit.>. Figure 4 shows the CME occurring on 2021 October 10 (Event 53 in Table 1), which is associated with the prominence eruption from an active region located at the NW limb. Panel (a) illustrates the erupting prominence with the AIA 304 Å image at 22:51:29 UT. The K-COR observations present a typical three-part CME as shown in Panel (b), where the red-dashed line depicts the CME front. Please see the accompanying animation to view the three-part feature in the K-COR images. The animation also demonstrates that the prominence did not erupt outward eventually. This leads to a non-three-part CME in the C2 image <cit.> as presented in Panel (c), where the red-dashed line delineates the CME front. § SUMMARY AND DISCUSSION To verify the new explanation of the three-part structure of CMEs, which predicts that all normal CMEs should exhibit the three-part feature in their early eruption stage, we conducted a survey study of the CME structure with the observations of K-COR (FOV: 1.05–3 R_⊙) at MLSO. In total, 369 CMEs (excluding the possible events) are identified from 2013 September to 2022 November. Combining the observations of AIA, K-COR and LASCO/C2, we inspected the events one by one manually, and found 71 events according to the criteria: 1) limb event; 2) normal CME with angular width ≥ 30^∘ in the C2 FOV (2–6 R_⊙); 3) K-COR caught the early eruption stage. The results showed that 90–100% of the 71 events exhibit the three-part structure in the K-COR observations, basically agreeing with the prediction of the new explanation, and 30–40% of the events have the three-part appearance in the C2 observations, consistent with previous survey studies <cit.>. These results suggest that CMEs can lose the three-part feature during their propagation outwards, and further support the new explanation of the nature of the three-part structure of CMEs <cit.>. As mentioned, theoretical studies demonstrate that CMEs result from MFR eruption, and no physical mechanism can produce large-scale CMEs without involving an MFR. However, do all CMEs have an MFR structure near the Sun observationally? Our current survey study intends to answer “Yes" to this question, as 90–100% of normal CMEs exhibit the three-part structure in their early eruption stage.
We think that the several ambiguous events could exhibit the three-part feature if the observations were clearer. For the narrow CMEs (angular width 30^∘), we speculate that they can also exhibit the three-part feature when the spatial resolution of coronagraphs is high enough. The CME and MFR are called ICME (interplanetary CME) and magnetic cloud <cit.>, respectively, after they leave the corona. If the MFR structures are not destroyed during their propagation, all ICMEs should possess the magnetic cloud features near 1 au, such as enhanced magnetic-field intensity, large and smooth rotation of the magnetic-field direction, and low proton temperature or low plasma β <cit.>. However, only about one third of ICMEs have the magnetic cloud features near the Earth <cit.>. From the statistical point of view, this might be a result of glancing cuts between the spacecraft and ICME, as ICMEs with magnetic cloud features have narrower sheath region compared to the non-cloud ICMEs <cit.>. The analyses on the morphological structure of CMEs near the Sun and the geometric character of ICMEs near 1 au support that most (if not all) CMEs have the MFR structures. We thank the anonymous referee for the comments and suggestions that helped to improve the original manuscript. We are grateful to Profs. Jie Zhang (GMU), Pengfei Chen (NJU), Yuandeng Shen (YNAO) and Mr. Zihao Yang (PKU) for their helpful discussions. This work is supported by the National Key R&D Program of China 2022YFF0503003 (2022YFF0503000), the NSFC grants U2031109, 11790303 (11790300), and 12073042. H.Q.S is also supported by the CAS grants XDA-17040507. The authors acknowledge the use of data from the SDO, MLSO, and SOHO, as well as the usage of the CDAW CME catalog generated by NASA and The Catholic University of America and the MLSO activity tables created by the MLSO team. natexlab#1#1 [Brueckner et al.(1995)Brueckner, Howard, Koomen, Korendyke, Michels, Moses, Socker, Dere, Lamy, Llebaria, Bout, Schwenn, Simnett, Bedford, & Eyles]brueckner95 Brueckner, G. E., Howard, R. A., Koomen, M. J., et al. 1995, , 162, 357, 10.1007/BF00733434 [Burlaga et al.(1981)Burlaga, Sittler, Mariani, & Schwenn]burlaga81 Burlaga, L., Sittler, E., Mariani, F., & Schwenn, R. 1981, , 86, 6673, 10.1029/JA086iA08p06673 [Chen(2009)]chenpengfei09 Chen, P. F. 2009, , 698, L112, 10.1088/0004-637X/698/2/L112 [Chen(2011)]chenpengfei11 Chen, P. F. 2011, Living Reviews in Solar Physics, 8, 1, 10.12942/lrsp-2011-1 [Chen et al.(2018)Chen, Tian, Su, Qu, Deng, Jibben, Yang, Zhang, Samanta, He, Wang, Zhu, Zhong, & Liang]chenyajie18 Chen, Y., Tian, H., Su, Y., et al. 2018, , 856, 21, 10.3847/1538-4357/aaaf68 [Cheng et al.(2013)Cheng, Zhang, Ding, Olmedo, Sun, Guo, & Liu]chengxin13a Cheng, X., Zhang, J., Ding, M. D., et al. 2013, , 769, L25, 10.1088/2041-8205/769/2/L25 [Chi et al.(2016)Chi, Shen, Wang, Xu, Ye, & Wang]chiyutian16 Chi, Y., Shen, C., Wang, Y., et al. 2016, , 291, 2419, 10.1007/s11207-016-0971-5 [Domingo et al.(1995)Domingo, Fleck, & Poland]domingo95 Domingo, V., Fleck, B., & Poland, A. I. 1995, , 162, 1, 10.1007/BF00733425 [Follett et al.(1974)Follett, Ostwald, Simpson, & Spencer]follett74 Follett, W. H., Ostwald, L. T., Simpson, J. O., & Spencer, T. M. 1974, Journal of Spacecraft and Rockets, 11, 327, 10.2514/3.62071 [Forland et al.(2013)Forland, Gibson, Dove, Rachmeler, & Fan]forland13 Forland, B. C., Gibson, S. E., Dove, J. B., Rachmeler, L. A., & Fan, Y. 
2013, , 288, 603, 10.1007/s11207-013-0361-1 [Gibson et al.(2006)Gibson, Foster, Burkepile, de Toma, & Stanger]gibson06b Gibson, S. E., Foster, D., Burkepile, J., de Toma, G., & Stanger, A. 2006, , 641, 590, 10.1086/500446 [Gopalswamy et al.(2003)Gopalswamy, Shimojo, Lu, Yashiro, Shibasaki, & Howard]gopalswamy03 Gopalswamy, N., Shimojo, M., Lu, W., et al. 2003, , 586, 562, 10.1086/367614 [Hayes et al.(2001)Hayes, Vourlidas, & Howard]hayes01 Hayes, A. P., Vourlidas, A., & Howard, R. A. 2001, , 548, 1081, 10.1086/319029 [Howard(2006)]howardR06 Howard, R. A. 2006, Geophysical Monograph Series, 165, 7, 10.1029/165GM03 [Howard et al.(2017)Howard, DeForest, Schneck, & Alden]howard17 Howard, T. A., DeForest, C. E., Schneck, U. G., & Alden, C. R. 2017, , 834, 86, 10.3847/1538-4357/834/1/86 [Illing & Hundhausen(1985)]illing85 Illing, R. M. E., & Hundhausen, A. J. 1985, , 90, 275, 10.1029/JA090iA01p00275 [Jiang et al.(2021)Jiang, Feng, Liu, Yan, Hu, Moore, Duan, Cui, Zuo, Wang, & Wei]jiangchaowei21 Jiang, C., Feng, X., Liu, R., et al. 2021, Nature Astronomy, 5, 1126, 10.1038/s41550-021-01414-z [Kliem et al.(2021)Kliem, Lee, Liu, White, Liu, & Masuda]kliem21 Kliem, B., Lee, J., Liu, R., et al. 2021, , 909, 91, 10.3847/1538-4357/abda37 [Koomen et al.(1975)Koomen, Detwiler, Brueckner, Cooper, & Tousey]koomen75 Koomen, M. J., Detwiler, C. R., Brueckner, G. E., Cooper, H. W., & Tousey, R. 1975, , 14, 743, 10.1364/AO.14.000743 [Lemen et al.(2012)Lemen, Title, Akin, Boerner, Chou, Drake, Duncan, Edwards, Friedlaender, Heyman, Hurlburt, Katz, Kushner, Levay, Lindgren, Mathur, McFeaters, Mitchell, Rehse, Schrijver, Springer, Stern, Tarbell, Wuelser, Wolfson, Yanari, Bookbinder, Cheimets, Caldwell, Deluca, Gates, Golub, Park, Podgorski, Bush, Scherrer, Gummin, Smith, Auker, Jerram, Pool, Soufli, Windt, Beardsley, Clapp, Lang, & Waltham]lemen12 Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, , 275, 17, 10.1007/s11207-011-9776-8 [Li et al.(2012)Li, Zhang, Li, Yang, & Zhang]lileping12 Li, L. P., Zhang, J., Li, T., Yang, S. H., & Zhang, Y. Z. 2012, , 539, A7, 10.1051/0004-6361/201015796 [Low(2001)]lowbc01 Low, B. C. 2001, , 106, 25141, 10.1029/2000JA004015 [Morgan et al.(2006)Morgan, Habbal, & Woo]morgan06 Morgan, H., Habbal, S. R., & Woo, R. 2006, , 236, 263, 10.1007/s11207-006-0113-6 [Ouyang et al.(2015)Ouyang, Yang, & Chen]ouyang15 Ouyang, Y., Yang, K., & Chen, P. F. 2015, , 815, 72, 10.1088/0004-637X/815/1/72 [Patsourakos et al.(2013)Patsourakos, Vourlidas, & Stenborg]patsourakos13 Patsourakos, S., Vourlidas, A., & Stenborg, G. 2013, , 764, 125, 10.1088/0004-637X/764/2/125 [Pesnell et al.(2012)Pesnell, Thompson, & Chamberlin]pesnell12 Pesnell, W. D., Thompson, B. J., & Chamberlin, P. C. 2012, , 275, 3, 10.1007/s11207-011-9841-3 [Song et al.(2022)Song, Li, & Chen]song22b Song, H., Li, L., & Chen, Y. 2022, , 933, 68, 10.3847/1538-4357/ac7239 [Song & Yao(2020)]song20b Song, H., & Yao, S. 2020, Sci China Tech Sci, 63, 2171, 10.1007/s11431-020-1680-y [Song et al.(2023)Song, Zhang, Li, Yang, Xia, Zheng, & Chen]song23a Song, H., Zhang, J., Li, L., et al. 2023, , 942, 19, 10.3847/1538-4357/aca6e0 [Song et al.(2018)Song, Chen, Qiu, Chen, Zhang, Cheng, Shen, & Zheng]song18a Song, H. Q., Chen, Y., Qiu, J., et al. 2018, , 857, L21, 10.3847/2041-8213/aabcc3 [Song et al.(2015a)Song, Chen, Zhang, Cheng, Wang, Hu, Li, & Wang]song15b Song, H. Q., Chen, Y., Zhang, J., et al. 2015a, , 808, L15, 10.1088/2041-8205/808/1/L15 [Song et al.(2014)Song, Zhang, Chen, & Cheng]song14a Song, H. Q., Zhang, J., Chen, Y., & Cheng, X. 
2014, , 792, L40, 10.1088/2041-8205/792/2/L40 [Song et al.(2015b)Song, Zhang, Chen, Cheng, Li, & Wang]song15c Song, H. Q., Zhang, J., Chen, Y., et al. 2015b, , 803, 96, 10.1088/0004-637X/803/2/96 [Song et al.(2019a)Song, Zhang, Cheng, Li, Tang, Wang, Zheng, & Chen]song19a Song, H. Q., Zhang, J., Cheng, X., et al. 2019a, , 883, 43, 10.3847/1538-4357/ab304c [Song et al.(2019b)Song, Zhang, Li, Liu, Zhu, Wang, Zheng, & Chen]song19b Song, H. Q., Zhang, J., Li, L. P., et al. 2019b, , 887, 124, 10.3847/1538-4357/ab50b6 [Song et al.(2017)Song, Cheng, Chen, Zhang, Wang, Li, Li, Hu, & Li]song17b Song, H. Q., Cheng, X., Chen, Y., et al. 2017, , 848, 21, 10.3847/1538-4357/aa8d1a [Song et al.(2020)Song, Zhang, Cheng, Li, Hu, Li, Chen, Zheng, & Chen]song20a Song, H. Q., Zhang, J., Cheng, X., et al. 2020, , 901, L21, 10.3847/2041-8213/abb6ec [Tomczyk et al.(2016)Tomczyk, Landi, Burkepile, Casini, DeLuca, Fan, Gibson, Lin, McIntosh, Solomon, Toma, Wijn, & Zhang]tomczyk16 Tomczyk, S., Landi, E., Burkepile, J. T., et al. 2016, Journal of Geophysical Research (Space Physics), 121, 7470, 10.1002/2016JA022871 [Tousey(1973)]tousey73 Tousey, R. 1973, in Space Research XIII, ed. M. J. Rycroft & S. K. Runcorn (Berlin: Akademie-Verlag), 713 https://ui.adsabs.harvard.edu/abs/1973spre.conf..713T/abstract [Vourlidas et al.(2013)Vourlidas, Lynch, Howard, & Li]vourlidas13 Vourlidas, A., Lynch, B. J., Howard, R. A., & Li, Y. 2013, , 284, 179, 10.1007/s11207-012-0084-8 [Wang et al.(2022)Wang, Cheng, Song, & Ding]wangbitao22 Wang, B. T., Cheng, X., Song, H. Q., & Ding, M. D. 2022, , 666, A166, 10.1051/0004-6361/202244275 [Wang et al.(2017)Wang, Liu, Wang, Hu, Shen, Jiang, & Zhu]wangwensi17 Wang, W., Liu, R., Wang, Y., et al. 2017, Nature Communications, 8, 1330, 10.1038/s41467-017-01207-x [Wang & Stenborg(2010)]wangyimin08 Wang, Y. M., & Stenborg, G. 2010, , 719, L181, 10.1088/2041-8205/719/2/L181 [Webb & Howard(2012)]webb12 Webb, D. F., & Howard, T. A. 2012, Living Reviews in Solar Physics, 9, 3, 10.12942/lrsp-2012-3 [Wu & Lepping(2011)]wucc11 Wu, C.-C., & Lepping, R. P. 2011, , 269, 141, 10.1007/s11207-010-9684-3 [Yan et al.(2016)Yan, Priest, Guo, Xue, Wang, & Yang]yanxiaoli16 Yan, X. L., Priest, E. R., Guo, Q. L., et al. 2016, , 832, 23, 10.3847/0004-637X/832/1/23 [Zhang et al.(2012)Zhang, Cheng, & Ding]zhangjie12 Zhang, J., Cheng, X., & Ding, M.-D. 2012, Nature Communications, 3, 747, 10.1038/ncomms1753 [Zhou et al.(2023a)Zhou, Ji, & Zhang]zhouyuhao23 Zhou, Y., Ji, H., & Zhang, Q. 2023a, , 298, 35, 10.1007/s11207-023-02126-5 [Zhou et al.(2023b)Zhou, Jiang, Song, Wang, Hao, & Cui]zhouzhenjun23 Zhou, Z., Jiang, C., Song, H., et al. 2023b, , 944, 175, 10.3847/1538-4357/acb6f8 [Zurbuchen & Richardson(2006)]zurbuchen06 Zurbuchen, T. H., & Richardson, I. G. 2006, , 123, 31, 10.1007/s11214-006-9010-4
http://arxiv.org/abs/2307.01069v1
20230703145014
Shi-NeSS: Detecting Good and Stable Keypoints with a Neural Stability Score
[ "Konstantin Pakulev", "Alexander Vakhitov", "Gonzalo Ferrer" ]
cs.CV
[ "cs.CV" ]
Learning a feature point detector presents a challenge both due to the ambiguity of the definition of a keypoint and, correspondingly, the need for specially prepared ground-truth labels for such points. In our work, we address both of these issues by utilizing a combination of a hand-crafted Shi detector and a neural network. We build on the principled and localized keypoints provided by the Shi detector and perform their selection using the keypoint stability score regressed by the neural network - the Neural Stability Score (NeSS). Therefore, our method is named Shi-NeSS since it combines the Shi detector and the properties of the keypoint stability score, and it requires only sets of images for training, without dataset pre-labeling or the need for reconstructed correspondence labels. We evaluate Shi-NeSS on HPatches, ScanNet, MegaDepth and IMC-PT, demonstrating state-of-the-art performance and good generalization on downstream tasks. We will share our code upon acceptance of the paper. § INTRODUCTION Feature point detection (keypoint detection) is usually the first step in camera localization and scene reconstruction pipelines based on sparse features, commonly used in robotics <cit.>, computer vision <cit.>, augmented, mixed and virtual reality <cit.>, and other systems. Whilst feature description is successfully approached in the literature as a metric learning <cit.> problem, applying deep learning to the feature detection task still poses a challenge, with classical solutions showing competitive results on the state-of-the-art benchmarks <cit.>. The difficulty is caused by the innate vagueness of the point-of-interest definition <cit.>, which greatly complicates the formulation of feature detection as a learning problem. The data required to learn keypoints with sufficient robustness to illumination and viewpoint changes presents another challenge. Structure-from-Motion (SfM) <cit.> and Multi-View Stereo (MVS) <cit.> reconstructions that are used by some methods <cit.> to obtain pixel-accurate correspondences are hard to build properly and lack full image coverage <cit.>. We approach the problem of “defining a keypoint” by leveraging principled assumptions about what makes up reasonable keypoints <cit.>: we employ the Shi <cit.> detector to provide locations for keypoints as well as to perform sub-pixel localization. At the same time, we propose a quantitative metric to measure the stability of the detected keypoints, the keypoint stability score, that randomly perturbs the detected points and their surrounding patches for later aggregation of their statistics. This metric can be calculated in an online fashion and from a single image, so it is ideal for automatically generating a supervised signal to train a neural network that predicts it, the Neural Stability Score (NeSS). Shi-NeSS thus empowers a classical method with a neural approach; together they obtain accurate locations of keypoints that are more likely to remain stable under viewpoint changes. Compared to the prior art, Shi-NeSS needs only a set of real images for training, without any labels, ground-truth poses or reconstructed correspondences. In our work, we present a design that, alongside <cit.>, does not rely on ground-truth point correspondences.
These methods are sometimes referred to as self-supervised <cit.>. Out of self-supervised methods only Key.Net <cit.> and REKD <cit.> work in a similar setting, while SuperPoint <cit.> relies on pre-labeled datasets and requires pre-training a base detector on a synthetic dataset (see Table <ref>). Our contributions are as follows. We present a novel metric, the Keypoint Stability Score, to estimate the quality of keypoints. We propose a method to directly learn keypoint stability scores in a neural network, which requires only images for training. In terms of pose accuracy, it surpasses state-of-the-art on MegaDepth <cit.>, is on par on IMC-PT <cit.>, ScanNet <cit.> and HPatches <cit.>, being the best among the self-supervised methods. § RELATED WORKS Handcrafted detectors. Since Moravec <cit.>, feature detection methods focused on finding the local extrema in the signals derived from images that correspond to meaningful structures, e.g. corners <cit.>. Therefore, the earliest designs of detectors employ the grayvalue analysis via differential expressions <cit.>. Among those, Harris <cit.> and Shi <cit.> detectors state that good features should be 'trackable', i.e. strongly distinguishable in a small image neighbourhood, and the methods <cit.> follow this idea. We employ the Shi detector for finding keypoint candidate locations due to its efficiency. Depending on the image resolution or scale both the location and the strength of keypoints response change <cit.>. The most productive way to tackle this problem is the modeling of scale via the Gaussian scale-space <cit.> - the backbone of some famous methods like SIFT <cit.>, SURF <cit.>, Harris-Laplace <cit.>, Harris-Affine <cit.>, KAZE <cit.>. In our method, instead of using the scale space maxima, we predict a score using the neural network that assesses the stability of a point to viewpoint changes (see Sec. <ref>). Learned detectors. Existing designs of learned detectors are quite diverse. The majority of the methods employ correspondence labels for training which are obtained via SfM and MVS <cit.>, or optical flow <cit.>. Preparing dense pixel-accurate ground-truth correspondence labels poses the problem by itself <cit.> - special pre-processing of the data and validation of the obtained results is required <cit.>. Although avoiding the surface reconstruction can ease the problem formulation <cit.> accurate poses can still be hard to obtain <cit.>. In this regard, methods that do not require reconstructed correspondences, such as <cit.>, or the method proposed here, present a viable alternative. SuperPoint <cit.> relies on Homographic Adaptation, i.e. generation of homographies for creating correspondences, to train on real images with supervision from a base detector trained on synthetic data. Key.Net <cit.> creates a dataset of image patches by generating homographies and employs hand-crafted filters for finding potential keypoints, using scale space modeling. Our approach doesn't require pre-labeling or creating synthetic datasets or designing a base detector, and it does not use any scale-space analysis. We modify the Homographic Adaptation and generate the required correspondences online during the training, leveraging a specially designed keypoint quality measure. REKD <cit.> uses a rotation-equivariant network to detect keypoints together with orientations, using image sets without additional annotations for training similarly to our method. 
§ METHOD Shi-NeSS is a combination of the hand-crafted Shi detector and a neural network (see Fig. <ref>). We briefly present the formulation of the Shi detector as well as describe the sub-pixel localization procedure in Sec. <ref>. In Sec. <ref> we describe the measure that allows us to quantitatively assess keypoints, the keypoint stability score. Sec. <ref> discusses the neural network which aims to predict the keypoint stability score – the Neural Stability Score (NeSS) – and Sec. <ref> describes the training process. Finally, Sec. <ref> gives an account of implementation details. §.§ Shi detector We use the Shi detector to get the locations of feature points. For each pixel we calculate the second-moment matrix using the Gaussian weighting function <cit.> and assign to it a score equal to the smallest eigenvalue of the second-moment matrix <cit.>. By applying non-maximum suppression to the obtained score map we get the required locations of keypoints. We further refine the locations of feature points following <cit.>. By performing a second-order Taylor expansion of the Shi score function 𝒮(𝐱) around a point 𝐱 we can find the correction 𝐝𝐱 that maximizes 𝒮(𝐱) by solving: 𝐝𝐱 = -(∂^2 𝒮/∂𝐱^2)^-1 ∂𝒮/∂𝐱. To avoid unnecessary calculations, the sub-pixel localization procedure is applied only to the best-k feature points selected according to the keypoint stability score regressed by the neural network (Neural Stability Score). We describe the scores in the next sections. §.§ Keypoint Stability Score In this work, we design a quantitative description to assess feature points detected by the Shi detector. Given a keypoint 𝐤, we apply to it a set of generated homographies {ℋ_j}^m_1 to produce a set of m warped keypoints {𝐤^'_j}^m_1: 𝐤^'_j = ℋ_j𝐤. We generate homographies using a modified Homographic Adaptation procedure of SuperPoint <cit.> by sampling random perspective distortions while restricting the deformation along the x- and y-axes. We do not add rotation or translation transformations since the Shi detector with Gaussian weighting is invariant to them <cit.>. Next, we create a grid 𝐆_j of a predetermined size s_𝑝𝑎𝑡𝑐ℎ around each of the warped points 𝐤^'_j, warp the grid back to the image coordinates and sample values in these locations to get a deformed patch 𝐏_j around 𝐤^'_j: 𝐏_j = 𝑠𝑎𝑚𝑝𝑙𝑒(ℋ^-1_j𝐆_j). The Shi detector is not invariant to scale <cit.> or perspective transformations. Hence, in general, we cannot compare its responses from different patches and have to consider only locations of maximum responses. We run the Shi detector f_𝑆ℎ𝑖 on each patch, getting a score map 𝐒_j = {𝐬_l}^s_𝑝𝑎𝑡𝑐ℎ^2_1, and extract from it the location 𝐥̂_j that corresponds to the maximum activation: 𝐥̂_j = arg max_𝐬_l f_𝑆ℎ𝑖(𝐏_j). Next, we warp each 𝐥̂_j back to the original reference frame and calculate the sample covariance Σ with respect to the original position 𝐤: Σ = 1/(m - 1) ∑^m_j=1(ℋ^-1_j𝐥̂_j - 𝐤)(ℋ^-1_j𝐥̂_j - 𝐤)^⊤. Finally, we can characterize a feature point by the largest of the eigenvalues λ_1 and λ_2 of Σ: λ = max(λ_1, λ_2) = ‖Σ‖_2. The keypoint stability score λ captures the maximum variation of a point from its location under perspective transformations of its neighbourhood. The measure prioritizes feature points that deviate the least from the initial location 𝐤 in any direction and, hence, are more likely to be accurately detected and localized. Since λ assesses keypoints based on their local perturbations, the patch size s_𝑝𝑎𝑡𝑐ℎ is set to be small.
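To make the computation above concrete, a rough per-keypoint sketch in Python is given below. It only illustrates the steps just described and is not the authors' implementation: the homography sampling, the patch and window sizes, and the use of OpenCV's cv2.cornerMinEigenVal as a stand-in for the Gaussian-weighted Shi response f_Shi are all our own assumptions.

import numpy as np
import cv2

def stability_score(image, k, m=100, half=5, s_patch=5, max_persp=1e-3):
    # Sketch of the keypoint stability score lambda for one keypoint k = (x, y);
    # image is assumed to be a single-channel uint8 array.
    deviations = []
    size = 2 * half + 1
    for _ in range(m):
        # Sample a random perspective-only homography H_j (small perspective terms).
        H = np.eye(3)
        H[2, :2] = np.random.uniform(-max_persp, max_persp, 2)
        kh = H @ np.array([k[0], k[1], 1.0])
        kw = kh[:2] / kh[2]                                    # warped keypoint k'_j
        # Grid G_j around k'_j, warped back to image coordinates to sample patch P_j.
        xs, ys = np.meshgrid(np.arange(size) - half, np.arange(size) - half)
        grid = np.stack([xs + kw[0], ys + kw[1], np.ones_like(xs, float)], axis=-1)
        back = (np.linalg.inv(H) @ grid.reshape(-1, 3).T).T
        back = (back[:, :2] / back[:, 2:]).astype(np.float32)
        maps = back.reshape(1, -1, 2)
        patch = cv2.remap(image, maps[..., 0], maps[..., 1], cv2.INTER_LINEAR)
        patch = patch.reshape(size, size)
        # Stand-in for f_Shi: smallest eigenvalue of the local second-moment matrix.
        resp = cv2.cornerMinEigenVal(patch, blockSize=3)
        r0 = half - s_patch // 2                               # central s_patch window only
        win = resp[r0:r0 + s_patch, r0:r0 + s_patch]
        dy, dx = np.unravel_index(np.argmax(win), win.shape)
        idx = (r0 + dy) * size + (r0 + dx)                     # flat index of the maximum
        l_img = back[idx]                                      # already in image coordinates
        deviations.append(l_img - np.asarray(k, float))
    d = np.stack(deviations)
    cov = d.T @ d / (m - 1)                                    # sample covariance w.r.t. k
    return float(np.linalg.eigvalsh(cov).max())                # largest eigenvalue = lambda

Note that the patch is deliberately small, which keeps this per-keypoint computation cheap.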
This particular trait makes a foundation for the training process of the Neural Stability Score regression network, which is discussed next. §.§ Training process: Neural Stability Score Providing full ground-truth labeling with λ of every pixel of every full-resolution image in a dataset using our method can be computationally expensive. So instead, in line with several works <cit.>, we select a number of feature points during the training using the trained-so-far weights of the model. More specifically, for each image we extract n feature points {𝐤_i}^n_1, Shi scores {𝐬_i}^n_1 and the Neural Stability Scores provided by the neural network {λ̂_i}^n_1 (see Fig. <ref>). Relying on the Shi detector allows to faster guide the neural network towards better points in the beginning of the training. For each point 𝐤_i we calculate the ground-truth stability score λ_i using the keypoint Stability Score (Eq. <ref>). Due to the small size of s_𝑝𝑎𝑡𝑐ℎ we can use a large number of samples for the calculation of Σ_i as well as apply the same set of generated transformations {ℋ_j}^m_1 to all extracted feature points {𝐤_i}^n_1 simultaneously. The combination of online feature extraction and fast calculation of λ allows us to avoid time-consuming dataset pre-labeling. Not every point with a good λ score makes a good training target - image artifacts due to the absence of photometric changes in the generation procedure of the keypoint stability score can be stable as well. A common trait of these points is that they tend to have low salience and, hence, can be filtered out by the Shi detector assuming a threshold t_𝑆ℎ𝑖. We define 1(𝐬_i > t_𝑆ℎ𝑖) as an indicator function that gives 1 if 𝐬_i larger than t_𝑆ℎ𝑖 and 0 otherwise. Applying filtering considerably improves the performance of the method, see details in the supplementary material. Finally, we learn {λ̂_i}^n_1 by formulating the training objective as a regression problem: L = 0.5∑^n_i=1 (λ̂_i - λ_i)^2 1(𝐬_i > t_𝑆ℎ𝑖)/∑^n_i=11(𝐬_i > t_𝑆ℎ𝑖). §.§ Implementation details We train our detector from scratch on the same subset of MegaDepth dataset <cit.> as used in DISK <cit.>. We do not use poses, depth maps or any other information from the dataset other than images. For training we use full-resolution images cropped to a square of length 560 pixels. We perform validation and model selection on the validation subset of the IMC-PT dataset <cit.>. In the implementation we set s_𝑝𝑎𝑡𝑐ℎ equal to the size of the non-maximum suppression kernel - 5. However, since we need to calculate the Shi responses for each pixel of 𝐏_j the actual patch size needs to be larger - for our configuration of the Shi detector that value is 11. Hence, we warp the larger grid to get 𝐏_j but when doing the maximum extraction (see Eq. <ref>) we consider only the square with size s_𝑝𝑎𝑡𝑐ℎ pixels around its center. We choose n=1024 and m=100, see the details in the supplementary material. When extracting keypoints for convenience of the implementation instead of using λ̂ directly we consider e^-λ̂. We use a typical U-Net architecture <cit.> with 4 down-sampling layers and 3x3 convolutions. To train the model we employ Adam <cit.> optimizer with learning rate 10^-4. The model that is used in evaluations in Sec. <ref> was trained on a single NVIDIA 2080 Ti GPU for 22 hours (including validations). 
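For readability, the masked regression objective defined above can be expressed in a few lines of PyTorch. This is a schematic sketch under our own assumptions about tensor shapes (per-image vectors holding the n extracted points), not the authors' training code.

import torch

def ness_loss(lambda_pred, lambda_gt, shi_scores, t_shi):
    # Regression of the predicted NeSS towards the ground-truth stability score,
    # restricted to points whose Shi score exceeds the threshold t_shi.
    mask = (shi_scores > t_shi).float()                  # indicator 1(s_i > t_Shi)
    sq_err = 0.5 * (lambda_pred - lambda_gt) ** 2
    return (sq_err * mask).sum() / mask.sum().clamp(min=1.0)

The clamp simply guards against images in which no point passes the Shi threshold.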
In this section we have briefly introduced the Shi detector with sub-pixel refinement, defined the novel keypoint reliability metric - the keypoint stability score, described the Neural Stability Score as an approximation of the keypoint stability score with the help of the neural network, and outlined the implementation details of the neural network. We will proceed with the experimental evaluation of Shi-NeSS. § EXPERIMENTS The means of detector evaluation deserve special attention. Originally, detectors were mostly evaluated using classical metrics like repeatability and matching score <cit.>. Since feature points require supplying them with a description in order to obtain the correspondences a descriptor has to be accounted for to ensure the proper assessment of the quality of detected keypoints. A common solution is to fix a descriptor for all detectors in the evaluation <cit.>. More recent publications reported evidence that gains in classical metrics do not necessarily translate to the gains in the downstream tasks performance <cit.>. For this reason, in our work, we mostly perform the evaluation on a range of downstream tasks. To get a more comprehensive assessment we test our method on a variety of datasets and tasks. Image resolution plays an important role in the accuracy of correspondences, thus we provide images in the original resolution to all methods in our evaluation. It also allows to assess the ability of a detector to generalize beyond the training resolution. The number of keypoints that is extracted from each image plays no lesser role: we found the regime of 2048 keypoints per image from <cit.> to be a good trade-off between the performance and the consumption of computational resources. To compute matches we use mutual nearest neighbour matching and employ the Lowe ratio test <cit.> for downstream tasks <cit.>. As choosing proper hyper-parameters for a method is of utmost importance for downstream tasks <cit.> we employ a hyper-parameter tuning procedure similar to one in <cit.>. In our evaluation we consider the following methods: Shi <cit.>, SIFT <cit.>, SuperPoint <cit.>, R2D2 <cit.>, Key.Net <cit.>, DISK <cit.> and REKD <cit.>. We extract feature points for all methods using only the original scale with the exception of Key.Net <cit.> and REKD <cit.> that have multi-scaling as an essential built-in part of the method. To ensure fair comparison we do not employ the orientation estimation for REKD <cit.> since other methods in the evaluation don't have it. As long as the sub-pixel localization is a part of our solution, the version of the Shi <cit.> detector employed in our evaluation includes it since we focus on assessing the influence of the Neural Stability Score. §.§ Evaluation on HPatches dataset HPatches <cit.> dataset features sequences with planar surfaces related by homographies under a variety of illumination and viewpoint changes. We use the same test subset as in <cit.> totaling 540 image pairs among which 260 image pairs are with illumination changes and 280 - with viewpoint. DISK <cit.> descriptor shows the state-of-the-art performance on HPatches thus we pick it for this dataset. We report Mean Matching Accuracy (MMA) <cit.> under different pixel thresholds following <cit.>. Additionally, we evaluate on a downstream task of homography estimation using a protocol similar to <cit.>. In particular, we get features, calculate a list of putative matches and use OpenCV's <cit.> routines to estimate a homography. 
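As an illustration of this step, a minimal matching-and-estimation routine might look as follows; the ratio-test threshold and RANSAC reprojection threshold below are placeholders rather than the tuned hyper-parameters used in the evaluation, and the descriptors are assumed to be rows of L2-comparable vectors.

import numpy as np
import cv2

def match_and_estimate_homography(kpts1, desc1, kpts2, desc2, ratio=0.8):
    # Mutual nearest-neighbour matching with Lowe's ratio test, then RANSAC homography.
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=-1)
    nn12 = d.argmin(axis=1)                 # best match in image 2 for each point in image 1
    nn21 = d.argmin(axis=0)                 # best match in image 1 for each point in image 2
    matches = []
    for i, j in enumerate(nn12):
        if nn21[j] != i:                    # keep mutual nearest neighbours only
            continue
        two_best = np.sort(d[i])[:2]
        if two_best[0] < ratio * two_best[1]:   # Lowe ratio test
            matches.append((i, j))
    src = np.float32([kpts1[i] for i, _ in matches]).reshape(-1, 1, 2)
    dst = np.float32([kpts2[j] for _, j in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    return H, inliers

Mutual filtering combined with the ratio test keeps only confident correspondences, which matters because the robust estimator degrades quickly at high outlier ratios.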
Next, we check the quality of the estimate by warping 4 image corners and calculating the average re-projection error compared to warping with the ground-truth homography <cit.>. We report accuracy as the percentage of image pairs out of the whole dataset with the average re-projection error lower than a threshold for different thresholds, thus, obtaining a curve. Additionally, we report mean Average Accuracy <cit.> by integrating the curve up to a 5-pixel threshold to get a single-number quality measure for each category. For tuning hyper-parameters we use sequences of HPatches <cit.> left out from the test set <cit.> as well as hyper-parameters obtained from validation sequences of IMC-PT <cit.> on the task of relative pose estimation. This combination provides the best results for all methods, see details in the supplementary material. Results. Compared to Shi <cit.> detector that shows one of the best results on MMA our method considerably lags behind as illustrated in Fig. <ref>. However, evaluation on the task of homography estimation completely changes the ranking - our method shows strong performance on scenes with viewpoint changes (see Fig. <ref> and Table <ref>). These results correlate with recent findings that classical methods can show state-of-the-art performance if tuned properly as well as that conventional metrics might not fully capture the complicated dependency between features and downstream tasks <cit.>. Still, it is worth noting that MMA evaluation highlights the lack of invariance of our feature points to illumination changes - this is reflected in the homography estimation evaluation as well. We believe that it is caused by the absence of illumination changes modeling in our method. §.§ Relative pose evaluation We evaluate on a downstream task of relative pose estimation following the protocol of <cit.>. We use our own evaluation pipeline to provide consistency in evaluations across different datasets. We take pairs of images, detect and describe features, calculate a list of putative matches, perform relative pose estimation and, finally, measure the error between the ground-truth and estimated poses. To evaluate the performance over a whole dataset we build a pose estimation accuracy curve by thresholding errors over a range of thresholds. Then we integrate the area under the curve to get mean Average Accuracy (mAA) <cit.>. We report mAA up to a 10-degree threshold for both rotation and translation as well as provide accuracy curve plots. Errors for rotation and translation are calculated in degrees <cit.>. §.§.§ Evaluation on IMC-PT dataset IMC-PT dataset <cit.> is a collection of photo-tourism images supplied with depth maps and poses reconstructed via SfM and MVS that features landmarks (mostly buildings). We use a full test set release that consists of 800 unique images from 8 different locations. By considering image pairs with co-visibility larger than 0.1 <cit.> we get 37k test image pairs. Like on HPatches, DISK <cit.> descriptor shows the state-of-the-art performance on this dataset - hence we choose it. We utilize a robust fundamental matrix estimator with DEGENSAC <cit.>. We calculate the essential matrix from the estimated fundamental matrix using ground-truth intrinsics and then recover the poses using OpenCV <cit.>. The tuning of hyper-parameters is performed on the validation subset of IMC-PT <cit.>, see details in the supplementary material. Results. Fig. 
<ref> illustrates that our method consistently outperforms all methods other than DISK <cit.>. Comparison in Table <ref> shows that we outperform self-supervised approaches like SuperPoint <cit.>, Key.Net <cit.> and REKD <cit.> by a huge margin. We explain the gap between DISK <cit.> and our method by the difference in the strategies that methods employ to detect points. DISK <cit.> tends to densely detect points on semantically meaningful objects (mostly, buildings) whereas our method doesn't employ any knowledge of the scene and instead relies on the local properties of points resulting in sparsified detections. Given that IMC-PT <cit.> contains a lot of extreme viewpoint and scale changes the former strategy looks prevailing in this situation. §.§.§ Evaluation on MegaDepth dataset MegaDepth <cit.> dataset is another collection of photo-tourism images that also provides depth maps and poses. As IMC-PT dataset has a limited diversity of the scenes and their semantics and has only 800 unique images we create a custom test set from MegaDepth dataset to perform the assessment with a larger diversity of data. In particular, our test set consists of 7.5k image pairs with 6k unique images sampled from 25 scenes belonging to 5 semantically different categories. This test set doesn't have any intersections neither with validation nor with test sequences of IMC-PT. Since both IMC-PT and MegaDepth belong to the category of outdoors/photo-tourism datasets their evaluation pipelines and hyper-parameters are shared. Results. Contents of Fig. <ref> and Table <ref> show that evaluation on a more diverse set of data closes the gap between methods - now our method is marginally better than DISK <cit.>. Given that our method and DISK <cit.> share the same training set of MegaDepth <cit.> it is reasonable that they come on top in this evaluation. SuperPoint <cit.> is the second-best method in the category of methods that don't employ reconstructed correspondence labels. Although our gains in rotation estimation compared to it are marginal we obtain noticeably better translation estimates overall. §.§.§ Evaluation on ScanNet dataset To assess the generalization ability of our method we perform the evaluation on ScanNet <cit.> dataset that contains indoor sequences with camera poses and depth maps. Following <cit.> we create validation and test sets by sampling pairs from video sequences with different gaps between frames. Our test set consists of 21k image pairs with 39k unique images sampled from 100 test sequences of the dataset. We perform the evaluation using HardNet <cit.> descriptor - on this dataset it shows performance that is superior to DISK <cit.> for every method. As the ScanNet dataset is captured on a single RGB camera we employ a robust essential matrix estimator from OpenGV <cit.> - the rest of the pipeline remains the same as in previous evaluations. We use the validation subset of ScanNet for tuning hyper-parameters, see details in the supplementary material. Results. Contrary to photo-tourism datasets that depict objects with a lot of texture, ScanNet <cit.> indoor environments contain a lot of surfaces with little to no texture. We believe that this is the main reason why DISK <cit.> shows significantly inferior performance on this set of data (see Table <ref> and Fig. <ref>). Our method, on the other hand, can consistently cope with the challenge and shows the second-best result. 
Given the large size of this test set, our method achieves considerable improvements over SuperPoint <cit.>, Key.Net <cit.> and REKD <cit.> across all thresholds (see Fig. <ref>). R2D2 <cit.> achieves the best performance on this dataset; we found that it is able to provide consistent matches on images with little texture and poor illumination by performing detection along the contours of objects. In summary, the experiments show that Shi-NeSS is the only method ranked in the top three across all the datasets, indicating better generalization ability than the state-of-the-art. Moreover, Shi-NeSS shows the best performance among the self-supervised methods listed in Tab. <ref>. § CONCLUSION In this work, we proposed the Shi-NeSS detector, which combines the hand-crafted Shi detector with the Neural Stability Score. The method does not require any reconstructed correspondence labels and can be trained on arbitrary sets of images without dataset pre-labeling. It achieves state-of-the-art performance on a variety of datasets and downstream tasks, generalizes well, and consistently outperforms other self-supervised methods. In the future, we plan to address the main limitation of our method, the lack of illumination invariance, and to use insights from the evaluation of methods such as DISK and R2D2 to improve our keypoint detection strategy. NeSS may also be used as a weight in non-linear pose refinement or in metric learning of feature descriptors.
http://arxiv.org/abs/2307.02636v1
20230705201002
Influence of interface-induced valley-Zeeman and spin-orbit couplings on transport in graphene-on-WSe$_{2}$ heterostructures
[ "M. Zubair", "P. Vasilopoulos", "M. Tahir" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci", "cond-mat.str-el" ]
[email protected] Department of Physics, Concordia University, 7141 Sherbrooke Ouest, Montreal, Quebec H4B 1R6, Canada [email protected] Department of Physics, Concordia University, 7141 Sherbrooke Ouest, Montreal, Quebec H4B 1R6, Canada [email protected] Department of Physics, Colorado State University, Fort Collins, CO 80523, USA We investigate the electronic dispersion and transport properties of graphene/WSe_2 heterostructures in the presence of a proximity induced spin-orbit coupling (SOC) using a low-energy Hamiltonian, with different types of symmetry breaking terms, obtained from a four-band, first and second nearest-neighbour tight-binding (TB) one. The competition between different perturbation terms leads to inverted SOC bands. Further, we study the effect of symmetry breaking terms on ac and dc transport by evaluating the corresponding conductivities within linear response theory. The scattering-independent part of the valley-Hall conductivity, as a function of the Fermi energy E_F, is mostly negative in the ranges -λ_R⩽ E_F and E_F⩾λ_R when the strength λ_R of the Rashba SOC increases except for a very narrow region around E_F=0 in which it peaks sharply upward. The scattering-dependent diffusive conductivity increases linearly with electron density, is directly proportional to λ_R in the low- and high-density regimes, but weakens for λ_R=0. We investigate the optical response in the presence of a SOC-tunable band gap for variable E_F. An interesting feature of this SOC tuning is that it can be used to switch on and off the Drude-type intraband response. Furthermore, the ac conductivity exhibits interband responses due to the Rashba SOC. We also show that the valley-Hall conductivity changes sign when E_F is comparable to λ_R and vanishes at higher values of E_F. It also exhibits a strong dependence on temperature and a considerable structure as a function of the frequency. Influence of interface-induced valley-Zeeman and spin-orbit couplings on transport in graphene-on-WSe_2 heterostructures M. Tahir August 1, 2023 ========================================================================================================================= § INTRODUCTION Two-dimensional (2D) materials have become a hot topic in solid state physics, especially since the discovery of graphene, both theoretically and experimentally because of their prominent mechanical, optical, electrical and magnetic properties <cit.>. Recently graphene has attracted a lot of attention in the field of spintronics due to its large electronic mobility, low spin-orbit coupling (SOC), negligible hyperfine interaction and gate tunability <cit.>. For a clear example, it has been proven that graphene exhibits a very long spin relaxation length even at room temperature <cit.>. Due to the weak SOC though, it is not a suitable candidate for the observation of important spin-dependent phenomena including the spin-Hall effect <cit.> and anomalous Hall effect <cit.>. To render graphene useful in spintronics, several experimental groups used different techniques to tailor the SOC strength in it through coupling with foreign atoms or materials <cit.>, such as graphene hydrogenation <cit.> or fluorination <cit.> as well as heavy adatom decoration <cit.>. However, these approaches not only reduce the transport quality, but also make it difficult to reproduce <cit.> and detect <cit.> the induced SOC. 
To overcome these difficulties, graphene is recently grown on different novel 2D materials, which are ideal candidates to induce SOC via proximity effects <cit.>. Hexagonal boron nitride (BN) has a weak SOC, and therefore, is not a suitable substrate for the proximity effect <cit.>. The family of 2D transition metal dichalcogenides (TMDCs) are the next best candidates, which have large direct band gaps and giant intrinsic SOC <cit.>. In this respect, graphene on TMDCs has been investigated for transport <cit.> as well as intriguing technological applications, including field-effect tunnelling transistors (FETTs), radio-frequency oscillators, and efficient phototransistors <cit.>. Also, the proximity-induced SOC in graphene/TMDCs heterostructures has recently been shown to depend <cit.> on the twist angle between the lattice of graphene and that of the TMDC. In addition, it has been found in room-temperature experimental studies of the spin-Hall effect that few-layer WS_2 induces a large SOC in graphene, about 17 meV <cit.> as compared to the very weak one in pristine graphene <cit.>. Also, it has been unambiguously demonstrated experimentally that a room-temperature spin-Hall effect in graphene is induced by MoS_2 proximity <cit.>. Moreover, when graphene is placed on a multilayer WS_2 substrate, an additional valley-Zeeman SOC, due to the broken sublattice symmetry, along with the Rashba SOC have been predicted theoretically and observed experimentally <cit.>. This SOC induces a spin splitting of degenerate bands, with out-of-plane spin polarization at the K and K^' points, and an opposite spin splitting in different valleys. Analogous to the Zeeman splitting, the SOC is termed valley-Zeeman because the effective Zeeman fields are valley-dependent. It is the dominant SOC in TMDCs and is also predicted to be induced in graphene on TMDCs <cit.>. To our knowledge though, apart from some spin-transport studies <cit.> and two experimental magneto-transport studies <cit.>, neither ac and dc scattering-dependent charge transport nor the simultaneous effect of valley-Zeeman and Rashba SOCs have been theoretically studied in graphene on TMDCs. In this work we study in detail the effect of the valley-Zeeman and Rashba-type SOCs on ac and dc transport in graphene/WSe_2 heterostructures. There results a mexican hat dispersion <cit.> contrary to other family memebers of TMDCs , e.g., MoS_2, WS_2 etc. <cit.>. Such a dispersion leads to more features in the optical conductivity when the Fermi level moves between the minimum and maximum of the mexican hat. Also, we compare our results with those for pristine graphene. In Sec. II we specify the Hamiltonian and obtain the eigenvalues and eigenfunctions in the presence of symmetry breaking terms. In Sec. III we present general expressions for the conductivities and provide numerical results. Conclusions and a summary follow in Sec. IV. § FORMULATION Graphene is a 2D, one-atom thick planar sheet of bonded carbon atoms densely packed in a honeycomb structure as shown in Fig. <ref> (a). The lattice structure can be viewed as a triangular lattice with two sites A (red filled spheres) and B (blue filled spheres) per unit cell. The arrows indicate the primitive lattice vectors a⃗_1= a (1, 0) and a⃗_2= a(1/2, √(3)/2), with a the triangular lattice constant of the structure, and span the graphene lattice. Further, a⃗_1 and a⃗_2 generate the reciprocal lattice vectors of the Brillouin zone, cf. Fig. 
<ref> (b), given by b⃗_1=4π/√(3) a (√(3)/2, - 1/2) and b⃗_2=4π/√(3) a (0, 1). From the explicit expressions of b⃗_1 and b⃗_2 we find the two inequivalent Dirac points (valleys) given by K⃗ = (4 π/3a) (1, 0) and K⃗^'=(4 π/3 a) (1/2, √(3)/2). The monolayer graphene system is described by the four-band, second nearest-neighbour tight-binding (TB) Hamiltonian <cit.> H = ∑_⟨ i,j ⟩,α t c_iα^† c_jα +∑_i αΔη_c_i c_iα^† c_iα + ∑_⟨⟨ i,j ⟩⟩Δ_ij c_iα^† c_jα^' + 2 i3∑_⟨ i,j ⟩∑_αα^' c_iα^† c_jα^' [ λ_R (s×𝐝̂_ij)_z]_αα^'. Here Δ_ij=iλ_c_iν_ij s_z /3 √(3), c_iα^† creates an electron with spin polarization α at site i that belongs to sublattice A or B, and ⟨ i,j ⟩ (⟨⟨ i,j ⟩⟩) runs over the nearest (second nearest) neighbouring sites. The second term is a staggered on-site potential, which takes into account the effective energy difference experienced by atoms at the lattice sites A (η_c_i=+1) and B (η_c_i=-1), respectively. The third and fourth terms represent the proximity-induced enhancement of the SOC due to a weak hybridization with the heavy atoms in WSe_2. The third term is the valley-Zeeman SOC where ν_ij=+1, if the second nearest hopping is anticlockwise with respect to the positive z axis, and ν_ij=-1 if it is clockwise. The last term is the Rashba SOC parametrized by λ_R. It arises because the inversion symmetry is broken when the graphene sheet is placed on top of WSe_2 as shown in Fig.<ref> (c). Also, 𝐝̂_ij = 𝐝_ij / |𝐝_ij|, where s= (s_x,s_y,s_z) is the Pauli spin matrix and 𝐝_ij the vector connecting the sites i and j in the same sublattice. In Fig. <ref> we plot the numerically evaluated energy dispersion of Eq. (<ref>) to better understand the characteristics of the induced intrinsic SOCs. Near the K point, for λ_c_i= λ_R= 0, the band structure has linear band crossings near k=0 as can be seen from Fig. <ref> (a). For λ_c_i≠ 0 and λ_R=0 the spectrum is gapless and the spin degeneracy is broken away from k=0, see Fig. <ref> (b). Further, if only λ_R is present, the spectrum is also gapless, cf. Fig. <ref> (c). However, a gap is created when both λ_c_i and λ_R are finite, cf. Fig. <ref> (d). We analyze the physics of electrons near the Fermi energy using a low-energy effective Hamiltonian derived from Eq. (<ref>) and a Dirac theory around the K and K^' valleys <cit.>. It reads H^s_η=v_F(ησ_xp_x+σ_yp_y)+Δσ_z+ λσ_0 s η +λ_R(η s_yσ_x-s_xσ_y). Here η=±1 denotes the valleys K and K^', Δ is the mass term that breaks the inversion symmetry, λ =λ_c_i is the valley-Zeeman SOC strength, λ_R the Rashba type SOC strength, (σ_x, σ_y, σ_z) the Pauli matrix that corresponds to the pseudospin (i.e., A-B sublattice), σ_0 is the unit matrix in the sublattice space, and v_F (8.2 × 10^5 m/s) denotes the Fermi velocity of Dirac fermions. For simplicity, we neglect the intrinsic SOC λ_i and consider only the λ_R> λ_i case. Also, we expect that small but finite values of λ_i do not qualitatively affect our results as long as λ >> λ_i. Further, we will also neglect the Δ term in our numerical treatment because λ>> Δ. Upon diagonalizing Eq. (<ref>) we obtain the dispersion E_ξ(k) = l [Δ^2+λ^2+ ℏ^2 v_F^2 k^2+2λ_R^2+ 2 s √(Υ) ]^1/2, where Υ = λ_R^2( λ_R^2-2λΔ) + ℏ^2 v_F^2 k^2(λ_R^2+λ^2) +λ^2Δ^2 and ξ ={l,s}. Further, l= +1 (-1) denotes the conduction (valence) band and s= +1 (-1) represents the spin-up (spin-down) branches. Notice that Eq. (<ref>) has a valley degeneracy despite the valley-Zeeman term. 
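Since the closed-form dispersion above underlies everything that follows, it is useful to note that it can be checked directly against a numerical diagonalization of Eq. (<ref>). A minimal sketch is given below; the valley-Zeeman term is interpreted as λ η s_z σ_0, Δ is set to zero as in the rest of the numerical treatment, the parameter values are illustrative (in meV), and ℏ v_F k is used as the momentum variable. For these values the four numerical eigenvalues coincide with the four branches of Eq. (<ref>).

```python
import numpy as np

# Pauli matrices for the sublattice (sigma) and spin (s) degrees of freedom.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

def H_low_energy(qx, qy, eta=+1, Delta=0.0, lam=3.0, lam_R=6.0):
    """Low-energy Hamiltonian of Eq. (2); q = hbar*v_F*k is measured in meV.
    The first Kronecker factor acts on the sublattice, the second on the spin."""
    H = eta * qx * np.kron(sx, s0) + qy * np.kron(sy, s0)      # Dirac kinetic term
    H += Delta * np.kron(sz, s0)                               # staggered potential
    H += lam * eta * np.kron(s0, sz)                           # valley-Zeeman SOC
    H += lam_R * (eta * np.kron(sx, sy) - np.kron(sy, sx))     # Rashba SOC
    return H

def E_closed_form(q, Delta=0.0, lam=3.0, lam_R=6.0):
    """All four (l, s) branches of the closed-form dispersion, Eq. (3)."""
    ups = lam_R**2 * (lam_R**2 - 2*lam*Delta) + q**2 * (lam_R**2 + lam**2) + lam**2 * Delta**2
    return sorted(l * np.sqrt(Delta**2 + lam**2 + q**2 + 2*lam_R**2 + 2*s*np.sqrt(ups))
                  for l in (+1, -1) for s in (+1, -1))

q = 4.0  # hbar*v_F*k in meV
num = np.sort(np.linalg.eigvalsh(H_low_energy(q, 0.0)))
print("numerical  :", np.round(num, 6))
print("closed form:", np.round(E_closed_form(q), 6))
```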
The normalized eigenfunctions for both valleys are ψ_ξ^+ (k) = N_ξ^+√(S_0)[ 1; A_ξ^η e^iϕ; -i B_ξ^η e^iϕ; -i C_ξ^η e^2iϕ ] e^i k· r, ψ_ξ^- (k) = N_ξ^-√(S_0)[ - A_ξ^η e^iϕ; 1; i C_ξ^η e^2iϕ; -i B_ξ^η e^iϕ ] e^i k· r, respectively, with N_ξ^η = l[1 + ( A_ξ^η) ^2 + ( B_ξ^η) ^2 + ( C_ξ^η) ^2 ]^-1/2, S_0=L_xL_y the area of the sample, and ϕ = tan^-1(k_y/k_x). Further, A_ξ^η = (E_ξ^η - ηΔ -ηλ) / ℏ v_F k, B_ξ^η = 2λ_R[ (E_ξ^η)^2 -( Δ + λ ) ^2]/ ℏ v_F k [ ( E_ξ^η + ηλ ) ^2- Δ^2- ℏ^2 v_F^2 k^2], and C_ξ^η = 2 λ_R ( E_ξ^η -ηΔ -ηλ ) / [( E_ξ^η + ηλ ) ^2-Δ^2- ℏ^2 v_F^2 k^2]. We plot Eq. (<ref>) in Fig. <ref> for different combinations of the λ and λ_R terms whose realistic values fall in the ranges 5-6 meV and 10-15 meV, respectively, as determined experimentally in Ref. nnn7. Here, the larger values of SOCs are used just to see well-resolved bands splitting. For λ= λ_R= 0, the band structure has linear bands crossing near k=0 for both valleys as can be seen from panel (a). For λ≠ 0 and λ_R= 0, the energy dispersion is spin non-degenerate and valley degenerate with a gapless behaviour as shown in panel (b). Further, the energy dispersion shows the gapless behaviour for λ= 0 and λ_R≠ 0 whereas it is spin-split as seen from panel (c). However, for λ and λ_R finite, the Rashba coupling not only creates a gap between the conduction and valence band, by mixing the spin-up and spin-down states, but also produces an avoided crossing, see Fig. <ref> (d). The analytical form of the momentum k_1, at which an avoided crossing occurs, and of the gap E_g=Δ_1 are k_1=1ℏ v_F [(λ^2+λΔ)(λ^2+2 λ_R^2-λΔ)λ^2+ λ_R^2]^1/2, Δ_1=2λ_R[λ^2 + Δ(2 λ+Δ)λ^2+ λ_R^2]^1/2. The density of states (DOS) per unit area corresponding to Eq. (<ref>) is given by D(E)=∑_ζδ(E-E_ζ) with |ζ⟩= |ξ, η ,k ⟩. For λ_R=0 it takes the simple form D(E)=12πℏ^2 v_F^2∑_ξ|El-s λ|Θ(El-s λ-Δ), and for Δ=λ=0 the form D(E)=12πℏ^2 v_F^2∑_ξ|El-s λ_R|Θ(El-(s+1) λ_R). The DOS is shown in Fig. <ref> for several values of λ and λ_R. The black curve is for monolayer graphene, with λ=λ_R=0, and is included for comparison. The E_+- and E_++ dispersions give rise to a square root singularity at E=λλ_R/ √(λ^2+λ_R^2) and a step at E= √(λ^2+ 4 λ_R^2), respectively, as shown by the black dot-dashed curve of Fig. <ref>. The origin of the singularity is the mexican-hat energy dispersion, cf. Fig. <ref>. In addition, the step emerges from the bottom of the E_++ band and is a van Hove singularity associated with the dispersion flattening at this point. The square root singularity is calculated near the mexican-hat minimum E=λλ_R/ √(λ^2+λ_R^2) at which D(E) reads D(E)= k_14 πℏ√(2m^∗E-Δ_1), with m^∗= λ_R (λ^2+λ_R^2)^3/2/ 2 v_F^2λ (λ^2+2 λ_R^2) the effective mass and E_+,-=Δ_1+(ℏ^2/2m^∗)(k-k_1)^2 the energy. This singularity is similar to that of the one-dimensional density of states. In the limit λ_R=0 and λ≠ 0, the DOS has a finite value λ/2 πℏ^2 v_F^2 at E=0 (see blue dashed curve). For E ⩾λ, it increases linearly with E. Also, for λ=0 and λ_R≠ 0, it is finite at E=0 but has a step at E=2 λ_R, see the red dotted curve. § CONDUCTIVITIES We consider a many-body system described by the Hamiltonian H = H_0 + H_I - 𝐑 · 𝐅(t), where H_0 is the unperturbed part, H_I is a binary-type interaction (e.g., between electrons and impurities or phonons), and - 𝐑 · 𝐅(t) is the interaction of the system with the external field F(t) <cit.>. 
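Before specializing this linear-response setup to conductivities, the DOS expressions above can be checked with a quick numerical histogram of the closed-form bands. The sketch below uses illustrative parameter values, a Lorentzian broadening, and an arbitrary overall normalization; it should reproduce the square-root singularity near λλ_R/√(λ²+λ_R²) and the step near √(λ²+4λ_R²) discussed around Fig. <ref>.

```python
import numpy as np

def bands(q, lam=3.0, lam_R=6.0):
    """Closed-form bands of Eq. (3) with Delta = 0; q = hbar*v_F*k in meV."""
    ups = lam_R**4 + q**2 * (lam_R**2 + lam**2)
    return np.array([l * np.sqrt(lam**2 + q**2 + 2*lam_R**2 + 2*s*np.sqrt(ups))
                     for l in (+1, -1) for s in (+1, -1)])

# Sample the dispersion on a dense radial grid; in 2D each shell carries a
# measure proportional to q dq (per valley and per unit area).
qs = np.linspace(1e-4, 60.0, 4000)
energies = np.concatenate([bands(q) for q in qs])
weights = np.concatenate([np.full(4, q) for q in qs])

E_grid = np.linspace(-30.0, 30.0, 601)
gamma = 0.3  # Lorentzian broadening in meV
dos = np.zeros_like(E_grid)
for E, w in zip(energies, weights):
    dos += w * gamma / np.pi / ((E_grid - E) ** 2 + gamma ** 2)
dos /= np.trapz(dos, E_grid)  # arbitrary normalization, for plotting only

# With lam = 3 meV and lam_R = 6 meV, the singularity sits near 2.7 meV and the
# van Hove step near 12.4 meV, cf. the discussion of Fig. 3 in the text.
```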
For conductivity problems we have 𝐅(t) = e 𝐄(t), where 𝐄(t) is the electric field, e the electron charge, 𝐑 = ∑_i r_i , and 𝐫_i the position operator of electron i. In the representation in which H_0 is diagonal the many-body density operator ρ = ρ^d + ρ^nd has a diagonal part ρ^d and a nondiagonal part ρ^nd. For weak electric fields and weak scattering potentials, for which the first Born approximation applies, the conductivity tensor has a diagonal part σ_μν^d and a nondiagonal part σ_μν^nd; the total conductivity is σ_μν^T = σ_μν^d + σ_μν^nd, μ,ν = x,y. In general we have two kinds of currents, diffusive and hopping, with σ_μν^d = σ_μν^dif + σ_μν^col, but usually only one of them is present. If no magnetic field is present, the hop- ping term σ_μν^col vanishes identically and only the term σ_μν^dif survives. For elastic scattering it is given by <cit.> σ_μν^d (ω) = β e^2S_0∑_ζ f_ζ (1 - f_ζ ) v_νζ v_μζ τ_ζ1 + iωτ_ζ , with τ_ζ the momentum relaxation time, ω the frequency, and v_μζ the diagonal matrix elements of the velocity operator. Further, f_ζ = [1 + exp [β (E_ζ - E_F)]]^-1 is the Fermi-Dirac distribution function, β = 1/k_BT and T the temperature. Regarding the contribution σ_μν^nd one can use the identity f_ζ (1 - f_ζ^')[1 - exp [β (E_ζ - E_ζ^')]] = f_ζ - f_ζ^' and cast the original form in the more familiar one <cit.> σ_μν^nd (ω) = iℏ e^2S_0∑_ζ≠ζ^'(f_ζ - f_ζ^') v_νζζ^' v_μζζ^'(E_ζ - E_ζ^')(E_ζ - E_ζ^' + ℏω - i Γ ) , where the sum runs over all quantum numbers ζ and ζ^' with ζ≠ζ^'. The infinitesimal quantity ϵ in the original form has been replaced by Γ_ζ to account for the broadening of the energy levels. In Eq. (<ref>) v_νζζ^' and v_μζζ^' are the off-diagonal matrix elements of the velocity operator. The relevant velocity operators are given by v_x= ∂ H / ℏ∂ k_x and v_y= ∂ H / ℏ∂ k_y. With ζ={l,s.k, η}={ξ, k, η} for brevity, they read ⟨ζ| v_x|ζ^'⟩ = v_F N_ξ^ηN_ξ^'^η (D_ξ,ξ^'^η e^iϕ +F_ξ,ξ^'^η e^-iϕ ) δ_k,k^', ⟨ζ^'| v_y|ζ⟩ = i v_F N_ξ^ηN_ξ^'^η ( D_ξ,ξ^'^η e^-iϕ - F_ξ,ξ^'^η e^iϕ ) δ_k,k^', where D_ξ,ξ^'^η= A_ξ^'^η+ B_ξ^η C_ξ^'^η and F_ξ,ξ^'^η= A_ξ^η+ B_ξ^'^η C_ξ^η. We now calculate the conductivity σ_yx^nd(i ω) given by Eq. (<ref>). Further, the velocity matrix elements (<ref>) and (<ref>) are diagonal in k, therefore k will be suppressed in order to simplify the notation. The summation in Eq. (<ref>) runs over all quantum numbers ξ,ξ^', η, η^', and k. The parameter Γ_ηη^'^ξξ^', that takes into account the level broadening, is assumed to be independent of the band and valley indices, i.e., Γ_ηη^'^ξξ^'=Γ. Using Eqs. (<ref>) and (<ref>) we can express Eq. (<ref>) as σ_yx^nd(i ω) = e^2ℏ^2 v_F^2h∑_ξξ^'∫ dk k (N_ξ^ηN_ξ^'^η)^2 (f_ξ k^η- f_ξ^'k^η)Δ_ξξ^'^η[ ( Δ_ξξ^'^η+ℏω)^2+ Γ^2] ×[Δ_ξξ^'^η+ℏω- i Γ) ][D_ξ,ξ^'^η)^2 - (F_ξ,ξ^'^η)^2] where Δ_ξξ^'^η= E_ξ k^η - E_ξ^' k^η. Further, in the limit Γ=ω=0, Eq. (<ref>) reduces to σ_yx^nd = e^2ℏ^2 v_F^2h∑_ξξ^'∫ dk k (N_ξ^ηN_ξ^'^η)^2 (f_ξ k^η- f_ξ^'k^η)(Δ_ξξ^'^η)^2 ×[(D_ξ,ξ^'^η)^2 - (F_ξ,ξ^'^η)^2] In the valley-Hall effect electrons from regions near the inequivalent K and K^' valleys flow to opposite transverse edges of the system, in the presence of SOCs when a longitudinal electric field is applied <cit.>. The valley-Hall conductivity corresponding to Eq. (<ref>) is defined by σ_yx^v = ∑_s s^'σ_yx^nd (η=+,s, s^') - σ_yx^nd (η=-,s, s^'). The spin-Hall conductivity σ_yx^s, corresponding to Eq. (<ref>), is finite only when both the Kane-Mele and valley- Zeeman SOCs are present. Hence, even in the presence of Rashba SOC, σ_yx^s vanishes <cit.>. 
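To make the structure of these expressions concrete, the interband sum of the σ_yx^nd type can be evaluated numerically by diagonalizing the low-energy Hamiltonian on a k-grid and building the velocity matrix elements v_μ = ∂H/ℏ∂k_μ in the eigenbasis. The sketch below is a schematic re-implementation, not the code used for the figures: overall prefactors and units are deliberately omitted, Δ is set to zero, the sign convention is left loose, and the parameter values are illustrative.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

def hamiltonian_and_velocities(qx, qy, eta, lam=3.0, lam_R=6.0):
    """Eq. (2) with Delta = 0; energies in meV, q = hbar*v_F*k.
    The velocity operators are the momentum derivatives of H (up to a factor v_F)."""
    H = (eta * qx * np.kron(sx, s0) + qy * np.kron(sy, s0)
         + lam * eta * np.kron(s0, sz)
         + lam_R * (eta * np.kron(sx, sy) - np.kron(sy, sx)))
    return H, eta * np.kron(sx, s0), np.kron(sy, s0)

def hall_like_sum(eta, E_F=1.0, Gamma=0.2, qmax=40.0, n=151):
    """dc interband (Hall-type) Kubo sum for one valley; prefactor omitted."""
    qs = np.linspace(-qmax, qmax, n)
    dq = qs[1] - qs[0]
    total = 0.0
    for qx in qs:
        for qy in qs:
            H, vx, vy = hamiltonian_and_velocities(qx, qy, eta)
            E, U = np.linalg.eigh(H)
            f = (E < E_F).astype(float)       # zero-temperature occupations
            Vx = U.conj().T @ vx @ U          # velocity matrix elements <a|v|b>
            Vy = U.conj().T @ vy @ U
            for a in range(4):
                for b in range(4):
                    if a == b or abs(f[a] - f[b]) < 1e-12:
                        continue
                    dE = E[a] - E[b]
                    # Broadened interband contribution, Gamma regularizes dE -> 0.
                    total += (f[a] - f[b]) * (Vy[a, b] * Vx[b, a]).imag / (dE**2 + Gamma**2)
    return total * dq**2

# Valley-Hall combination: difference between the K (eta=+1) and K' (eta=-1) valleys.
sigma_v = hall_like_sum(+1) - hall_like_sum(-1)
print("valley-Hall response (arbitrary units):", sigma_v)
```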
Since a spin current is defined by J_s = ( ℏ / 2e) (J_↑-J_↓), we have to multiply σ_yx^v by 1/2e <cit.>. Further, we find that charge Hall conductivity always vanishes σ_yx^c = ∑_η s s^'σ_yx^nd (η,s, s^')=0 The component σ_xx^nd(i ω) is also obtained from Eq. (<ref>): σ_xx^nd(i ω) = i e^2ℏ^2 v_F^2 h∑_ηξξ^'∫ dk k (N_ξ^ηN_ξ^'^η)^2 (f_ξ k^η- f_ξ^'k^η)Δ_ξξ^'^η[ ( Δ_ξξ^'^η+ℏω)^2+ Γ^2] ×[Δ_ξξ^'^η+ℏω- i Γ) ][D_ξ,ξ^'^η)^2 + (F_ξ,ξ^'^η)^2]. For λ=0 and λ_R≠ 0, Eq. (<ref>) vanishes because the factor (D_ξ,ξ^'^η)^2 - (F_ξ,ξ^'^η)^2 becomes zero, whereas Eq. (<ref>) survives. Moreover, in the limit λ=λ_R=0, Eq. (<ref>) reduces to the optical conductivity of pristine graphene, which is independent of ℏω and given by e^2/2h <cit.>. We now consider the diagonal component σ_xx^d given by Eq. (<ref>). Using Eq. (<ref>), with ξ=ξ^', we obtain σ_xx^d (i ω) = e^2 v_F^2βπ∑_ηξ∫ dk k (N_ξ^η)^4 f_ξ k^η (1- f_ξ k^η) ×(A_ξ^η+ B_ξ^η C_ξ^η)^2 τ_ξ k^η1+ i ωτ_ξ k^η At very low temperatures we can make the approximation β f_ξ k^η (1- f_ξ k^η)≈δ(E_ξ-E_F) and τ_ξ k^η=τ_ξ k_F^η because all states untill the Fermi level are occupied. In Fig. <ref> we plot Eq. (<ref>) in the dc limit (ω=0) as a function of E_F for Γ=0.2 meV, λ = 3 meV and for different values of λ_R. When E_F is in the gap, i.e., in the range -λλ_R/√(λ^2+λ_R^2) ⩽ E_F⩽λλ_R/√(λ^2+λ_R^2), the valley-Hall conductivity is quantized in units of 2e/2h similar to the case of gapped graphene and topological insulators <cit.>. The reason is that the factor ∑_ηξξ^' (N_ξ^ηN_ξ^'^η)^2 [(D_ξ,ξ^'^η)^2 - (F_ξ,ξ^'^η)^2]/(Δ_ξξ^'^η)^2, called Berry curvature Ω(k), of Eq. (<ref>) in the limit ω=0 has a peak, which is well covered by occupied states for E_F> λλ_R/√(λ^2+λ_R^2). As a consequence, the valley-Hall conductivity approaches the quantized value. For λλ_R/√(λ^2+λ_R^2) ⩽ E_F⩽λ_R, σ_yx^v decreases with E_F. Further, as can be seen, when E_F becomes comparable to λ_R, a sign change occurs in the conductivity which later vanishes at higher values of E_F, E_F>>√(λ^2+ 4 λ_R^2). The change in sign is due to the Rashba coupling between the spin-up and spin-down bands. Furthermore, this off-diagonal term in spin space permits transitions between two conduction spin subbands (see Eq. (<ref>)), that could be interpreted as spin-flip transitions near the band touching. In addition, the coupling strength between opposite spin bands becomes weaker as λ_R increases. As a result, the negative part of the conductivity due to the spin-up band diminishes and σ_yx^v shows the usual behaviour of gapped graphene and topological insulators <cit.>. Further, as can be seen in the inset, the band gap increases with λ_R. Also, the value of the conductivity at E_F=0 is due to the finite one of Γ (= 0.2 meV); if we take Γ=0, the conductivity diverges at E_F=0 but its overall qualitative behavior remains as shown. We now take into account the effect of temperature T on the valley-Hall conductivity contained in the Fermi function, which is independent of electron-phonon interaction in the first Born approximation <cit.>. The valley-Hall conductivity is evaluated numerically with the help of Eq. (<ref>) and plotted in Fig. <ref> for four values of T. We find a strong T dependence, particularly when the Fermi level is in the gap. The quantization of the valley-Hall conductivity is destroyed at high values of T. This occurs when the thermal broadening k_BT becomes comparable to the energy gap. 
Notice that the effect of temperature on σ_yx^v is similar to that on the spin-Hall conductivity in a graphene/MoS_2 heterostructure by considering valley-Zeeman and Kane-Mele SOCs in the absence of the Rashba SOC. Various transition energies, which play an important role in the optical conductivity, are shown in Fig. <ref> for λ,λ_R≠ 0. Their analytical expressions are displayed in table <ref>. Notice that for E_F= 6.6 meV, the energies Δ_a and Δ_b, indicated with black arrows, become also important in optical transitions, since E_F crosses the curve E_+- at two values of the momentum. However, for E_F=9.6 meV, only Δ_b contributes to optical transitions because E_F cuts E_+- curve only at one value of the momentum. In Fig. <ref>, we show possible allowed interband and intraband transitions by contrasting the case λ≠ 0, λ_R=0 in the upper panels and the case λ = 0, λ_R≠ 0 in the lower panels. The blue arrows represent the interband transitions E_+-→ E_++ for 0 < E_F < λ and 0 < E_F < λ_R as can be seen in Fig. <ref> (b) and (d). The black arrows represent the allowed interband transitions E_-+→ E_+- (E_++) and E_–→ E_+- (E_++) for E_F =0 and E_F≠ 0, repectively, while the red arrows indicate intraband transitions that occur near E_F. Now we present results for the real part of Eqs. (<ref>) and (<ref>) (Reσ_xx=Reσ_xx^d+Reσ_xx^nd), evaluated numerically, versus ℏω using a Lorentzian form of Dirac delta function and taking Γ=0.2 meV for T ≠ 0. We start from the upper panel of Fig. <ref> by considering the case λ≠ 0 and λ_R=0. The transitions are vertical for photon's momentum q∼ 0 and connect the filled valence band to empty conduction band, see Fig. <ref> (a). For the case of E_F=0, intraband response appears due to the transition E_+-→ E_+- and has a δ function form, centred around ℏω=0, which broadens the peak when any kind of scattering is taken into account. Further, intraband responses occur when the Fermi level is located away from the Dirac point. For ℏω=2 λ we obtain another Dirac delta peak due to the transition from E_–→ E_+-, which is also broadened through i πδ(x)=lim_Γ→ 0(1/x - i Γ), cf. Eq. (<ref>). For 0< E_F< λ, the new absorption peaks appear at ℏω= 2E_F and ℏω= 2 (E_F+λ) due to the possible transitions E_–→ E_+- and E_-+→ E_+-. For E_F>λ, the absorption peaks disappear below ℏω < 2λ because the transition E_–→ E_+- is no longer possible due to the filling of states below the Fermi level that are Pauli blocked. Further, the Drude peak persists at low ℏω, but now two other pieces of interband transitions emerge with onsets at Δ_a+E_F and Δ_b+E_F. In the lower panel of Fig. <ref> we show the results for real part of the longitudinal conductivity for λ=0, λ_R≠ 0 for different values of E_F. For E_F=0, we can see that there is a peak at 2λ_R which is the separation between E_– and E_+- bands. In addition, there is a kink at 4λ_R due to the transition E_-+→ E_++. As we increase the Fermi level, say, 0 <E_F< λ_R and E_F> λ_R, the peak becomes sharper and we see a onset of a Drude contribution at low ℏω due to intraband transitions E_+-→ E_+- and E_++→ E_++ in contrast to E_F=0 case (black dot-dashed curve). Further, for finite values of E_F, we see the steps at 2E_F similar to monolayer graphene (λ=λ_R=0) as well as features at Δ_a+E_F, Δ_b-E_F, and Δ_b+E_F above which we attain the flat absorption like pristine graphene <cit.>. Note that our results are similar to bilayer graphene <cit.>. 
But here, the Rashba SOC, which allows the interband transitions between opposite spin bands, gives rise to the absorption peaks, while these peaks in bilayer graphene are due to interlayer hopping between two graphene sheets. The real part of the longitudinal conductivity as a function of the photon energy, for λ,λ_R≠ 0, is show in Fig. <ref> for several values of E_F: (i) just below the maximum of the mexican hat i.e. λλ_R/(λ^2 + λ_R^2)^1/2 < E_F< λ (ii) just above the mexican hat, i.e., for λ < E_F< (λ^2 + 2 λ_R^2)^1/2. For E_F=0 we find a large absorption peak at approximately 2λ_R, which corresponds to transitions between the two square-root singularities of the DOS, see Fig. <ref>, or transitions between the two minima of the mexican hat structures of the E_– and E_-+ bands. As E_F moves into the mexican hat, this feature disappears because states below E_F are occupied and, therefore, Pauli blocked. Further, the major peaks are due to the transitions E_+-→ E_++, E_–→ E_-+, E_–→ E_++ and E_-+→ E_++, respectively. The gap energies which contribute to the onset of these transition peaks are indicated in Fig. <ref> and given analytically in table <ref>. Also, the conductivity retains the flat absorption at sufficiently higher values of ℏω similar to pristine graphene <cit.>. Plots of the real part of σ_yx^v for E_F=0 ( black dotdashed curve) and E_F≠ 0 (red and blue dashed curves) in the absence of Rashba SOC (λ_R=0) are shown in Fig. <ref>. In the dc limit, the expected value of the valley-Hall conductivity is obtained as can be seen in Fig. <ref> (black curve). If the system is illuminated by photons of frequency ω, the amplitude of the absorption peaks is suppressed for E_F=0, while an increase in it is observed for E_F≠ 0. For ℏω= 2 |λ| a strong valley-Hall response is observed for E_F≠ 0. Therefore, it can be expected that a stronger valley-Hall response may be accessible when the photon energy is tuned to the valley-Zeeman SOC. For ℏω > 2 |λ|, σ_yx^v decreases rapidly and approaches zero at sufficiently higher values of ℏω. The real part of the valley-Hall conductivity is shown in Fig. <ref> for several values of E_F. In the dc limit (ω=0), we obtain the quantized value of the valley-Hall conductivity (Reσ_yx^v=e/h) for E_F=0 (black curve in the upper panel). If the system is subjected to photon of frequency ω, an increase in the magnitude of the valley-Hall response is observed. The absorption peaks occur at the same onset energies as indicated in Fig. <ref>. For example, the first peak appeared when ℏω= 2Δ_1 or transition between the minima of the E_– and E_+- bands. Further, the change in sign of the conductivity is due to the Rashba SOC, which is responsible for the coupling between spin-up and spin-down bands e.g., the transition from the maximum of mexican hat of E_– band to the minimum of E_++ band around k=0. Furthermore, for finite values of E_F we obtain new features in the optical spectrum due to the emergence of new transitions such as E_+-→ E_++, e.g., some features are completely removed due to Pauli blocking. Also, the valley-Hall response is diminished at sufficiently high frequencies. However, in the case of λ_R<λ (lower panel), the difference among the optical transition energies is significantly enhanced due to larger values of λ and new features emerge at the momenta at which E_F crosses the E_+- band (see Fig. <ref>). 
Moreover, some of the optical transitions are no longer possible, e.g., E_–→ E_+- when E_F is just above the mexican hat because the states below it are occupied and, therefore, Pauli blocked (blue curve). In Fig. <ref> we plot σ_xx^d, from Eq. (<ref>), by evaluating it numerically versus electron concentration (n_e) and using the expression of τ given in Appendix A but evaluated at the Fermi level, k=k_F. The conductivity increases with E_F and therefore with the carrier density n_e. The diffusive conductivity increases linearly with n_e but cusp-like features appear when E_++ band begin to occupied at specific values of n_e in contrast to pristine graphene <cit.>. This behaviour makes graphene/WS_2 a suitable candidate for charge switches contrary to pristine graphene. The screening effect becomes significantly weaker when only the λ term is present. Moreover, the conductivity shown in Fig. <ref> increases in the low-density regime for λ=0 and λ_R≠ 0 as compared to the λ≠ 0, λ_R= 0 and λ, λ_R≠ 0 case. In the limit λ=λ_R=0 we obtain the result similar to pristine graphene <cit.>. § SUMMARY AND CONCLUSION We studied the energy dispersion of graphene/WSe_2 heterostructures by using a TB model in the presence of valley-Zeeman and Rashba SOCs. We found that the effective Hamiltonian (2) derived from the TB one (1) nicely captures the low-energy physics near the K and K^' valleys. We demonstrated that the density of states has a finite value around E=0 in both cases λ≠ 0, λ_R=0 and λ = 0, λ_R≠ 0. In addition, it has a square root singularity when both λ and λ_R are present. This singularity is similar to that in biased bilayer graphene; however, here it is due to the Rashba SOC whereas in biased bilayer graphene it is due to interlayer hopping. We also found that the ac and dc valley-Hall conductivities change sign in the presence of the λ_R term, which leads to interband transitions. Also, the band gap is enhanced by increasing the strength λ_R. Further, for λ_R >> λ the valley-Hall conductivity exhibits a behaviour similar to that in gapped graphene and topological insulators <cit.>. The screening effect in the diffusive conductivity is dominant only when the Rashba SOC is present, whereas it is significantly suppressed for λ≠ 0,λ_R= 0. Also, the conductivity increases with λ_R in the low- and high-density regimes, see Fig. <ref>. The dc valley-Hall conductivity changes sign when E_F is comparable to λ_R and vanishes at higher values of E_F, cf. Fig. 5. It also exhibits a strong temperature dependence when the Fermi level in the gap, cf. Fig. 6. The intraband response of the ac longitudinal conductivity for λ_R=0 (see upper panel of Fig. <ref>) shifts towards lower photon energies when E_F increases compared to λ_R≠ 0 (see lower panel of Fig. <ref> and Fig. <ref>). We also noted the switching on and off of the Drude response when the Fermi energy is varied (see Fig. <ref>), which may be of interest in technological applications. In addition, for λ,λ_R≠ 0 new onsets in the optical conductivity appear due to the shifting of the Fermi level through the mexican hat structure (see Figs. <ref> and <ref>), which may be a promising feature in optical experiments. Our findings may be pertinent to developing future spintronics and valleytronics devices such as field-effect tunnelling transistors, memory devices, phototransistors, etc. M. Z. and P. V. acknowledge the support of the Concordia University Grant No. VB0038 and a Concordia University Graduate Fellowship. The work of M. T. 
was supported by Colorado State University. * § RELAXATION TIME The relaxation time is generally a function of the incoming electron's wave vector and at low temperatures only states near the Fermi level will contribute to transport and single-particle properties. Below we provide expressions for the relaxation time at the Fermi energy in the limiting cases Δ, λ≠ 0, λ_R=0 and Δ, λ = 0, λ_R≠ 0, because in these cases the summation over final states can be performed analytically. Within the first Born approximation the standard formula for the momentum relaxation time has the form 1τ_ζ=1τ_ξ k^η = 2 π n_iℏ∑_ξ^', η^', k^'|⟨ξ, η, k | U(𝐫) |ξ^', η^', k^'⟩|^2δ(E_ξ k-E_ξ^'k^') (1-cosθ), where U(𝐫) is the impurity potential, n_i the impurity density, and θ the angle between the initial k and final k^' wave vectors. Equation  (<ref>) holds only for elastic scattering (ξ=ξ^',η=η^', k=k^') and for central potentials U(𝐫) i.e. U(𝐫)=U(r). The results for two types of impurity potentials are as follows. Short-range impurities. We have U(𝐫)= U_0δ(𝐫-𝐫_𝐢) where 𝐫 and 𝐫_𝐢 are the position vectors of the electron and impurity, respectively, and U_0 is the strength of potential. In this case U(𝐪)= U_0 is the Fourier transform of U(𝐫)= (1/√(L_xL_y))∑_q U(q) e^i 𝐪.𝐫 with |𝐪|= 2k sin(θ/2). The results are: i) λ_R=0 1τ_s k_F^η = V_0^2 n_i(N_s^η)^4ℏ√(Δ^2+ (ℏ v_F k)^2)(ℏ v_F)^2 [(A_s^η)^4- (A_s^η)^2+1]. In the limit Δ, λ=0, the above result reduces to graphene's scattering time Eq. (24) of Ref. <cit.> 1τ_ k_F =V_0^2 n_ik4 ℏ^2 v_F. Also, for λ=0 Eq. (<ref>) agrees with the result for topological insulators <cit.>. ii) Δ, λ=0 1τ_s k_F^η = V_0^2 n_i(N_s^η)^4√(λ_R^2+ (ℏ v_F k)^2)ℏ^3 v_F^2 [[(A_s^η)^2+(B_s^η]^2]^2+ (C_s^η)^4- [1+ (C_s^η)^2] [(A_s^η)^2+(B_s^η)^2] -2 (C_s^η)^2 +1]. Long-range impurities. We assume U(𝐫)= e Q e^-k_sr/4 πϵ_0ϵ r, where k_s is the screening wave vector, Q is the charge of the impurity, and ϵ the dielectric constant. In this case U(q)= 2π U_0/√(k_s^2+q^2) with U_0= eQ/4πϵ_0ϵ. The results are: i) λ_R=0 1τ_s k_F^η = V_0^2 n_i(N_s^η)^4√(Δ^2+ (ℏ v_F k)^2)2 ℏ^3 v_F^2 k^2 [[1+(A_s^η)^4][1-a_s√(a_s^2+1)]+ 2 (A_s^η)^2[ 2 a_s^2-a_s(2 a_s^2+1)√(a_s^2+1)]]. In the limit Δ=λ=0 we set a_s= k_s/2k and obtain the relaxation time in pristine graphene <cit.> 1τ_ k_F = V_0^2 n_i (a_s-√(a_s^2+1))^24 ℏ^2 v_F k. Moreover, for λ=0 Eq. (<ref>) gives the relaxation time for topological insulators <cit.>. ii) Δ, λ=0 1τ_s k_F^η = V_0^2 n_i(N_s^η)^4√(λ_R^2+ (ℏ v_F k)^2)2 ℏ^3 v_F^2 k^2 [1+[(A_s^η)^2+(B_s^η)^2]^2 + (C_s^η)^4 [1-a_s√(a_s^2+1)] + 2 [1+ (C_s^η)^2] [A_s^η)^2+(B_s^η)^2] [2 a_s^2-a_s(2 a_s^2+1)√(a_s^2+1)] - 2 (C_s^η)^2[ 1+a_s√(a_s^2+1) + 8 a_s^3√(a_s^2+1) -8 a_s^2(2a_s^2+1) ] ]. 99 r1 A. H. C. Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009). r2 N. Tombros, C. Jozsa, M. Popinciuc, H. T. Jonkman, and B. J. van Wees, Nature 448, 571 (2007). r3 J. Ingla-Aynés, M. H. D. Guimarães, R. J. Meijerink, P. J. Zomer, and B. J. van Wees, Phys. Rev. B 92, 201410(R) (2015). r4 M. Drögeler, C. Franzen, F. Volmer, T. Pohlmann, L. Banszerus, M. Wolter, K. Watanabe, T. Taniguchi, C. Stampfer, and B. Beschoten, Nano Lett. 16, 3533 (2016). r5 C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801 (2005). r6 Z. Qiao, S. A. Yang, W. Feng, W.-K. Tse, J. Ding, Y. Yao, Y. Wang and Q. Niu, Phys. Rev. B 82, 161414 (2010); C.-X. Liu, S.-C. Zhang and X.-L. Qi, Annu. Rev. Condens. Matter Phys. 7, 301 (2016); Y. Ren, Z. Qiao and Q. Niu, Rep. Prog. Phys. 79, 066501 (2016); H. Weng, R. Yu, X. 
Hu, X. Dai and Z. Fang, Adv. Phys. 64, 227 (2015). r7 A. H. C Neto, and F. Guinea, Phys. Rev. Lett. 103, 026804 (2009). r8 C. Weeks, J. Hu, J. Alicea, M. Franz and R. Wu, Phys. Rev. X 1, 021001 (2011). r9 J. Ding, Z. Qiao, W. Feng, Y. Yao and Q. Niu, Phys. Rev. B 84, 195444 (2011). r10 J. Hu, J. Alicea, R. Wu and M. Franz, Phys.Rev. Lett. 109, 266801 (2012). r11 D. Ma, Z. Li and Z. Yang, Carbon 50, 297 (2012). r12 K.-H. Jin and S.-H. Jhi, Phys. Rev. B 87, 075442 (2013). r13 A. Ferreira, T. G. Rappoport, M. A. Cazalilla and A. H. C. Neto, Phys. Rev. Lett. 112, 066601 (2014). r14 A. A. Kaverzin and B. J. van Wees, Phys. Rev. B 91, 165412 (2015). r15 J. Balakrishnan, G. K. W. Koon, M. Jaiswal, A. H. C. Neto and B. C. Ozyilmaz, Nat. Phys. 9, 284 (2013). r16 X. Hong, S.-H. Cheng, C. Herding, and J. Zhu, Phys. Rev. B 83, 085410 (2011). r17 Z. Jia, B. Yan, J. Niu, Q. Han, R. Zhu, D. Yu and X. Wu, Phys. Rev. B 91, 085411 (2015). r18 U. Chandni, E. A. Henriksen and J. P. Eisenstein, Phys. Rev. B 91, 245402 (2015). rr19 Y.-C. Lin, N. Lu, N. Perea-Lopez, J. Li, Z. Lin, X. Peng, C. H. Lee, C. Sun, L. Calderin, P. N. Browning, M. S. Bresnehan, M. J. Kim, T. S. Mayer, M. Terrones, and J. A. Robinson, ACS Nano 8, 3715 (2014). rr20 M.-Y. Lin, C.-E. Chang, C.-H. Wang, C.-F. Su, C. Chen, S.-C. Lee, and S.-Y. Lin, Appl. Phys. Lett. 105, 073501 (2014). rr21 A. Azizi, S. Eichfeld, G. Geschwind, K. Zhang, B. Jiang, D. Mukherjee, L. Hossain,A. F. Piasecki, B. Kabius, J. A. Robinson, and N. Alem, ACS Nano 9, 4882 (2015). rr22 Y. Kim, D. Choi, W. J. Woo, J. B. Lee, G. H. Ryu, J. H. Lim, S. Lee, Z. Lee, S. Im, J.-H. Ahn, W.-H. Kim, J. Park, and H. Kim, Appl. Surf. Sci. 494, 591 (2019). rr23 A. M. Alsharari, M. M. Asmar, and S. E. Ulloa, Phys. Rev. B 98, 195129 (2018). rr24 L. A. Benitez, J. F. Sierra, W. S. Torres, A. Arrighi, F. Bonnell, M. V. Costache, and S. O. Valenzuera, Nat. Phys. 14, 303 (2018). rr25 A. W. Cummings, J. H. Garcia, J. Fabian, and S. Roche, Phys. Rev. Lett. 119, 206601 (2017). r19 W. Han, R. K. Kawakami, M. Gmitra, and J. Fabian, Nat. Nanotechnol. 9, 794 (2014). r20 K. F. Mak, C. Lee, J. Hone, J. Shan, and T. F. Heinz, Phys. Rev. Lett. 105, 136805 (2010). r21 A. Kormányos, G. Burkard, M. Gmitra, J. Fabian, V. Zólyomi, N. D. Drummond, and V. Falḱo, 2D Mater. 2, 022001 (2015). r22 C.-P. Lu, G. Li, K. Watanabe, T. Taniguchi, and E. Y. Andrei, Phys. Rev. Lett. 113, 156804 (2014). r23 S. Larentis, J. R. Tolsma, B. Fallahazad, D. C. Dillen, K. Kim, A. H. MacDonald, and E. Tutuc, Nano Lett. 14, 2039 (2014). nnn1 L. Banszerus,T. Sohier, A. Epping,F. Winkler, F. Libisch, F. Haupt, K. Watanabe, T. Taniguchi, K. Muller-Caspary, N. Marzari, F. Mauri, B. Beschoten, and C. Stampfer, arXiv: 1909.09523. r24 S. Bertolazzi, D. Krasnozhon, and A. Kis, ACS Nano 7, 3246 (2013). r25 K. Roy, M. Padmanabhan, S. Goswami, T. P. Sai, G. Ramalingam, S. Raghavan, and A. Ghosh, Nat. Nanotechnol. 8, 826 (2013). r26 W. Zhang, C.-P. Chuu, J.-K. Huang, C.-H. Chen, M.-L. Tsai, Y.-H. Chang, C.-T. Liang, Y.-Z. Chen, Y.-L. Chueh, J.-H. He, M.-Y. Chou, and L.-J. Lib, Sci. Rep. 4, 3826 (2014). r27 N. A. Kumar, M. A. Dar, R. Gul, and J. Baek, Mater. Today 18, 286 (2015). rr26 L. Britnel, R. V. Gorbachev, R. Jalil, B. D. Belle, F. Schedin, A. Mishchenko, T. Georgiou, M. I. Katsnelson, L. Eaves, S. V. Morozov, N. M. R. Peres, J. Leist, A. K. Geim1, K. S. Novoselov and L. A. Ponomarenko, Science 335, 947 (2012). rr27 A. Mishchenko, J. S. Tu, Y. Cao, R. V. Gorbachev, J. R. Wallbank, M. T. Greenaway, V. E. Morozov, S. V. Morozov, M. J. 
Zhu, S. L. Wong, F. Withers, C. R. Woods, Y-J. Kim, K. Watanabe, T. Taniguchi, E. E. Vdovin, O. Makarovsky, T. M. Fromhold, V. I. Fal’ko, A. K. Geim, L. Eaves and K. S. Novoselov, Nat. Nanotechnol. 9, 808 (2014). rr28 K. Roy, M. Padmanabhan, S. Goswami, T. P. Sai, G. Ramalingam, S. Raghavan and A. Ghosh, Nat. Nanotechnol. 8, 826 (2013). nnn2 A. David, P. Rakyta, A. Kormányos, and Guido Burkard, Phys. Rev. B 100, 085412 (2019). nnn3 Y. Li and M. Koshino, Phys. Rev. B 99, 075438 (2019). r28 A. Avsar, J. Y. Tan, T. Taychatanapat, J. Balakrishnan, G. K. W. Koon, Y. Yeo, J. Lahiri, A. Carvalho, A. S. Rodin, E. C. T. OFarrell, G. Eda, A. H. C. Neto and B. Özyilmaz, Nat. Commun. 5, 4875 (2014). r29 M. Gmitra, S. Konschuh, C. Ertler, C. Ambrosch-Draxl, and J. Fabian, Phys. Rev. B 80, 235431 (2009). r30 C. K. Safeer, J. Ingla-Aynés, F. Herling, J. H. Garcia, M. Vila, N. Ontoso, M. Reyes Calvo, S. Roche, L. E. Hueso, and F. Casanova, Nano Lett. 19, 1074 (2019). r31 B. Yang, M.-F. Tu, J. Kim, Y. Wu, H. Wang, J. Alicea, R. Wu, M. Bockrath and J. Shi1, 2D Mater. 3, 031012 (2016). rr31 S. Zihlmann, A. W. Cummings, J. H. Garcia, M. Kedves, K. Watanabe, T. Taniguchi, C. Schonenberger, and P. Makk, Phys. Rev. B 97, 075434(R) (2018). rr32 Z. Wang, D. K. Ki, H. Chen, H. Berger, A. H. MacDonald, and A. F. Morpurgo, Nat. Commun. 6, 8339 (2015). st Jose H. Garcia, Marc Vila, Aron W. Cummings, and Stephan Roche, Chem. Soc. Rev. 47, 3359 (2018); A. Mreńca-Kolasińska, B. Rzeszotarski, and B. Szafran, Phys. Rev. B 98, 045406 (2018). mt T. Völkl, T. Rockinger, M. Drienovsky, K. Watanabe, T. Taniguchi, D. Weiss, and J. Eroms, Phys. Rev. B 96, 125405 (2017); B. Yang, E. Molina, J. Kim, D. Barroso, M. Lohmann, Y. Liu, Y. Xu, R. Wu, L. Bartels, K. Watanabe, T. Taniguchi, and Jing Shi, Nano Lett. 18, 3580 (2018). rr33 M. Gmitra, D. Kochan, P. Högl, and J. Fabian, Phys. Rev. B 93, 155104 (2016). rr34 M. Gmitra and J. Fabian, Phys. Rev. B 92, 155403 (2015). r32 M. Gmitra, D. Kochan, and J. Fabian, Phys. Rev. Lett. 110, 246602 (2013). nnn7 Z. Wang, D.-K. Ki, J. Y. Khoo, D. Mauro, H. Berger, L. S. Levitov, and A. F. Morpurgo1, Phys. Rev. X 6, 041020 (2016). r34 M. Charbonneau, K. M. Van Vliet, and P. Vasilopoulos, J. Math. Phys. 23, 318 (1982). r37 D. Xiao, W. Yao, and Q. Niu, Phys. Rev. Lett. 99, 236809 (2007). nnn6 A. Rycerz, J. Tworzydlo, and C. W. J. Beenakker, Nat. Phys. 3, 172 (2007). r35 Z. Li and J. P. Carbotte, Phys. Rev. B 86, 205425 (2012). r36 V. P. Gusynin, S. G. Sharapov, and J. P. Carbotte, Phys. Rev. Lett. 96, 256802 (2006). r38 V. Vargiamidis and P. Vasilopoulos, J. Appl. Phys. 116, 063713 (2014). r39 D. S. L. Abergel and V. I. Falko, Phys. Rev. B 75, 155430 (2007). r40 E. J. Nicol and J. P. Carbotte, Phys. Rev. B 77, 155409 (2008). r41 T. Stauber, N. M. R. Peres, and F. Guinea, Phys. Rev. B 76, 205423 (2007). r42 K. Nomura and A. H. MacDonald, Phys. Rev. Lett. 98, 076602 (2007). r43 A. A. Patel and S. Mukerjee, Phys. Rev. B 86, 075411 (2012).
http://arxiv.org/abs/2307.00721v2
20230703030022
Neural Polytopes
[ "Koji Hashimoto", "Tomoya Naito", "Hisashi Naito" ]
cs.LG
[ "cs.LG", "cs.GR", "hep-th", "math.GT" ]
Neural Polytopes. Koji Hashimoto (Department of Physics, Kyoto University, Kyoto, Japan), Tomoya Naito (RIKEN Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS), Wako, Japan; Department of Physics, The University of Tokyo, Tokyo, Japan), and Hisashi Naito (Graduate School of Mathematics, Nagoya University, Nagoya, Japan). Correspondence: [email protected]. Keywords: polytopes, polyhedra, polygons. We find that simple neural networks with ReLU activation generate polytopes as an approximation of a unit sphere in various dimensions. The species of polytopes are regulated by the network architecture, such as the number of units and layers. For a variety of activation functions, generalization of polytopes is obtained, which we call neural polytopes. They are a smooth analogue of polytopes, exhibiting geometric duality. This finding initiates research of generative discrete geometry to approximate surfaces by machine learning. § INTRODUCTION The approach humans have taken to modeling nature is to approximate smooth curved surfaces in nature with linear objects such as planes. This is natural in terms of recognizing natural objects composed of smooth curves, since the simplest solution to the equations of motion in physics is linear motion with constant velocity. In ancient Greece, polygons were discovered as a way to approximate a circle by straight line segments, and polyhedra as a way to approximate a sphere by pieces of planes, and in particular, it was assumed by Plato that the existence of only a finite number of regular polyhedra was the fundamental understanding of nature. Thus, approximating rotationally symmetric objects (circles and spheres) with piecewise linear functions is at the root of modeling nature. Machine learning, on the other hand, is known to be able to approximate any function if the network architecture is multi-unit multilayer, as stated in the universal approximation theorem <cit.>. In particular, taking the activation function to be a step function or ReLU can be identified with approximation by a piecewise constant or piecewise linear function. Therefore, the modeling of approximation of natural phenomena by neural networks should naturally lead us to the rediscovery of discrete geometry, when we go back to the motivation in ancient Greek times and target the rotationally symmetric objects. Discrete geometry is the research field in mathematics to develop methods for discretizing smooth surfaces, whose application ranges from computer graphics <cit.> to quantum physics. To cite an example of the latter, the fundamental goal of completing the quantum theory of gravity in physics is to discretely approximate and quantize a smoothly curved surface called spacetime <cit.>. Thus, the development of methods in discrete geometry is long overdue. In this study, a sphere is approximated by a neural network function, as a first numerical experiment to bridge discrete geometry and machine learning and to explore possible visualization of trained functions. As we describe below, for the choice of the activation function as ReLU, polygons and polytopes are naturally generated. When we allow other activation functions, we obtain infinite families of generalization of polytopes, which we call neural polytopes. § METHOD A plane in the Euclidean d-dimensional space spanned by the coordinate (x_1, …, x_d) is parameterized as ∑_i a_i x_i = 1, where a_i (i=1, …, d) are real constant parameters.
Polyhedra are nothing but a generalization of this equation to a piecewise linear function. Note that the right-hand side of (<ref>) needs to be fixed to be the unity; otherwise, we have to require some affine quotient. Now, we notice that any feed-forward deep neural network with the ReLU activation function without the bias is exactly of the form of the left-hand side of (<ref>). Therefore, let us prepare a deep neural network architecture with N intermediate fully-connected layers with (n_1, n_2, …, n_N) units each, and with the ReLU activation function. The input layer consists of d units whose input is just the coordinate x⃗=(x_1, …, x_d). For simplicity, the output layer is taken to be a summation layer which sums the values of the n_N units at the last intermediate layer. We force all biases equal to be zero. See Fig. <ref> for the architecture. The preparation of the training data is straightforward; since we aim to approximate the (d-1)-sphere, we produce a set of random points on the (d-1)-sphere in the Cartesian coordinates, and produce the data set D of the form D≡{x⃗^(i)→ 1 |x⃗^(i)∈ S^d-1}. That is, the input is a random point on the S^d-1, and the output is the unity. The activation function φ(x) other than ReLU, φ(x) = | x |^p with a positive real constant p, gives geometrically symmetric neural network functions, as it respects the reflection symmetry x → -x. This paper focuses on the results with (<ref>) for a better symmetric approximation of spheres. Note that choosing p=2 results in the complete reproduction of the original sphere (circle), since the equation to define the sphere is of the symmetric quadratic functions, ∑_i (x_i)^2 = 1. For the training, we produce roughly 10000 random points on the sphere, and use the ADAM optimizer with batch size 1000. 10000 epochs are enough for the training, as in this study we focus on very small architecture to see the discreteness of the neural network functions.[We do not show the explicit values of the loss functions after the training, as our neural network architecture is quite small in size and the minimum of the loss function landscape is expected to be unique, in all of our examples, up to the trivial flat directions generated by the rotation.] After the training, we plot the cross section defined by f(x_i) = 1, where f(x_i) is the trained neural network function. We call this cross section “neural polytopes.” We name the produced polytopes as d-polytope of type (n_1, …, n_N; p_1, …, p_N), where d is the spatial dimension of the minimal Euclidean space in which the polytope is embedded (i.e. the number of input units), and (n_1, …, n_N) refers to the number of units in each of the N intermediate layers, and (p_1, …, p_N) is the power appearing in the activation function | x |^p_i at each layer. When p_1 = ⋯=p_N (=p), we just call it type (n_1, …, n_N; p). § RESULTS In this report, we concentrate on the symmetric activation function (<ref>) to find rather symmetric neural polytopes.[Numerical training of the neural networks was done using Mathematica. The plots are numerical solutions of (<ref>) in spherical coordinates.] This also serves to test the “rediscovery" of regular polygons and polyhedra for our choice p=1 in the activation function. §.§ Polygons, polyhedra and polytopes First, we report our results for p=1. In Fig. <ref>, we show the results of the neural polygons. This is the 2-polytopes of type (n;1). 
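A minimal training sketch of the setup described in the Method section is given below. It is an illustrative PyTorch implementation (the original numerical experiments were run in Mathematica); the layer sizes, learning rate, and squared-error loss are assumptions, while the bias-free architecture, the summation output, the random points on the sphere, the Adam optimizer, the batch size of 1000, and the 10000 epochs follow the description above.

```python
import torch
import torch.nn as nn

d, widths, p = 3, [6], 1.0   # a 3-polytope of type (6; 1); illustrative choice

class PowerAct(nn.Module):
    """Activation |x|^p, cf. Eq. (4); p = 1 gives a symmetric piecewise-linear map."""
    def __init__(self, p): super().__init__(); self.p = p
    def forward(self, x): return x.abs().pow(self.p)

layers, n_in = [], d
for n in widths:
    layers += [nn.Linear(n_in, n, bias=False), PowerAct(p)]   # all biases forced to zero
    n_in = n
net = nn.Sequential(*layers)

def f(x):
    return net(x).sum(dim=1)          # summation output layer

# Training data: random points on S^{d-1}, target value 1.
x = torch.randn(10000, d)
x = x / x.norm(dim=1, keepdim=True)
y = torch.ones(10000)

opt = torch.optim.Adam(net.parameters(), lr=1e-2)   # learning rate is an assumption
for epoch in range(10000):
    for i in range(0, 10000, 1000):                 # batch size 1000
        loss = ((f(x[i:i + 1000]) - y[i:i + 1000]) ** 2).mean()  # loss choice assumed
        opt.zero_grad(); loss.backward(); opt.step()

# The neural polytope is the level set {x : f(x) = 1}, which can be plotted by
# solving f = 1 along rays from the origin in spherical coordinates.
```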
It is amusing to find that 2n-sided regular polygons known for thousands of years are reproduced beautifully, thus establishing the map between the neural network architecture and the regular polygons. Next, we list the results of neural polyhedra. These are the neural 3-polytopes of type (n;1), see Fig. <ref>. The type (3;1) is the octahedron, which is one of the regular polyhedra. The type (4;1) is the cuboctahedron, and the type (6;1) is the icosidodecahedron, both of which are of the Archimedean solids. It is again amusing that various truncated polyhedra show up naturally. Deeper neural network approximates the sphere more efficiently, and in Fig. <ref> we list the results of neural 3-polytope of type (n,2;1). Compared with type (n;1) with sharing the same n, we find that deeper network approximate the sphere better. In Fig. <ref>, we provide a higher-dimensional example: a neural 5-polytope of type (8;1). The plot is various 3-dimensional slices of the 5-polytope. §.§ Neural polytopes Next, let us turn to the case p≠ 1. Interestingly, neural polytopes with p≠ 1 are spiky (p<1) or round (1<p<∞) generalizations of the ordinary polytopes (p=1). In the limit p →∞, the neural polytopes become ordinary polytopes different from those for p=1. At p=2, the neural polytopes are spheres. First, in Fig. <ref>, we show neural 2-polytopes (neural polygons) of type (2;1) with p=0.8, 1.0, 1.2, 1.5, 2.0, 3.0, 5.0 and 10.0. At p=1.0 the neural polygon is a square. Increasing p makes the edge vertex rounded. At p=2 the neural polygon is a circle. Then, at p=∞ the neural polygon becomes again a square. The edge shape of the neural polygons is actually identical to the shape of the activation function |x|^p. Neural 3-polytopes (neural polyhedra) are plotted in Figs. <ref>, <ref>, and <ref>. The behavior in change of the value of p is quite similar to that of the neural polygons. For the neural polyhedra of type (3;p), the p=1 neural polyhedron is an octahedron, while p=∞ is a cube. Interestingly, these are dual polyhedra. So the p=1 and p=∞ neural polyhedra provide a well-known duality among polytopes. This phenomenon is again natural in the sense that the activation function | x | produces a kink at x=0, while | x |^∞ is flat around x=0 and has kinky divergence at x=± 1. The neural polyhedra of type (3;p) interpolates a polyhedron and a dual polyhedron, in between which a sphere appears. This duality also applies to the neural polygons of type (2;p).[The neural polyhedra of type (4;p) or (5;p) shown in Fig. <ref> or <ref> do not exhibit the precise duality, although close to it (the resultant discrete symmetries seem to be kept for generic values of p).] § SUMMARY Polygons, polyhedra, and polytopes — the fundamental objects in geometry — were found to be a particular type of neural networks with linear activations and no bias. This amounts to visualization of an approximation of a sphere by neural network functions. We introduced neural polytopes, which are the natural consequence of generic activations. They round off edges of the ordinary polytopes, and include automatic interpolation of dual polytopes. The neural polytopes open the possibility of bridging discrete geometry and machine learning, and of even generalizing the discrete geometry, for geometric engineering of the nature. § RELATED WORK The geometric interpretation of ReLU networks as linearly segmented regions was studied in <cit.>. 
The expressibility/complexity analysis of deep neural networks with piecewise linear activations is found in <cit.>, where the notion of partitioned regions are used. The number of faces of our polytopes can be estimated in the large network approximation. Polyhedral theory was used <cit.> for the complexity analysis. The reverse engineering of ReLU networks was studied from the geometric viewpoint in <cit.>. The polytope interpretation of the semanticity of neural networks was explored recently in <cit.>. Our work focuses on the geometric and symmetric features of visualized neural network functions and the connection to discrete geometry. Supervised learning of features of lattice polytopes was performed in <cit.> and a genetic algorithm was used to generate lattice reflexive polytopes <cit.> for the analysis of Calabi-Yau manifolds. § BROADER IMPACT Polytopes are the fundamental objects in discrete geometry, whose applications range from computer graphics to engineering and physics. Our finding bridges the discrete geometry with neural networks directly, with which we expect automatic generation of discrete geometries from point data of natural surfaces, together with a deeper mathematical understanding of neural network functions. This work unites philosophically the distinct two ideas of the human visual recognition — the polytopes to model the nature by the ancient Greek mathematics of Archimedes, and the neural network functions to model any shape in the nature. This occurrence could be called a societal Poincaré recurrence. § ACKNOWLEDGEMENTS K. H. is indebted to Koji Miyazaki for valuable discussions, education and support. The work of K. H. was supported in part by JSPS KAKENHI Grant Nos. JP22H01217, JP22H05111, and JP22H05115. The work of T. N. was supported in part by JSPS KAKENHI Grant Nos. JP22K20372, JP23H04526, JP23H01845, and JP23K03426. The work of H. N. was supported in part by JSPS KAKENHI Grant Nos. JP19K03488 and JP23H01072. synsml2023
http://arxiv.org/abs/2307.02115v1
20230705084303
Selective inference for fMRI cluster-wise analysis, issues, and recommendations for template selection: A comment on Blain et al
[ "Angela Andreella", "Anna Vesely", "Weeda Wouter", "Jesse Hemerik", "Jelle Goeman" ]
stat.AP
[ "stat.AP" ]
Two permutation-based methods for simultaneous inference have recently been published on the proportion of active voxels in cluster-wise brain imaging analysis: Notip <cit.> and pARI <cit.>. Both rely on the definition of a critical vector of ordered p-values, chosen from a family of candidate vectors, but differ in how the family is defined: computed from randomization of external data for Notip, and chosen a priori for pARI. These procedures were compared to other proposals in the literature but, due to the parallel publication process, an extensive comparison between the two is missing. We provide such a comparison, highlighting that pARI can outperform Notip if the settings are selected appropriately. However, each method carries different advantages and drawbacks. In recent years, post hoc inference on the True Discovery Proportion (TDP) has become a popular inferential method with applications in several fields. In neuroimaging, procedures for post hoc TDP inference provide lower confidence bounds on the proportion of active voxels within clusters, simultaneously over all possible clusters. As simultaneity makes the confidence bounds valid even under post hoc selection, such methods allow addressing the well-known spatial specificity paradox in cluster-wise analysis. We discuss the procedure proposed in the recently published paper titled "Notip: Non-parametric true discovery proportion control for brain imaging" by <cit.>. The method (Notip) is compelling and thought-provoking; however, an alternative method (pARI) has been developed simultaneously with Notip <cit.>, and, due to the parallel publication process, a proper comparison between Notip and pARI is lacking in both papers. In this note, we add a first comparison. Most procedures for post hoc TDP inference construct the confidence bounds using a vector of ordered p-values, which is compared to a critical vector (template). The choice of the template is critical, as it determines the power of the inference; several proposals have been made in the literature using either parametric or permutation-based approaches. Both Notip and pARI rely on permutations: they start by defining a collection of candidate templates, then select the optimal template from this family (calibration) using randomization of the data. This process makes it possible to incorporate the unknown spatial correlation structure of voxels in the choice of the critical vector and so to gain power compared to parametric methods. The main difference between the two considered methods is in the definition of the family of candidate templates. In the standard Notip approach, the family is calculated using randomization of external data, which can be either additional subjects from the same experiment or even data from a different experiment.
A variant of the method uses the same data both for the family and for inference, but this variant lacks mathematical proof of validity. In contrast, pARI requires choosing the family a priori. The family is characterized by the shape of the templates, which can be derived from different methods such as the Simes inequality and higher criticism. For neuroimaging data, <cit.> recommend choosing a Simes-based family of the form l_i(λ_α) = (i - δ) λ_α / (m - δ) (i=1,…,m), where λ_α∈ℝ^+ is the calibration parameter that indexes the family, δ∈{0,1,…,m-1} is the shift parameter and m is the total number of voxels. Regarding the shift, they suggest choosing δ=1 in the most general case; however, when interest lies in large clusters, a higher value is advised (e.g., δ=27 was used in neuroimaging data analysis). Therefore, the shift parameter strongly influences the power of the method and can be interpreted as the minimum cluster size that researchers are interested in detecting. The case for the use of the shift parameter when computing confidence bounds for the TDP was also argued by <cit.> and <cit.>. Notip and pARI were individually compared with two other procedures that make post hoc inference on the TDP, both using templates based on the Simes inequality: All-Resolutions Inference (ARI) <cit.> and calibrated Simes (sansSouci) <cit.>. The first is a parametric method, while the second is based on random permutations like Notip and pARI. Notip and pARI proved to be more powerful than the other methods when applied to real and simulated data. Moreover, pARI is presented in two ways: a fast single-step version, similar to sansSouci, and an iterative version that uniformly improves both the single-step version and sansSouci. <cit.> claim that a comparison with pARI is immediate, as the corresponding bounds are equivalent to those obtained with sansSouci. However, the latter is analogous only to the single-step version of pARI and, most importantly, does not account for the shift. Indeed, the corresponding family in sansSouci can be obtained from (<ref>) by fixing δ=0. Therefore, Notip was only compared with the unshifted version of pARI but not with the recommended setting. Since interest in both the real and the synthetic data considered by <cit.> was in large clusters, a higher value of δ can be expected to be more powerful for pARI, as suggested by <cit.>. We compare Notip and pARI with different values of δ on the same real data used in <cit.>, which can be obtained from http://neurovault.org/collections/1952 and pre-processed using the Python code made available by the Authors at <https://github.com/alexblnn/Notip>. The following table shows results from Table 2 in <cit.>, to which we added results for single-step Simes-based pARI with δ∈{0,1,3,9,27}. The analysis was carried out using the corresponding package from <https://CRAN.R-project.org/package=pARI>. This preliminary study shows that the addition of the shift parameter significantly increases the power of the method. There is already an evident gain with δ=1, for which results are competitive with Notip. For higher values of the parameter, pARI is more powerful than Notip in all clusters. Finally, the results appear to be fairly robust against the choice of δ, as long as a positive value is considered. We can conclude that the shifted version of Simes-based pARI performs remarkably well and, in most cases, surpasses the Notip approach, as long as the shift parameter avoids excessively small values.
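To make the role of the shift concrete, the following minimal sketch computes the Simes-based critical vector above for several candidate shifts; it is an illustration only (the toy values of m and λ_α are assumptions), not the pARI package implementation.

```python
import numpy as np

def simes_template(lambda_alpha: float, m: int, delta: int) -> np.ndarray:
    """Shifted Simes-based critical vector: l_i = (i - delta) * lambda_alpha / (m - delta)."""
    i = np.arange(1, m + 1)
    return (i - delta) * lambda_alpha / (m - delta)

# Toy example: m voxels and a hypothetical calibrated lambda_alpha (assumed values).
m, lambda_alpha = 1000, 0.05
for delta in (0, 1, 3, 9, 27):
    l = simes_template(lambda_alpha, m, delta)
    # A larger delta lowers the first entries of the template and raises the later
    # ones, which favours inference on larger clusters, as discussed in the text.
    print(delta, l[:3].round(6), l[-1].round(6))
```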
This observation aligns with the recommendations of <cit.>, emphasizing the importance of choosing an appropriate template (and shift value) for gaining power. In conclusion, Notip and pARI can be seen as complementary methods in the context of fMRI data analysis. pARI has the potential to outperform Notip if the shift parameter δ is appropriately selected. Although the results remain reasonably robust to the choice of the shift, the presence of an additional parameter can be considered a drawback, especially since it has to be chosen before seeing the data. On the other hand, Notip does not involve the setting of this additional parameter but faces the drawback of either requiring external data that are sufficiently comparable, or sacrificing a portion of the data to learn the template and compromising power. Moreover, the double rounds of randomization in Notip make the procedure more computationally expensive. Further analyses are needed to study the power properties of each method and thus determine which one should be preferred in different settings. § ACKNOWLEDGEMENTS Angela Andreella gratefully acknowledges financial support by Ca’ Foscari University of Venice via Grant No. PON 2014-2020/DM 1062. Anna Vesely acknowledges financial support by the Deutsche Forschungsgemeinschaft (DFG) via Grant No. DI 1723/5-3.
http://arxiv.org/abs/2307.02752v2
20230706032219
Offline Reinforcement Learning with Imbalanced Datasets
[ "Li Jiang", "Sijie Chen", "Jielin Qiu", "Haoran Xu", "Wai Kin Chan", "Zhao Ding" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Offline Reinforcement Learning with Imbalanced Datasets
Li Jiang (Shenzhen International Graduate School, Tsinghua University), Sijie Cheng (Department of Computer Science and Technology, Tsinghua University; Institute for AI Industry Research, Tsinghua University), Jielin Qiu (Carnegie Mellon University), Haoran Xu (Institute for AI Industry Research, Tsinghua University), Wai Kin Chan (Shenzhen International Graduate School, Tsinghua University), Ding Zhao (Carnegie Mellon University). Correspondence to: Li Jiang <[email protected]>.
Keywords: Machine Learning, ICML, Deep Reinforcement Learning, Imitation Learning, Batch Reinforcement Learning, Off-Policy
The prevalent use of benchmarks in current offline reinforcement learning (RL) research has led to a neglect of the imbalance of real-world dataset distributions in the development of models. The real-world offline RL dataset is often imbalanced over the state space due to the challenge of exploration or safety considerations. In this paper, we specify properties of imbalanced datasets in offline RL, where the state coverage follows a power law distribution characterized by skewed policies. Theoretically and empirically, we show that typical offline RL methods based on distributional constraints, such as conservative Q-learning (CQL), are ineffective in extracting policies under the imbalanced dataset. Inspired by natural intelligence, we propose a novel offline RL method that utilizes the augmentation of CQL with a retrieval process to recall past related experiences, effectively alleviating the challenges posed by imbalanced datasets. We evaluate our method on several tasks in the context of imbalanced datasets with varying levels of imbalance, utilizing the variant of D4RL. Empirical results demonstrate the superiority of our method over other baselines. § INTRODUCTION Offline reinforcement learning (RL), also known as batch RL <cit.>, holds great promise in learning high-quality policies from previously logged datasets, without further interaction with the environment to collect trajectories that may be dangerous or expensive <cit.>. Such promise makes real-world RL more realistic and enables better generalization abilities by incorporating diverse previous experiences <cit.>. To mitigate the fundamental challenge of offline RL, i.e., distributional shift <cit.>, most current studies generally enforce pessimism on policy updates <cit.> or value updates <cit.>. The pessimism explicitly or implicitly constrains the learned policy to the behavior policy and has been shown to be effective on standard benchmarks in <cit.>. However, it should be noted that while real-world distributions are rarely uniform, the datasets used in these benchmarks often feature near-uniform state coverage and near-balanced policies. While imbalanced datasets have been widely studied in supervised learning <cit.>, they remain under-explored in the offline RL community, even though the static dataset is the only source from which to extract policies. Real-world datasets in offline RL are often imbalanced, mainly due to the challenge of exploration or safety considerations, and consist of skewed policies: their state coverage is imbalanced and follows Zipf's law <cit.> across the entire state space (see <Ref>). The states in the dataset, from sufficient to insufficient coverage, are characterized by policies varying from mixture to expert ones. As an example, for safety considerations in industrial control systems, certain adjustments to key parameters may be rare but near-optimal, while other trivial adjustments may be more common but inferior.
Our study shows that most of the existing offline RL algorithms perform poorly when faced with imbalanced datasets. We then provide a provable explanation for this failure, which unfolds in two dimensions. Firstly, state-agnostic pessimism in most current approaches fails to guarantee policy improvement. Secondly, uniform transition sampling during the course of training may lead to large temporal-difference (TD) errors in off-policy evaluation for states with poor coverage due to inefficient sampling. Ironically, our empirical investigation exposes that re-sampling methods like Prioritized Experience Replay (PER, as outlined in <cit.>) may lead to worse performance on imbalanced offline RL datasets, serving as an amplifier of the distributional shift problem. Motivated by natural intelligence, which avoids forgetting rare phenomena by recalling related information from past experiences <cit.>, we introduce a new offline RL algorithm that overcomes those limitations by augmenting standard offline RL algorithms, e.g., CQL <cit.>, with a retrieval process. Our proposed method, retrieval-based CQL (RB-CQL), enables agents to effectively utilize related experiences through nearest-neighbor matching from diverse and large-scale datasets, and to directly inform their actions, particularly for states with poor coverage. Our algorithm is evaluated on a series of tasks with imbalanced datasets in the variant of D4RL <cit.> and shows competitive performance compared with state-of-the-art offline RL methods. Note that RB-CQL represents a straightforward approach to alleviate the challenges of offline RL in the context of real-world imbalanced datasets. We hope this work will inspire future research on real-world RL systems utilizing practical datasets, and serve as a foundation for large-scale offline RL. § PRELIMINARIES The RL problem is typically characterized by a Markov Decision Process (MDP) <cit.>, which is specified by a tuple ℳ=⟨𝒮, 𝒜, P, r, ρ, γ⟩ consisting of a state space 𝒮, an action space 𝒜, a transition probability function P(s'|s,a), a reward function r(s,a) ∈ℝ, an initial state distribution ρ, and the discount factor γ∈ [0, 1). The objective of RL is to extract a policy π: 𝒮→𝒜 that maximizes the expected sum of discounted rewards, known as the return J(π)= 𝔼_π[∑_t=0^∞γ^t r(s_t, a_t)]. Based on approximate dynamic programming, off-policy RL methods typically learn a state-action value function (Q-function), characterized as Q^π(s, a): 𝒮×𝒜→ℝ with policy π, i.e., the discounted return when the trajectory starts with (s, a) and all remaining actions are taken via π. For a given policy π, the Q-function can be obtained through the Bellman operator 𝒯^π: ℝ^𝒮×𝒜→ℝ^𝒮×𝒜, defined as: (𝒯^π Q)(s, a) := r(s, a) + γ 𝔼_s'∼ P(·| s, a), a'∼π(·| s')[Q(s', a')], where Q^π(s, a) is the unique fixed point of the contractive Bellman operator for γ∈ [0, 1) <cit.>. The optimal Q-function is obtained via Q^* = max_π Q^π(s,a), which aligns with an optimal policy achieved through greedy action choices. Another important concept is the notion of discounted state occupancy, d_π∈Δ(𝒮), defined as d_π(s):=(1-γ) 𝔼_π[∑_t=0^∞γ^t 1[s_t=s]], which characterizes a distribution over the states visited by the policy. We narrow our focus to offline RL settings, which aim to learn an optimal policy from a pre-collected dataset 𝒟, generated by some unknown behavior policy β.
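As a concrete illustration of the quantities defined above, the following sketch evaluates a fixed policy on a small tabular MDP by iterating the Bellman operator 𝒯^π; the toy transition probabilities, rewards, and policy are assumptions made purely for illustration.

```python
import numpy as np

def policy_evaluation(P, R, pi, gamma=0.99, iters=500):
    """Iterate the Bellman operator T^pi to estimate Q^pi on a tabular MDP.

    P: (S, A, S) transition probabilities, R: (S, A) rewards, pi: (S, A) policy.
    """
    S, A = R.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = (pi * Q).sum(axis=1)      # V(s') = E_{a'~pi}[Q(s', a')]
        Q = R + gamma * P @ V         # (T^pi Q)(s, a) = r(s, a) + gamma * E_{s'}[V(s')]
    return Q

# Toy 2-state, 2-action MDP; the numbers are chosen arbitrarily for the example.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[0.0, 1.0], [0.5, 0.0]])
pi = np.full((2, 2), 0.5)             # uniform policy over both actions
print(policy_evaluation(P, R, pi))
```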
The most critical problem in offline RL is that the training policy π may generate out-of-distribution actions a' that cause erroneous estimates of Q(s', a') which propagate through the Bellman update, often leading to catastrophic failure of policy learning. Most recent offline RL methods alleviate the above issue by introducing pessimism either on policy learning <cit.> or value learning <cit.>. Most of those methods can be cast, either explicitly or implicitly, into a generic formulation of distribution-constrained offline RL algorithms <cit.>. The goal is to maximize the return of the learned policy π while minimizing the divergence D(π, β) weighted by a hyperparameter α: max_π 𝔼_s ∼ d^π[J(π)-α D(π, β)(s)]. Conservative Q-learning. <cit.> enforces pessimism on the value update, based on the actor-critic framework, which can be written as: min_θ α (𝔼_s ∼𝒟, a ∼ρ[Q_θ (s, a)] - 𝔼_s ∼𝒟, a ∼β[Q_θ (s, a)]) + 1/2 𝔼_(s, a, s') ∼𝒟[(Q_θ(s, a) - 𝒯^π Q̅_θ'(s,a))^2]. The second term is the standard Bellman update error <cit.>, and Q̅_θ' represents the target Q-function with parameters θ', which is updated toward the current network with parameters θ by the exponential moving average θ' ← τθ + (1-τ)θ' for some small τ. Instead of imposing pessimism on policy learning by choosing in-distribution actions, CQL enforces pessimism on value learning (the first term) by minimizing the Q-values under a specific policy distribution ρ, which is characterized by high Q-values, and balancing this by maximizing the Q-values of actions from the behavior policy β. Note that CQL implicitly constrains the learned policy π to the behavior policy β (see Eq. (13) in <cit.>). The hyperparameter α controls the degree of pessimism. A high value of α corresponds to a high level of pessimism, indicating that the optimal policy is highly similar to the behavior policy β, while a low value of α corresponds to a low level of pessimism. The pessimism level, i.e., α, is a constant for every state-action pair, and thus the value function learned by CQL is changed to the same extent at all possible state-action pairs. However, the consistent divergence between the learned policy and the behavior policy over all possible state-action pairs may ultimately lead to a decline in the performance of the learned policy and even result in failure on imbalanced datasets. We illustrate these challenges by providing a motivating example and practical experiences in the next section. § CURRENT OFFLINE RL METHODS FAIL IN THE IMBALANCED DATASET An imbalanced dataset over the state space in offline RL can be described as one in which the state visitation of the behavior policy d_β(s) is imbalanced. The state visitation of the behavior policy d_β(s) follows a power law distribution <cit.> with exponent η∈ [0, ∞]. The probability distribution for a random variable X is: p(X=x)=1/Z·1/x^η, where Z is a normalizing constant and η determines the degree of imbalance in the imbalanced offline RL dataset. A higher value of η exacerbates the imbalance in the state space and renders learning from rare experiences more challenging. Furthermore, rare experiences are important, as in many cases they are derived from expert policies. The significant decline in performance observed in current offline RL algorithms can mainly be attributed to two factors: State-agnostic. Most current offline RL algorithms are state-agnostic and enforce consistent pessimism for all state-action pairs in the datasets.
In the context of an imbalanced dataset, it is crucial for the learned policy to remain similar to the behavior policy, β, in states with insufficient coverage d_β^-(s), as these states typically consist of expert demonstrations. Conversely, in states with sufficient coverage, it is necessary for the learned policy to deviate from β in order to stitch potential useful trajectories, as these states often consist of a mixture of demonstrations. Uniformly sampling. In deep Q-learning offline RL methods, transitions are uniformly sampled from the dataset, contributing to Bellman updates errors with respect to the probability of transitions in the dataset: ≈𝔼_(s, a, s^') ∼𝒟[(Q_θ(s, a)-𝒯^πQ̅_θ^'(s, a))^2]. During the course of training, those suboptimal transitions from d_β^+(s) dominate over those near-optimal transitions from d_β^-(s). It will cause inefficient sampling to near-optimal but rare transitions from d_β^-(s), which may lead to inferior estimation over Q-functions, especially for tasks with sparse reward requiring enormous exploration. In the following sections, we introduce a navigation task and a practical experience on <cit.> with a varying imbalance to demonstrate how the current state-of-the-art offline RL algorithm, i.e. CQL, fails in imbalanced datasets. We then theoretically reveal the failure of current offline RL algorithms under imbalanced datasets. Experiments details can be found in <Ref>. §.§ State-Agnostic Pessimism We first use a didactic navigation task in <Ref> to present the failure of CQL with state-agnostic pessimism. This task exemplifies the challenge of exploration with sparse reward and long horizon. The goal of this task is to find a path from the initial state (yellow grid in the first room) to the target state (red grid in the last room) in the grid space, where each room is only connected by one grid with its adjacent rooms. The state and action space are discrete, where the state presents the current location of the agent via the (x, y) coordinate and the action space is 4 corresponding to four orthogonal nearby grids. The agent will receive 10 rewards for reaching the goal state and 0 for all other actions at any feasible state. The dataset is generated by a goal-reaching controller from the start to the goal state and the agent executes the correct action with increasing probabilities from 10 % probability to 100 %. The dataset is characterized by an imbalanced distribution of state coverage across the entire state space. Specifically, there is a high degree of coverage in the first room, which is accompanied by suboptimal actions, whereas there is insufficient coverage in the last room, where near-optimal policies are present. The color bar is limited to 30 to better highlight the differences among rare experiences in <Ref> (left). It should be noted that the samples of some states (e.g., those in the first room) reaching the limit are significantly larger than 30 with a high likelihood. We illustrate the action choices of two policies trained with CQL, one with low pessimism (α=5) and the other with high pessimism (α=20), over the entire feasible state space in <Ref> (center) and (right), respectively. One feasible state without actions represents that Q-functions are the same in the whole action space, which can be attributed to the missing data in the original dataset. The grid color (more saturated to yellow means a higher value) denotes the value function, which is obtained by taking the expectation of all possible actions. 
Given a low level of pessimism (i.e., α=5), shown in <Ref> (center), the learned policy is allowed to deviate heavily from the behavior policy, encouraging the agent to stitch suboptimal trajectories and hence successfully escape the first room. However, low pessimism leads to a poorer estimation of Q-functions in states with insufficient coverage and to failure in the last room. This can be attributed to the high distribution mismatch induced by low pessimism, where insufficient coverage serves as an amplifier of the inaccurate estimation of Q-functions. On the other hand, in order to succeed in the last room, high pessimism is required to significantly constrain the learned policy to the behavior policy. However, the learned policy then fails in the first room (suboptimal trajectories) because of the high similarity to the behavior policy, as shown in <Ref> (right). In order to finish this task, state-specific pessimism should be imposed on the policy: low pessimism for states with sufficient coverage (e.g., states from the first room) and high pessimism for states with insufficient coverage (e.g., states from the last room). Note that lower pessimism is highly risky with respect to out-of-distribution states at test time, resulting in incorrect Q-functions over those states because they are missing from the training dataset. §.§ Uniformly Sampling To illustrate the failure of current offline RL algorithms, we then train CQL on the imbalanced dataset in AntMaze medium, which is a more sophisticated task with continuous state and action spaces. Similar to the dataset above, the expert trajectories reaching the target location in AntMaze datasets are rare and the state coverage is imbalanced. The imbalance in the dataset increases with higher η and the performance significantly decreases (shown in <Ref> (left)). Besides the state-agnostic training process, we hypothesize that uniform sampling also worsens the performance because of inefficient training on rare expert experiences from d_β^-(s), failing to properly estimate the Q-functions. To validate our hypothesis, we track 500 state-action pairs near the target location (i.e., d_β^-(s)) and the start location (i.e., d_β^+(s)), respectively. In <Ref> (center), we graph the mean of the TD errors and find that TD errors of states from d_β^-(s) are larger than those of states from d_β^+(s) by around 5. Can we draw techniques from common approaches to the long-tail distribution problem in supervised learning? Prioritized Experience Replay (PER, as proposed in <cit.>) is a method that modifies the sampling probability of transitions based on their temporal difference (TD) errors. Specifically, it over-samples transitions with high TD errors, potentially adjusting the non-uniformity of the dataset. However, as demonstrated in <Ref> (right), re-weighting the loss in <Ref> based on TD errors can result in a decrease in overall performance. We posit that the re-sampling process alters the distribution of d_β(s), which dynamically shifts the behavior policy β with respect to TD errors, and results in an additional divergence between the true β and the modified β. As a result, an extra distributional shift problem is introduced into the whole training process and yields further incorrect estimations of the Q-functions. On the other hand, those techniques are useful in online one-step Q-learning RL methods because the learning policy can collect new experiences and thus avoids the distributional shift problem.
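The following minimal sketch illustrates the two ingredients discussed above on synthetic numbers: state coverage drawn from a power-law (Zipf-like) distribution with exponent η, and PER-style sampling probabilities derived from per-transition TD errors; all array shapes and constants are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def zipf_coverage(num_states: int, eta: float) -> np.ndarray:
    """Power-law state coverage: state i is visited proportionally to 1 / i**eta."""
    weights = 1.0 / np.arange(1, num_states + 1) ** eta
    return weights / weights.sum()

coverage = zipf_coverage(num_states=1000, eta=2.0)
print("mass on the 10 most-covered states:", coverage[:10].sum())

# PER-style priorities: transitions with large TD error are over-sampled.
td_errors = rng.exponential(scale=1.0, size=10_000)   # placeholder TD errors
alpha_per = 0.6                                       # prioritization exponent (assumed value)
priorities = np.abs(td_errors) ** alpha_per
sampling_probs = priorities / priorities.sum()
batch_idx = rng.choice(len(td_errors), size=256, p=sampling_probs)
# Note: this re-weighting changes the effective data distribution, which corresponds
# to the extra divergence from the true behavior policy discussed in the text.
```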
§.§ Theoretical Analysis So far we have empirically seen the continuous performance drop with the increasing imbalance of the dataset. Now we formally characterize why current pessimism, without consideration of the dataset coverage, fails to guarantee policy improvement on the imbalanced dataset. In contrast to the common analysis in previous approaches via the concentrability coefficient <cit.>, which upper bounds the ratio of the state-action visitation of a specific policy d_π(s, a) to that of the behavior policy in the dataset d_β(s), i.e., max_s, a d_π(s, a) / d_β(s) ≤ C^π, we use a different metric, a variant of differential concentrability <cit.>, aiming to measure the imbalance of dataset coverage (i.e., the discrepancy of state visitation frequency between adequate and inadequate dataset coverage over the whole state space). Given a divergence D over the action space, the differential concentrability C_diff^π of a given policy π with respect to the behavior policy β is given by: C_diff^π := 𝔼_s_1 ∼ d_β^+(s), s_2 ∼ d_β^-(s)[(√(D(π, β)(s_1)/d_β(s_1)) - √(D(π, β)(s_2)/d_β(s_2)))^2]. <Ref> measures the imbalance in the dataset under the current state-agnostic methods; it is characterized by the discrepancy between a given policy π(a|s) and the behavior policy β(a|s), weighted inversely by the density of states in the denominator, for states with sufficient and insufficient coverage (i.e., d_β^+(s) and d_β^-(s)), respectively. Instead of considering a simpler scenario where d_β(s) = Unif(s) (i.e., d_β(s_1)=d_β(s_2)) <cit.>, we consider a more realistic scenario where d_β(s_1)>d_β(s_2), and the imbalance of the dataset is exacerbated by increasing the value of η in <Ref>. This allows us to examine the effect of imbalanced datasets on the performance of our method. Under the imbalanced dataset, if the policy divergence D with respect to any given policy π is state-agnostic, whether the constraints are loose or tight, C_diff^π would be large because the denominators differ significantly (i.e., d_β(s_1) is larger than d_β(s_2)). If, on the other hand, we allow D to be state-specific with respect to the state coverage (i.e., loose constraints for states from d_β^+(s) and tight constraints for states from d_β^-(s)), such that D(π, β)(s_1) is large and D(π, β)(s_2) is small, then C_diff^π would be relatively low, as the numerator, the policy divergence D(π, β)(s), increases monotonically with the denominator, the coverage of the dataset d_β(s). A small C_diff^π indicates that the policy π stays close to the behavior policy β on d_β^-(s) while deviating significantly on d_β^+(s), consistent with our didactic example and the conclusion in <Ref>. Given the definition of differential concentrability under imbalanced state coverage, we follow <cit.> and use it to bound the policy improvement of π w.r.t. β in the safe policy improvement framework <cit.>. We then show that, under the distributional-constraint methods in <Ref>, a large C_diff^π makes it hard to guarantee policy improvement with a large margin: With high probability ≥ 1-δ, for any prescribed level of safety ζ, the maximum possible policy improvement over choices of α satisfies J(π)-J(β) ≤ ζ^+, where ζ^+ is given by: ζ^+ := max_α h^*(α) · 1/(1-γ)^2 s.t. c_1/(1-γ)^2 · √(C_diff^π_α/|𝒟|) - α/(1-γ) · 𝔼_s ∼ d_π(s)[D(π, β)(s)] ≤ ζ, where h^* is a monotonically decreasing function of α, and h(0)=𝒪(1). <Ref> expresses the essential trade-off between distributional constraints and safe policy improvement. The LHS of the constraint in <Ref> is a monotonically increasing function of C_diff^π.
When the value of C_diff ^π is low, a relatively small α can lead to a significant improvement in performance, as measured by h^*(α) in <Ref>, while also ensuring compliance with the ζ-safety condition. However, when the dataset is exposed to imbalance (i.e., C_diff^π is large) in the current state-agnostic constrains framework, satisfying the ζ-safety condition requires a larger α, which obtains smaller maximum possible improvement h^*(α). When the degree of imbalance η in <Ref> approaches extremes, the improvement h^*(α) over the learned policy π is close to zero with high likelihood, i.e., the learned policy fails to improve over the behavior policy β. We remark that smaller C_diff ^π is essential for the ζ-safety condition and hence a smaller α for the improvement h^*(α) with a large margin. The proofs of <Ref> can be found in <cit.>. § RETRIEVAL AUGMENTED OFFLINE RL As discussed in <Ref> with empirical and theoretical analyses, current state-of-the-art offline RL algorithms can hardly solve imbalanced offline RL datasets. In Natural Language Processing (NLP) and computer vision (CV) communities, there is a line of works <cit.> that leverages external information or knowledge to alleviate the heavy-tail or long-tail problems. Actually, our designed imbalanced dataset in offline RL is similar to these studies, which are following Zipf's law. As a preliminary study, we try to introduce non-parametric retrieval-augmented methods to enhance standard offline RL algorithms by combining retrieved relevant states from external resources. Auxiliary dataset preparation. Firstly, we prepare external resources as our auxiliary dataset 𝒟_aux for the retrieval process. The auxiliary dataset contains a large body of knowledge or experience, for example, trajectories from previous experiences of the current learned policy or other agents <cit.>. In our setups, we directly adopt the state information in the same and relevant environments as auxiliary dataset 𝒟_aux. Retrieval process. Secondly, we retrieve the nearest states with useful information from auxiliary dataset 𝒟_aux. Formally, given the query state s_ori^i∈𝒟 and the external state s_aux^i∈𝒟_aux from the auxiliary dataset, we encode all these states to representations as static indexes. For high dimensional states, the representations of both query state and external states can be denoted as Enc(s_ori^i) and Enc(s_aux^i). To mitigate additional computation burden and eliminate effects from other architecture, we evaluate our method on Mujoco <cit.> tasks with low dimensional states. Then, after building the indexes, we compute the similarity score between these indexes based on Maximum Inner Product Search (MIPS) or other distance functions, such as Euclidean distance. Taking the dense inner product as an example, the detailed similarity score can be computed as follows: Sim(s_ori^i, s_aux^i) = exp(s_ori^i· s_aux^i)/∑_j=1^|𝒟_aux|exp(s_ori^i· s_aux^j) According to the similarity scores Sim(s_ori^i, s_aux^i) between the query state s_ori^i and external states s_aux^i, our retrieval process selects the top-k nearest retrieved states {s_ret^1, s_ret^2, ⋯, s_ret^k} to augment our agent, which will further introduce in the next part. Following <cit.>, we use k=10 and adopt the architecture <cit.> as the implementation to efficiently complete the approximate vector similarity search. Retrieval-augmented agent. 
Finally, we hope to leverage these retrieved states {s_ret^1, s_ret^2, ⋯, s_ret^k} for our agent to alleviate the heavy-tail problem reflected in our exploration-challenging dataset. Given the query state s_ori^i, our agent further exploits the retrieved states by a straightforward and non-parametric approach. Considering that directly concatenating all retrieved states to the query state would lose the main information of the query state, we concatenate the original state s_ori^i with the averaged retrieved states to produce the final state s_final: s_final=[s_ori^i ⊕ ∑^k_j=1 s_ret^j / k], where ⊕ is the concatenation operator. We then feed the final state s_final to the policy and value networks, and adopt a popular value-based offline RL method, e.g., CQL, for training and evaluation. It is worth noting that the only difference between the offline RL method and our retrieval-augmented offline RL method is that we introduce additional state information via s_final instead of only the state from the current transition s_ori^i. We summarize the pseudo-code of RB-CQL in <Ref>. § EXPERIMENTS Imbalanced datasets. In order to evaluate the effectiveness of Retrieval-based CQL (RB-CQL), we first establish four new tasks from two tasks with imbalanced datasets in the variant of the standard benchmark D4RL <cit.>. Recall from <Ref> that the distribution of imbalanced datasets follows Zipf's law, and the magnitude of state coverage is negatively related to the quality of the policies (i.e., sufficient to insufficient coverage is characterized by mixture and expert policies, respectively). For AntMaze tasks, we use a goal-reaching controller with a fixed start distribution (same as the evaluation process) and a goal distribution varying over the whole state space, where only rare goal distributions are identical to the evaluation process. This means the dataset is marked by imbalanced state coverage, where only rare expert trajectories reach the goal state used in evaluation. For Mujoco tasks, we simply combine the random and expert datasets, including only a few expert trajectories. The increasing η in <Ref> denotes the increasing imbalance over those datasets: for AntMaze navigation tasks in medium and large mazes, fewer near-optimal trajectories correspond to the datasets from to ; for Mujoco locomotion tasks, fewer near-optimal trajectories correspond to random dataset ratios from 95% to 99%. Note that AntMaze tasks are more challenging than Mujoco locomotion tasks due to the sparse reward setting and the heavy requirement for exploration. We then compare RB-CQL with TD3+BC <cit.> and CQL <cit.>, two state-of-the-art methods that enforce policy support constraints. For a fair comparison, we sweep important hyperparameters of each algorithm. The experimental results for these tasks are reported in <Ref>. The shaded area in the plots represents the standard deviation of the results. Implementation and experimental details are provided in <Ref>, and the performance profiles <cit.> based on the score distribution are shown in <Ref>. For each AntMaze task with increasing imbalance (i.e., to ), our approach, RB-CQL, is robust to the imbalance and maintains its performance to a certain degree, especially from to without any performance drop. However, the performance of other methods is vulnerable to the imbalance in the dataset, showing a sharp decrease (e.g., CQL decreases from 60 to 20, and TD3+BC decreases from 20 to 0 in AntMaze-medium).
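As a concrete illustration of the retrieval and state-augmentation step described above, the following minimal sketch performs a brute-force nearest-neighbor search over an auxiliary state array and builds the augmented state s_final; the array shapes, the value of k, and the use of plain NumPy in place of an approximate-search index are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def retrieve_topk(query: np.ndarray, aux_states: np.ndarray, k: int = 10) -> np.ndarray:
    """Return the k auxiliary states with the largest inner product to the query."""
    scores = aux_states @ query            # brute-force maximum inner product search
    top_idx = np.argsort(scores)[-k:]
    return aux_states[top_idx]

def augment_state(query: np.ndarray, aux_states: np.ndarray, k: int = 10) -> np.ndarray:
    """Concatenate the query state with the average of its k retrieved neighbors."""
    retrieved = retrieve_topk(query, aux_states, k)
    return np.concatenate([query, retrieved.mean(axis=0)])

# Toy usage with random low-dimensional states (dimensions chosen arbitrarily).
rng = np.random.default_rng(0)
aux = rng.normal(size=(5000, 17))          # auxiliary dataset of states
s = rng.normal(size=17)                    # query state from the current transition
s_final = augment_state(s, aux)            # fed to the policy and value networks
print(s_final.shape)                       # (34,)
```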
The performance gap between RB-CQL and CQL grows with the increasing imbalance (i.e., from to ), which illustrates the effectiveness of the retrieval augmentation on CQL. All algorithms, including RB-CQL, fail in the extremely hard AntMaze task , shown in <Ref>. In Mujoco locomotion tasks, known for easier tasks compared to AntMaze, RB-CQL outperforms other methods with 95% and 97% random dataset ratio. Similarly, in the heavily imbalanced dataset of the Hard+ AntMaze task, there was limited improvement observed with the use of the RB-CQL method compared to other methods. We also visualized what we retrieved from our retrieval process, shown in <Ref>. From the locomotion perspective, the posture is not exactly the same between the query state and retrieved states, but they are similar from the macroscopic point of view. It boosts the generalization ability by the augmentation from the auxiliary dataset. § RELATED WORK Offline RL <cit.> is interested in the problem of extracting a policy from a fixed dataset, without further interacting with the environment. It provides the promise to many real-world applications where data collection is expensive or dangerous such as healthcare <cit.>, autonomous driving <cit.>, industrial control <cit.>, and task-oriented dialog systems <cit.>. The task of offline RL, learning a policy to perform better than the behavior policy in the dataset, often leads to the distributional shift between the learned policy and the behavior policy <cit.>, which can cause erroneous extrapolation error. Offline RL without dataset consideration. To alleviate the above problem, previous methods enforce pessimism in actor updates <cit.> or critic updates <cit.>, whose learned policy would query the Q-functions of out-of-distribution actions to outperform the behavior policy, implicitly or explicitly minimizing the divergence over those two policies to mitigate the distributional shift problem. Another branch of methods learns an optimal policy by using only in-sample transitions, also called in-sample learning, bypassing querying the values of unseen actions. This can be achieved by vanilla advantage-weighted regression <cit.>, the extreme approximation of advantage-weighted regression <cit.>, conditioning on some future information or target information <cit.>, or find the best next states in datasets <cit.>. Those methods have achieved significant success in standard benchmarks <cit.>, however, they have not adequately considered the properties of the dataset, despite it being the primary source to extract policies. In this study, we shift our focus towards offline RL for real-world applications by taking into account the distribution of real-world datasets (i.e., imbalanced datasets), and examining the limitations of current algorithms in such scenarios. Offline RL with dataset consideration. The prevalent use of benchmarks <cit.> in current research has led to a neglect of the importance of the properties of the dataset in the development of models. Previous benchmarks have often assumed that datasets are not imbalanced, with rare but important trajectories, despite the fact that studies such as <cit.> provides dataset from real-world scenarios and recognizes that the exploration challenge is a fundamental challenge in offline RL. Several prior works have conducted extensive empirical analyses to establish various characteristics <cit.> regarding the impact of dataset properties on performance. 
Despite these efforts, previous studies have primarily focused on standard benchmarks, without taking into account real-world datasets and have not adequately addressed the practical challenges posed by imbalanced datasets. While policies fail to adequately learn under such multi-task or lifelong learning settings with imbalanced datasets from different environments <cit.>, the single-task setting has been hardly investigated. Instead of allowing the agent to further actively interact with the environment to collect task-specific or task-agnostic datasets to overcome the imbalance or exploration challenge <cit.>, we are interested in the setting that additional interaction is forbidden but other large-scale offline datasets are available. Retrieval in RL. Retrieval-augmented methods have recently been widely used to leverage a large body of external information to support downstream tasks in natural language processing <cit.> or computer vision fields <cit.>. Recently, the RL community introduces the retrieval system to integrate experience with new information in a straightforward way to advise the current decision: mitigate the expensive computation and overcome the capacity limitation of models <cit.>. With the same purpose of utilizing related experiences to facilitate decisions, <cit.> use a frozen pre-trained language transformer to embed history information and improve sample efficiency in the partially observable Markov decision process. The methods mostly similar to ours, RB-CQL, are <cit.>, where they both augment a standard reinforcement learning agent with a retrieval process. However, the motivation is different. Our algorithm is designed to overcome the difficulties of the imbalanced datasets in the offline RL setting, while they are aiming to reduce the computation burden and integrate the previous experience into the model without additional gradient updates. Besides, the retrieval architecture of those two algorithms is parametric, which needs other computation sources, but ours is non-parametric. § CONCLUSIONS AND FUTURE WORK In this work, we investigate the challenges posed by imbalanced datasets in offline reinforcement learning, which mirror the distribution commonly found in real-world scenarios. We first characterize the imbalanced offline RL dataset with an imbalanced state coverage, varying policies ranging from a mixture of behaviors to expert ones from sufficient to poor state coverage. Empirically and theoretically, we show that the current offline RL method, CQL, is flawed in extracting high-quality policies under such datasets. To overcome the challenges in imbalanced datasets, We further propose a new method, retrieval-based CQL (RB-CQL), by the augmentation of CQL with a retrieval process, which directly incorporates related experiences from a larger source dataset into training. We evaluate our method on a range of tasks with varying levels of imbalance using the variant of D4RL and show that our method is robust to the imbalance and outperforms other baselines. Limitations of our work are that the retrieval process requires huge computation sources from the CPU and the lack of study on high dimensional inputs, which requires the embedding networks, to exclude other influences. We hope this work will inspire further research in the field of offline RL, particularly utilizing practical and imbalanced datasets, and serve as a foundation for large-scale offline RL applications. 
icml2021 Offline Reinforcement Learning with Imbalanced Datasets: Supplementary Material Offline Reinforcement Learning with Imbalanced Datasets: Supplementary Material § PSEUDO CODE § ADDITIONAL EXPERIENCES § EXPERIMENT DETAILS The variant of D4RL AntMaze datasets. The dataset we introduced in this work is all imbalanced with the heavy-tail property, featuring imbalanced coverage and rare but important trajectories. In AntMaze tasks, we introduce four new AntMaze datasets for the medium and large mazes from D4RL <cit.>. For each type of maze, the imbalance increases from to , characterizing an optimal decreasing trajectory that can reach the target state distribution in evaluation. In detail, the start state distribution is fixed and aligned with the evaluation, whereas the goal state distribution can be varying and misaligned with the evaluation. In each episode, the goal state distribution can hardly be aligned with the goal state distribution in evaluation. Given an observation, a goal state from the environment, and optimal action from a pre-trained optimal goal-reaching policy, the agent executes the correct action given by the optimal policy. That simulates the imbalanced dataset we have introduced in our work, where the state coverage is heavy-tail and the rare trajectories near the target state are optimal in terms of evaluation. On the other hand, on Mujoco locomotion tasks, we simply combine the random and expert datasets with large ratios approaching 100% (i.e., 95%, 97%, and 99%). It also stimulates the imbalanced dataset we have described because only expert policies can reach certain states and thus gain significant performance, while random policies reach those states with low likelihood. The total transitions in every dataset are 1M. To give a more intuitive representation of the imbalance over the whole state space, we calculate the number of transitions that are in the start distribution grid, a medium grid at the center of the maze, and the target goal state distribution grid, shown in <Ref>. We also present the total transitions and expert transitions in imbalanced Mujoco locomotion tasks, shown in <Ref>. For MuJoCo locomotion tasks, we average mean returns over 10 evaluations every 5000 training steps. For AntMaze tasks, we average over 100 evaluations every 0.1M training steps. They are both taken from 5 random seeds. For the source code of our experiences, we re-run TD+BC from author-provided source code https://github.com/sfujim/TD3_BChttps://github.com/sfujim/TD3_BC. Due to the introduction of new datasets and for a fair comparison, we sweep the hyper-parameter α in TD3+BC in [0.1, 0.2, 0.5, 0.7, 1] and choose the best performance in our results. For CQL and our method, we use the open-source JAX <cit.> version code: https://github.com/young-geng/JaxCQLhttps://github.com/young-geng/JaxCQL. Same as we do in TD3+BC, we sweep the hyper-parameter α in CQL in [5, 10, 20, 50] and choose the best performance in our results. Note that RB-CQL and CQL use identical hyper-parameters in all imbalanced datasets. The details of important hyper-parameters α of TD3+BC and CQL (RB-CQL) are shown in <Ref>. and other hyper-parameters of TD3+BC and CQL (RB-CQL) are shown in <Ref> and <Ref>, where we note that the hyper-parameters in CQL (RB-CQL) with AntMaze tasks are suggested by the open-source code from https://github.com/young-geng/CQLhttps://github.com/young-geng/CQL.
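As a complement to the pseudo-code referenced in the supplementary material, the following is a hedged sketch of one RB-CQL training step, combining the retrieval augmentation with a CQL-style update; the function names (augment, q_net, q_target, policy) and the hyperparameter values are placeholders assumed for illustration and do not correspond to a specific released implementation.

```python
import torch

def augment(states, aux_states, k=10):
    """Concatenate each state with the mean of its k nearest auxiliary states (inner product)."""
    scores = states @ aux_states.T                  # (batch, n_aux)
    idx = scores.topk(k, dim=1).indices             # (batch, k)
    neighbors = aux_states[idx].mean(dim=1)         # (batch, state_dim)
    return torch.cat([states, neighbors], dim=1)

def rb_cql_update(batch, aux_states, q_net, q_target, policy, optimizer,
                  k=10, alpha=5.0, gamma=0.99):
    """One sketched RB-CQL step: augment states via retrieval, then apply a CQL-style loss."""
    s, a, r, s_next = batch                         # tensors sampled from the offline dataset
    s_aug = augment(s, aux_states, k)
    s_next_aug = augment(s_next, aux_states, k)

    with torch.no_grad():                           # Bellman target on augmented states
        target = r + gamma * q_target(s_next_aug, policy(s_next_aug))
    td_loss = ((q_net(s_aug, a) - target) ** 2).mean()

    # CQL-style regularizer: push down Q-values under the learned policy,
    # push up Q-values of actions actually taken in the dataset.
    conservative = (q_net(s_aug, policy(s_aug)) - q_net(s_aug, a)).mean()

    loss = 0.5 * td_loss + alpha * conservative
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```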
http://arxiv.org/abs/2307.01601v1
20230704094030
Prototypes as Explanation for Time Series Anomaly Detection
[ "Bin Li", "Carsten Jentsch", "Emmanuel Müller" ]
cs.LG
[ "cs.LG" ]
Bin Li ([email protected]), Carsten Jentsch ([email protected]), and Emmanuel Müller ([email protected]), TU Dortmund University, Dortmund, Germany.
Detecting abnormal patterns that deviate from a certain regular repeating pattern in time series is essential in many big data applications. However, the lack of labels, the dynamic nature of time series data, and unforeseeable abnormal behaviors make the detection process challenging. Despite the success of recent deep anomaly detection approaches, the mystical mechanisms in such black-box models have become a new challenge in safety-critical applications. The lack of model transparency and prediction reliability hinders further breakthroughs in such domains. This paper proposes ProtoAD, using prototypes as the example-based explanation for the state of regular patterns during anomaly detection. Without significant impact on the detection performance, prototypes shed light on the deep black-box models and provide intuitive understanding for domain experts and stakeholders. We extend the widely used prototype learning in classification problems to anomaly detection. By visualizing both the latent space and input space prototypes, we intuitively demonstrate how regular data are modeled and why specific patterns are considered abnormal.
CCS Concepts: Computing methodologies, Anomaly detection.
Prototypes as Explanation for Time Series Anomaly Detection
§ INTRODUCTION Anomaly detection in time series data is gaining traction in several current big data research areas. The vast development of data processing and analysis facilitates real-time data monitoring applications in different branches, e.g., health care, manufacturing, astronomy <cit.>. Anomaly detection in time series data is an important application area in those scenarios. Unlike traditional outlier detection in database systems, detecting single abnormal points or even whole abnormal time periods in time series data is much more challenging due to their serially dependent and dynamic nature. On the one hand, time series data is usually collected from sensor networks in real-time, leaving less time for domain experts to generate labels for the model training. On the other hand, anomalies often only appear in specific dimensions (subspace anomalies) and specific temporal contexts (contextual anomalies), making detecting and interpreting such anomalies even harder. In recent years, deep reconstruction-based approaches have been developed for time series anomaly detection tasks, for example, autoencoders <cit.>. An autoencoder is a symmetric neural network trained to reconstruct the regular data (to prevent ambiguity, normal data in the anomaly detection context is referred to as regular data in this paper) and is expected to produce significantly larger reconstruction errors for unseen anomalies. Specifically for time series data, recurrent neural networks (RNNs) are used to construct the autoencoders to capture the temporal information in the input sequences. Thanks to their modeling capability, deep autoencoders have shown convincing performance in detecting high dimensional and temporal anomalies from time series <cit.>.
Despite the performance advantage, autoencoders also suffer from the same criticism as other deep neural networks: the lack of transparency. Though the detected anomalies fit well with standard evaluation criteria, their reliability is not well-proven. The lack of human-interpretable information makes it hard to tell why an anomaly is abnormal, especially in high-dimensional time series data or long input sequences. Consider, for instance, a safety-critical application where a machine learning model assists in detecting abnormal arrhythmia in patient ECG data. Without a verifiable interpretation, automatic treatment based on its output might lead to catastrophic consequences. Therefore, interpretability is an essential demand of such anomaly detection applications. To address the black-box issue, various interpretable machine learning approaches have recently gained attention in fields that require high transparency and human-understandability <cit.>. We propose to use example-based prototypes as an intuitive and explainable solution for interpreting anomalies in time series data. Prototypes are widely used for case-based reasoning in computer vision <cit.>, graph learning <cit.> and sequential data learning <cit.>. Embedding a prototype layer is one of the widely used approaches to learn prototypes from neural networks in an end-to-end fashion. With the built-in prototype layer in the neural network, prototype-based models are commonly efficient to train without requiring extra effort for the interpretability functionalities. Moreover, the prototypes are usually self-contained and straightforward to understand, e.g., representative animal faces, sentences, and sensor data patterns. However, prototypes are still understudied in the anomaly detection field. In this paper, we propose to use prototypes to interpret the regular data during anomaly detection using autoencoders. In this context, data showing a certain repeating regular pattern is generated by several latent distributions, while an anomaly is any data point or period that deviates from this regular pattern. We model the regular patterns of the time series data with prototypes in the latent space of the autoencoder and learn multiple prototypes to discover the latent components of the regular data distribution. Anomaly patterns that lie distantly from the regular data in the latent space can then be explained by comparing them with the constructed prototypes. To our knowledge, this is the first prototype-based explanation application in the time series anomaly detection domain. Our contributions can be summarized as follows: * we propose ProtoAD, an end-to-end LSTM-Autoencoder for anomaly detection with prototype learning * we develop latent space prototype-based explanations for the understanding of the regular state of the studied data * we evaluate our method with synthetic and real-world time series data. Moreover, we visually demonstrate examples of prototypes to show the benefit of our model qualitatively. § RELATED WORKS This section briefly reviews the existing work on autoencoder-based anomaly detection models and prototype-based explanation methods. §.§ Reconstruction-based anomaly detection Autoencoders have been used as an unsupervised anomaly detection approach for years. Feed-forward autoencoders <cit.> and Variational Autoencoders <cit.> are used for time-independent data.
In contrast, RNN-based autoencoders <cit.> show their strength in detecting contextual anomalies in time series data. Based on the reconstruction error, a standard approach for estimating anomaly likelihood is to assume the reconstruction error following a normal distribution and measure the Mahalanobis distance between the reconstruction error of unknown data and the estimated distribution <cit.>. In addition to reconstruction error, the hidden representation in the latent space can also be used for likelihood estimation <cit.>. Gaussian Mixture Model (GMM) <cit.> and energy-based model <cit.> are also used for the likelihood estimation. Common thresholding techniques over the anomaly likelihood are based on maximizing the performance on a validation set, which requires labels in advance <cit.>. Other approaches, including the hierarchical temporal memory (HTM) <cit.> and temporal convolutional network (TCN) <cit.> are also adopted in time series anomaly detection concerning different use cases and data properties. However, they are not directly relevant to the reconstruction-based models. §.§ Explanation with prototypes Due to the complex properties of both feature and time dimensions of time series data, prototypes are considered an intuitive explanation. Common prototype learning approaches for neural networks follow a three-step paradigm. 1) Representation learning, 2) prototype learning in the latent space, and 3) class prediction. The objective commonly includes 1) minimizing classification error, 2) minimizing the distances between each hidden representation and one of the prototypes, and 3) maximizing the distances between prototypes. In the existing prototype learning literature, <cit.> employs a multi-layer convolutional neural network to construct the autoencoder, which learns hidden representations for image data. They rely on the decoder to project the learned prototypes in the human-understandable space, sometimes producing unrealistic reconstructions. Using a single encoder to replace the autoencoder is considered as a reduction of training effort in <cit.>, and they use the nearest neighborhood of each prototype in the latent space as the corresponding realistic patterns in the input space. Under different problem settings, <cit.> and <cit.> build up the encoder with convolutional neural networks to encode image data, <cit.> uses RNNs for sequential data, <cit.> use a convolutional layer to learn time series representations, <cit.> employs graph neural networks for the encoder. In our work, we use the single LSTM-Autoencoder for both reconstruction-based time series anomaly detection and hidden space representation learning. The standard objective functions of existing prototype learning approaches consist of multiple regularisation terms that are trained jointly. To ensure the representation ability of the prototypes, <cit.> all minimize the distance between each prototype and every nearby hidden representation as well as every hidden representation to each prototype. Furthermore, the learned prototypes are supposed to be diverse from each other <cit.>. In the objective, we will follow the standard design of the regularization terms above. However, different from most existing works, which use cross-entropy for their classification task to minimize the classification error <cit.>, in our unsupervised setting, we use the reconstruction-error to regularize the reconstruction process of regular data. 
Besides prototypes, other techniques are also used for explaining time series data. Representative subsequences (Shapelets) <cit.> can similarly be used for explanation. Instead of finding representative patterns as prototypes, counterfactuals <cit.> explain an instance towards the opposite class. Recently, the attention mechanism has also been used for explaining time series data <cit.>. § PRELIMINARIES §.§ Terminology Let X={X_t}_t∈ℤ be a d-dimensional time series process that shows a regularly repeating pattern over time periods of some length L. These repeating patterns are contained in sliding windows W_t={X_t+1,…,X_t+L} of L consecutive elements of the time series. Often, the window size L can be selected based on prior knowledge about the dataset, e.g., when it is known to show seasonality over one day or one week. Anomalies in time series data are commonly divided into three categories <cit.>: point anomalies, contextual anomalies, and collective anomalies. In this work, we consider point anomalies (e.g., abrupt peaks in the data window) and contextual anomalies (e.g., the appearance of one or more points that makes a temporal context unusual). Formally, we assume that the data points are generated by a periodically stationary time series process X with periodicity L <cit.>. That is, the time series consists of regularly repeating patterns of length L which evolve over time without distributional changes, i.e., we do not consider concept drifts <cit.>. Let (W_t, y_t)_t∈ℤ be the dataset after applying the sliding window, where y_t∈{0,1} is the label of the window W_t (0 for regular data and 1 for anomaly). The anomaly detection is conducted on the window level. A window is considered abnormal if it contains at least one point or sub-window of multiple points that shows significantly different behavior from the windows observed during the regular pattern state. The significance is determined by comparing the window anomaly score predicted by the model with a user-defined threshold. §.§ Problem definition Given the multi-dimensional time series data with the sliding window applied, the target is to train an autoencoder-based end-to-end anomaly detector that * detects anomaly windows in an unsupervised manner * delivers representative prototypes of regular data in the latent space * leverages interpretation of anomalies based on the prototypes of regular data. § METHODOLOGY In this section, we propose ProtoAD, an LSTM-Autoencoder with an additional prototype layer, which can be trained end-to-end in an unsupervised manner. §.§ ProtoAD architecture The architecture of ProtoAD is in line with existing prototype neural networks <cit.>. We use an LSTM-Autoencoder to learn time series hidden representations in the latent space and feed the representations to the prototype layer for similarity-based prototype comparison. Specifically, we design the architecture and training procedure for unsupervised anomaly detection, where only data consisting of regularly repeating patterns is used for the training and prototype learning. An overview of the ProtoAD architecture is shown in <ref>. We construct the LSTM-Autoencoder in the fashion of <cit.>. More specifically, the d-dimensional input window W_t={X_t+1,…,X_t+L} is fed into the encoder f: ℝ^L× d→ℝ^m. The last hidden state of the encoder LSTM unit, h_i=f(W_t) (h_i∈ℝ^m), is used as the hidden representation of the input window in the latent space. A same-structured decoder g: ℝ^m→ℝ^L× d targets at reconstructing the window from the hidden representation, yielding W'_t={X'_t+1,…,X'_t+L}.
The decoder LSTM unit takes h_i as its initial hidden state and receives the real data from the previous timestamp as input. We train the autoencoder to minimize the reconstruction error of regular windows, i.e., no anomalous data is used during training. The reconstruction error at timestamp t is defined as e_t=|X_t-X_t^'|. The training set is used to estimate a normal distribution 𝒩(μ,Σ) of the reconstruction error for multivariate input data (𝒩(μ,σ) for univariate data). The likelihood of a data point being abnormal is then defined by the anomaly score a_t = 1/(σ√(2π)) e^-(e_t-μ)^2/(2σ^2) if d=1, and a_t = (e_t-μ)^TΣ^-1(e_t-μ) if d>1. The largest anomaly score within the window is taken as the window anomaly score, a_t+1^t+L = max_i=1,…,L a_t+i. In our work, we do not specify a threshold over the window anomaly scores to get a binary prediction. Instead, we directly evaluate the AUC score based on the real-valued anomaly scores. Different existing thresholding techniques can be applied to obtain a binary prediction in such a situation <cit.>. Based on the anomaly detection model above, we introduce a prototype layer between the encoder and decoder, which learns interpretable prototypes of the regular data during the end-to-end training process. The prototype layer does not influence the information flow from the encoder to the decoder, i.e., the only information the decoder gets from the encoder is the last encoder hidden state. The prototypes are learned by regularizing the objective function. The prototype layer contains k prototypes p_i∈ℝ^m (i=1,…,k) to be learned, where k is a user-defined parameter and the k vectors are randomly initialized within the range [-1,1]. As introduced in <ref>, several regularization terms are employed in the objective function to obtain the expected prototypes. In most existing prototype-based models for classification tasks <cit.>, the prototype layer is followed by linear layers and a Softmax layer to produce predictions, which increases the complexity of the network and requires additional regularization to enforce interpretability. In our anomaly detection model, the outputs are derived directly from the autoencoder reconstruction errors. Therefore, we omit the other common output layers after the prototype layer to simplify the network structure. §.§ Objective function The objective in the training phase is to train the autoencoder with regular windows such that the reconstruction error is minimized, and to learn a set of prototypes from the regular data. The reconstruction error loss of the autoencoder is given by ℒ_e=1/n∑_i=1^n∑_l=1^L e_t+l where n is the number of sliding windows. To ensure the learned prototypes are informative and diverse from each other, we use the diversity loss designed by <cit.>, ℒ_d=∑_i=1^k∑_j=i+1^k max(0, d_min-||p_i-p_j||_2^2)^2 where the threshold d_min ensures that the penalty is applied only to nearby prototype pairs. Finally, to ensure the prototypes are representative of the local hidden representations, we define the following representation regularization term ℒ_r=1/k∑_j=1^k min_i∈[1,n]||p_j-h_i||^2 + 1/n∑_i=1^n min_j∈[1,k]||h_i-p_j||^2 The first term ensures that each prototype is close to at least one hidden representation, while the second term ensures that each hidden representation is represented by at least one prototype. The overall objective function is ℒ=λ_eℒ_e+λ_dℒ_d+λ_rℒ_r where λ_e, λ_d and λ_r are weighting hyperparameters. § EXPERIMENTS In this section, we present experiments on ProtoAD under different settings.
We experiment over different real-world datasets with a variety of anomaly types. In addition, to evaluate the model performance on specific data characteristics, we also introduce a synthetic dataset with artificial anomalies. Finally, we demonstrate the prototypes visually and analyze the prototype properties w.r.t. a variety of parameter settings. §.§ Experiment setup §.§.§ Datasets We experiment on one synthetic dataset and four common real-world benchmark datasets in the time series anomaly detection domain. The dataset properties are summarised in <ref>. To understand the anomaly detection process and the learned prototypes, we introduce a one-dimensional synthetic dataset sampled from a sine wave with amplitude 1 and period 100 timestamps. A random noise ϵ∈[0, 0.1] is added to every timestamp. In addition, we add a random factor α∈[0,1] every 100 timestamps in the test set to simulate point anomalies. We define a half period of the sine wave as the window length (i.e., L=50), such that the model is supposed to learn the crests and troughs as two types of prototypes. The New York City Taxi (Taxi) dataset is a one-dimensional real-world dataset with a clear periodical feature. It recorded the passenger counts over days in 2014. Extreme passenger counts on public holidays are considered anomalies. Following <cit.> we aggregate the count numbers into 30-minute intervals. We take one day (i.e., L=48) as the window length. SMAP (Soil Moisture Active Passive satellite) and MSL (Mars Science Laboratory rover) are two multivariate telemetry datasets from NASA <cit.>. The datasets contain both point and contextual anomalies. Domain experts labeled the test sets. However, there are also anomaly data in the training sets. The polluted training set can impact the purity of prototypes. There is no common repeating pattern in the datasets. We set the window length L as 100 for both datasets. SMD (Server Machine Dataset) <cit.> is a multivariate dataset collected from servers of an Internet company. The dataset is already divided into two equal-sized training and test sets by the provider and labeled anomalies by domain experts based on incident reports. We only use the data from one machine (machine-1-1) in our experiments. We set L=100 for SMD. §.§.§ Evaluation metrics We adopt the AUC score as the evaluation metric. Considering the essential requirement of detecting both point and contextual anomalies, we only evaluate on the window level. A data window is abnormal if it contains one or multiple abnormal instance(s). §.§.§ Competitors To the best of our knowledge, this is the first work that engages time series anomaly detection and prototype learning. The existing prototype learning networks <cit.> commonly work in a supervised manner, which requires labeled data for the training phase. Therefore they are not directly relevant to our setting. We mainly compare our method with the unsupervised anomaly detection approaches. Firstly, we compare with the EncDecAD <cit.>, which has a similar setting as ours, but without the prototype layer. Thereby, we can determine whether the prototype learning damages the original reconstruction-based anomaly detection. Furthermore, we compare with one of the state-of-the-art unsupervised time series anomaly detection OmniAnomaly <cit.>. We follow most of the default hyperparameter settings in <cit.> but use the window length same as in our work for the sliding windowing. §.§.§ Experimental details In all experiments, we set λ_e=0.025, λ_d=0.2 and λ_r=0.5. 
During training, 25% of the data is used for learning the parameters μ and Σ (σ for univariate data). All models are trained for 100 epochs with batch size 20, learning rate 0.0001, dropout rate 0.2. We use the Adam optimizer <cit.>. All experiments are conducted on a NVIDIA Quadro RTX 6000 24GB GPU. The experimental results are averaged over three runs. §.§ Evaluation results §.§.§ Anomaly detection performance Firstly we report the AUC score over different models in <ref>. For ProtoAD, we take the number of prototypes k=10. There is no significant difference between EncDecAD and ProtoAD, which indicates that the additional prototype layer and corresponding learning process do not directly impact the anomaly detection performance. ProtoAD even benefits from the prototype learning in the Synthetic and Taxi datasets. OmniAnomaly shows worse AUC scores in comparison with the other two models. Different from <cit.>, where all possible thresholds over the predicted anomaly scores are traversed, and only the threshold with the best F1 score is reported, the AUC score reflects more general quality of the anomaly scores over multiple thresholds. §.§.§ Parameter sensitivity The hidden layer size m and the number of prototypes k are two major hyperparameters in ProtoAD. In this section, we examine the performance sensitivity to those two parameters. For each dataset, we try m∈[10, 50, 100, 200, 400, 600, 800] and k∈[0, 5, 10, 20, 30, 50], where k=0 reduces ProtoAD to EncDecAD. Heatmaps of the AUC scores on real-world datasets are shown in <ref>. The results indicate that the model is sensitive to neither hidden size nor the number of prototypes, as far as the hidden size is large enough to capture all information in the data windows. However, the number of prototype k should not be set too large. Otherwise, the model tends to learn redundant prototypes (see <ref>). §.§.§ Latent space visualization We investigate a visualization of the autoencoder hidden space in this section to understand how time series data windows are embedded and how prototypes of regular data are learned. We use UMAP <cit.> to reduce the high-dimensional latent representations into two dimensions. The result is visualized in <ref>. Here, we set k=5 for all datasets. The prototypes shown in the plots are learned during the training phase. The plotted regular and anomaly points are embedded from the test data. In the synthetic data, the regular data lie in two regions. Four prototypes are learned from the trough half (lower left) and one from the crest half (upper right). Since the anomalies are always generated at the beginning of the crest half, the anomalies lie nearer to the crest regular data embedding. In the real-world datasets, especially the SMAP and MSL with polluted training data, regular and abnormal data do not clearly show separated clusters, though the learned prototypes represent the major blocks of dense regions showing regular patterns. Specifically, the prototypes gather at the bottom right corner for SMD, while no prototype is at the larger upper cluster. A possible reason is that the high-dimensional server data contain many zero values. The model can not summarize informative patterns in the training set, and slightly different regular patterns in the test data are mapped into a different region. §.§.§ Prototype-based explanation Finally, we map the prototypes learned in the latent space back to the human-interpretable input space. 
Similar to <cit.>, we map the prototypes back to the input space using the nearest training data embedding in the latent space, to avoid the unrealistic reconstructions the decoder can produce. Moreover, each neighbor can only be used once, so every prototype is unique. We visualize the prototypes learned in the one-dimensional datasets Taxi and Synthetic in <ref>, with five prototypes (P1 to P5) for each dataset. In <ref> and <ref>, four similar prototypes (P1, P2, P4, P5) show taxi usage increasing in the morning and falling off at night. P3 can be seen as a delayed version of the other four, corresponding to a weekend pattern. The light lines in the background are the regular (grey) and anomaly (red) sequences with the smallest distance to the corresponding prototypes in the latent space. Most of the regular patterns fit the assigned prototypes. A considerable number of both regular and anomaly sequences have the smallest latent space distance to P3. Some of them visually appear to fit other prototypes better; however, the distance comparison and prototype assignment take place in the latent space rather than the input space, which remains effective for long and high-dimensional sequences. <ref> depicts the explanation of anomaly patterns, namely how different the anomaly sequences are from their nearest prototype. <ref> shows the three regular and anomaly sequences (if available) assigned to each prototype. Since the point anomalies are always generated at the beginning of the crest half, all anomalies are assigned to the crest prototype P4. For the high-dimensional datasets, we restrict ourselves to inspecting the prototypes in the latent space and leave the search for informative prototypes in sub-spaces of the input as future work. Similarly, we also plan to investigate the reduction of redundant prototypes (e.g., P1, P2, P3, P5 in <ref>). §.§.§ Efficiency comparison Training the autoencoder with an extra prototype layer adds little training expense. We compare the epoch training time between EncDecAD (k=0) and ProtoAD (k∈[5, 10, 20, 30, 50]) in <ref>. As shown in the figure, there is no significant increase in training time for ProtoAD. In contrast, due to its more complex model structure, the epoch training time for OmniAnomaly is: Synthetic 32s, Taxi 39s, SMD 225s, MSL 888s and SMAP 3627s. § CONCLUSION AND DISCUSSION In this paper, we explored using prototypes to explain the reconstruction-based anomaly detection process. Specifically, we integrate recent end-to-end prototype learning into an LSTM-Autoencoder. We use the latent space representations of the autoencoder, which are not directly used in conventional reconstruction-based anomaly detection models. In our empirical evaluation, we found that adding a prototype learning step during the training of the autoencoder does not damage its anomaly detection performance. The prototypes contribute to an intuitive understanding of the regular pattern state. Although the prototypes learned in the two one-dimensional datasets are realistic and interpretable for humans, two major problems remain to be solved. Firstly, the selection of the parameter k is tricky. Pruning techniques can be applied to reduce the redundancy in the prototypes. Moreover, the prototypes of high-dimensional data can currently only be shown as full regular-state patterns, which does not directly point a human to a small subset of dimensions of interest. In future work, we plan to investigate learning prototypes in subspaces.
http://arxiv.org/abs/2307.04804v1
20230706214431
S2vNTM: Semi-supervised vMF Neural Topic Modeling
[ "Weijie Xu", "Jay Desai", "Srinivasan Sengamedu", "Xiaoyu Jiang", "Francis Iannacci" ]
cs.CL
[ "cs.CL", "cs.AI", "68T50" ]
Language model based methods are powerful techniques for text classification. However, these models have several shortcomings. (1) It is difficult to integrate human knowledge such as keywords. (2) They need substantial resources to train. (3) They rely on large text corpora for pre-training. In this paper, we propose Semi-Supervised vMF Neural Topic Modeling (S2vNTM) to overcome these difficulties. S2vNTM takes a few seed keywords per topic as input. S2vNTM leverages the pattern of keywords to identify potential topics, as well as to optimize the quality of topics' keyword sets. Across a variety of datasets, S2vNTM outperforms existing semi-supervised topic modeling methods in classification accuracy with limited keywords provided. S2vNTM is at least twice as fast as baselines. § INTRODUCTION Language Model (LM) pre-training <cit.> has proven to be useful in learning universal language representations. Recent language models such as <cit.> have achieved impressive results in text classification. Most of these methods need enough high-quality labels to train. To make LM based methods work well when only limited labels are available, few-shot learning methods such as <cit.> have been proposed. However, these methods rely on large pre-training corpora and can be biased when applied to a different environment. Topic modeling methods generate topics based on the pattern of words. Specifically, unsupervised topic modeling methods <cit.> discover the abstract topics that occur in a collection of documents. Recently developed neural topic modeling achieves faster inference by integrating topic modeling with deep neural networks and uncovers semantic relationships <cit.>. Compared to unsupervised topic modeling methods, semi-supervised topic modeling methods <cit.> allow the model to match patterns provided by users, such as keywords. However, these methods do not achieve high topic classification accuracy. After studying topic modeling methods in real-world applications <cit.>, we identified a scenario that cannot be solved by current methods. The scenario involves topic exploration: users have identified a subset of topic keywords. They want to capture topics based on these keywords while exploring additional topics. They value the quality of the resulting topics and want to identify new topics while refining the topics' keywords iteratively <cit.>. In addition, users want to use the topics they created for topic classification. In this work, we propose semi-supervised vMF neural topic modeling (S2vNTM). S2vNTM takes the desired number of topics as well as keywords/key phrases for some subsets of topics as input. It incorporates this information as a guideline and leverages negative sampling to create topics that match the pattern of the selected keywords. It creates additional topics which align with the semantic structure of the documents. It can also help users remove redundant topics. Figure <ref> illustrates how users interact with our model. The advantages of this method include: 1. It consistently achieves the best topic classification performance on different datasets compared to similar methods. 2. S2vNTM only requires a few seed keywords per topic, and this makes it suitable for data-scarce settings. It does not require any transfer learning. 3.
S2vNTM is explainable and easy to fine-tune which makes it suitable for interfacing with subject-matter experts and low resource settings. In sections below, we have shown Method in Section <ref> which describes the technical details of S2vNTM, Results in Section <ref> and Conclusion and Future work in Section <ref>. Details on Modularity of S2vNTM is given in Appendix <ref>. Related Work and Challenges are described in Appendix <ref>, Experiments in Appendix <ref> and Ablation Studies in Appendix <ref>. § METHOD Figure <ref> shows the overall architecture of S2vNTM. The encoder is based on a Neural Topic Model leveraging von Mises-Fisher distribution. We use von Mises-Fisher distribution because it captures distributions on unit sphere and induces better clustering properties. To improve clustering, we add temperature function to the latent distribution(See details in Appendix <ref>). The decoder tries to reconstruct the input from the topics while leveraging user-provided seeds for the topics. The model is trained end-to-end with the objective of minimizing reconstruction error while conforming to user-provided seeds and minimizing topic overlap. §.§ vNTM We first introduce notation: the encoder network, ϕ, encodes the bag of words representation of any document X_d and outputs the parameters which can be used to sample the topic distribution t_d. The decoder is represented by a vocabulary embedding matrix e_W and a topic embedding matrix e_t. We use a spherical word embedding  <cit.> trained on the dataset where we apply the model to create e_W and keep it fixed during the training. Spherical word embedding performs better on word similarity related tasks. If we do not keep embedding fixed, reconstruction loss will make the embeddings of co-occurred words closer which is not aligned with true word similarity. Fewer parameters to train can also make our method more stable. W represents all selected vocabularies and T contains all topics. In this notation, our algorithm can be described as follows: for every document d, (1) input bag of word representation X_d to encoder ϕ. (2) Using ϕ, output direction parameter μ and variation parameter κ for vMF distribution.<cit.> (3) Based on μ and κ, generate a topic distribution t_d using temperature function. (4) Reconstruct X_d by t_d×(e_t e_W^T). The goal of this model is to maximize the marginal likelihood of the documents: ∑_d = 1^Dlog p(X_d | e_t, e_W). To make it tractable, the loss function combines reconstruction loss with KL divergence as below: L_Recon = ( -E_q_ϕ(t_d|X_d)[log p_θ (X_d|t_d)] L_KL = KL[q_ϕ(t_d|X_d) || p(t_d)] ) Our spherical word embedding is trained on the dataset without any pretraining. This can help embeddings deal with domain specific word. This can also make our model work for the language where there is not much text data available to pre-train. We leverage the vMF distribution as our latent distribution because of its clusterability and stability <cit.>. Because of the design of the decoder, for each topic, it can be represented as a distribution of all words in vocabulary ((e_t e_W^T)). When a document is provided, the user can identify the topics distribution of documents and also related keywords that contribute to these topics. Thus, the model is explainable. §.§ Loss Function Our method allows users to define an arbitrary number of topics and provide keywords for some subsets of those topics. 
The model takes these two parameters as inputs and generates topics that include user's keywords as well as additional topics that align with topic distribution. With that being said, we want the prior loss similar to L_CE = -∑_s ∈ S max_t ∈ Tlog∏_x ∈ s q(x|t) where S contains all keywords groups, s is a group of keywords and T is the group of topics, q(x|t) stands for the probability of word x given t calculated by decoder. q(x|t) = exp(e_t_j e_x_i^T)/∑_x ∈ Xexp(e_t_j e_x^T) This is the j-th row and i-th column of decoder embedding matrix (e_T e_W^T). Thus, it uses existed neural network structure to calculate and makes it computationally efficient. §.§ Topic and Keywords set Matching We want to make sure matched topics capture all documents related to the provided keywords. The problem of using L_CE is that different keywords set may map to the same topic. It may merge the irrelevant topic set when that topic set is not aligned with most of the topics. To avoid this situation, we first select the topic that is most likely to align with this group of keywords but not align with words in all other groups. To be specific, we first select t_s = _t∈ T(E_x ∈ s (log q(x|t)) - max_x ∈ S log(q(x|t)) ) This is inspired by Gumbel-Softmax <cit.>. If one word in keywords set is dissimilar to the topic, the log will penalize it heavily and the topic is less likely to be matched. We also want to separate keyword groups which are different. If a keyword in another group has a higher probability in a topic, then max_s ∈ Slog(q(s|t)) will be large, which makes the topic less likely to be the selected topic. If we have two similar keywords' sets, they can have similar and large E_x ∈ s (log q(x|t)). These keywords sets can still map to the same topics. The benefit of this matching method is that it is more stable compare to method such as Gumbel-softmax and it can remove redundant topics by merging it with similar topics. §.§ Negative Sampling We also want keywords as guidance to select other related keywords. Similar to  <cit.>, when a keyword set is matched with a topic, we want the topic to be less correlated with words that are unrelated to the matched keyword set. Thus, we leverage negative sampling. We first select the top N words in the selected topic using a decoder embedding matrix and sample each of top N word with sampling probability equal to max_x ∈ s 1 - cos(x, x_N) where x_N stands for a word in top N words in that selected topic and cos stands for cosine similarity. Our goal is to make words that are dissimilar to the provided keywords likely to be sampled, as seen in Table <ref>. Negative Sampling can also help the model converge faster since it pushes away unrelated words quicker  <cit.>. The penalty we add for each keywords' set is: L_NS, s = γ∑_x ∈ ns (log (q(x | t_s))) where ns contains words sampled from negative sampling. The loss of negative sampling is L_NS = ∑_s ∈ S L_NS, s β controls input keywords strength on overall loss function and γ controls the strength of negative sampling. The overall loss function is: L = L_Recon + L_KL + β * L_CE + γ * L_NS where L_NS is the sum of all keywords set. L_Recon is the reconstruction loss and L_KL is the KL divergence loss. The benefit of this negative sampling design is that q(x | t_s) can be directly mapped from the decoder. Thus, it does not require additional computation, which saves computation resources. § RESULTS We ran our experiments 10 times with different seeds and show the result in Table <ref> (and Figure <ref> in the Appendix). 
(1) S2vNTM achieves the best accuracy in all three datasets. In fact, the worst reported accuracy of S2vNTM is higher than the best from the other two methods. We believe there are 3 reasons contributing to its superior performance. (i) It has high clusterability using vMF as a latent distribution. This makes our method easily clustered. (ii) Negative sampling excludes unrelated keywords from the topics. This makes our method perform better on documents that are related to keywords. (iii) S2vNTM also uses word embedding trained on the dataset. This makes our method perform well on documents that have words that are similar to words in keywords set. (2) S2vNTM keywords make more sense qualitatively in Table <ref> in Appendix. This is due to KL divergence loss. Flexible concentration parameter κ makes our method more locally concentrated. This makes topics different from each other. (3) S2vNTM also has a higher aucroc and Macro F1 score than other methods in most cases (from Table <ref>). This means that our method can deal with imbalanced datasets and can easily distinguish between classes. However, it performs less well on R8, which has 8 imbalanced classes. For class with less than 300 documents, keywords selected by tf-idf are less representative. Thus, it has lower performance and higher variance. Besides, our method using vMF distribution which has higher reconstruction loss when the dimension is high. R8 has 8 classes which make our method perform worse. Qualitatively, as you can see in Table <ref> , negative sampling reduces the importance of unrelated keywords such as call, york, company while increasing the importance of given keywords such as military, industry, athlete. Also, semantically, keywords in each set are closer to each other. For example, in the first set of keywords, government, war are semantically more related to crime, rule compared to call, election. On the other hand, even if CorEx has good topic diversity, the keywords set is not coherent. For example, the last group in Table <ref> has inc, corp, people, bush, million in one group. Determining the relationship between these keywords is not obvious. Speed We run each model 10 times on AG News with different seeds to evaluate how long it takes to fine-tune the model by modifying 20 percent of keywords set. The average fine-tune time for our method is 51.33 seconds. To compare, CatE <cit.> takes 888.61 seconds to fine-tune, while CorEx takes 94.98 seconds to fine-tune. This shows that our method is better suitable for iterative topic learning <cit.> and resource restrictive environments. Overall, qualitative results show that S2vNTM can help users find more coherent and relevant keywords compare to existed methods. Negative sampling makes the topics set more coherent. S2vNTM is at least twice faster than baselines. § CONCLUSION AND FUTURE WORK In conclusion, we propose S2vNTM as an approach to integrate keywords as pattern to current neural topic modeling methods. It is based on vMF distribution, negative sampling, modified topic keywords mapping and spherical word embeddings. Our method achieves better classification performance compared to existing semi-supervised topic modeling methods. It is not sensitive to parameters. S2vNTM gives more coherent topics qualitatively. It also performs well when the input keywords set is less common in the dataset. It is also fast to fine-tune. It does not require pretraining or transfer learning. It only needs a few sets of seed words as input. 
The ablation study shows the potential of our method to further improve. In the future, we will focus on decreasing the gap between loss function and classification metric, incorporating sequential information and further improving the stability of the model. We will also work on improving its expressability in higher dimensions. acl_natbib Appendix § MODULARITY OF S2VNTM Our methods can be plugged into variational autoencoder based topic modeling methods such as NVDM <cit.> and NSTM <cit.>. For NVDM, since their decoder is a multinomial logistic regression, we can consider that as the distribution of word over the topic. For L_CE, we can change P(e_w|e_t) to P_θ (x_i|h) (formula (6) in <cit.>) as it also represents the probability of certain word given all other words. For L_NS, we just sample it the same way as <cit.>. For NSTM, since they also maintain topics and word embeddings (They name it G and E in the paper), we can use cosine similarity of these embeddings to create the loss functions L_CE and L_NS respectively. For that being said, this work can be easily extended by existed unsupervised neural topic modeling methods. §.§ Temperature function Step (3) in Section <ref> introduced the concept of a temperature function. Temperature is a function that applies to the sample generated by vMF distribution to form a topic distribution. To be specific, t_d = (τ_temp(η_d)) where η_d is the vector of sampled vMF distribution. Since the sample from vMF is on the surface of a sphere, we have ∑ (η_d^2) = 1 In cases where the number of topics equals to 10, the most polarized η_d is (1, 0, 0, 0, 0, ...). If we apply softmax to this η_d, the highest topic proportion is 0.23, making latent space entangled and limit the clusterability. To overcome the expressibility concern mentioned in Related Work in Appendix <ref>, temperature function τ_temp is used to increase expressibility. For example, if we let τ_temp(η_d) = 10 * η_d, the highest topic proportion of the above example becomes 0.99. This makes the produced topics more clustered. Also, we make κ flexible. The KL divergence of vMF distribution makes the distribution more concentrated while not influence the direction of latent distribution. § RELATED WORK AND CHALLENGES In this section, we touch on key concepts utilized in S2vNTM and their limitations. §.§ Weakly-supervised text classification Weakly supervised text classification methods aim to predict labels of texts using limited or noisy labels. Given class names, <cit.> first estimates class representations by adding the most similar word to each class. It then obtains document representation by averaging contextualized word representations. Finally, it picks the most confident cdocuments from each cluster to train a text classifier. <cit.> improves weakly text classification on existed LM using contrastive regularization and confidence based reweighting. <cit.> associates semantically related words with the label names. It then finds category-indicative words and trains the model to predict their implied categories. Finally, it generalizes the model via self-training. However, all these methods are time consuming to train and fine-tune which make it hard to be interactive. It is also hard to explain the reason behind certain classification. 
§.§ Topic Modeling Latent Dirichlet Allocation (LDA) <cit.> is the most fundamental topic modeling approach based on Bayesian inference on Markov chain Monte Carlo (MCMC) and variational inference; however, it is hard to be expressive or capture large vocabularies. It is time consuming to train the model. It also has the tendency to identify obvious and superficial aspects of a corpus <cit.> Neural topic model <cit.>(NTM) leverages an autoencoder <cit.> framework to approximate intractable distributions over latent variables which makes the training faster. To increase semantic relationship with topics, Embedded topic model (ETM) <cit.> uses it during the decoder/reconstruction process to make topic more coherent and reduces the influence of stop words. However, the generated topics are not well clustered. Besides, using pre-trained embeddings cannot help the model identify domain specific topics. For example, topics related to Covid cannot be identified easily using pre-trained Glove embeddings <cit.> since Covid is not in the embeddings. To improve clusterability<cit.>, NSTM <cit.> uses optimal transport to replace KL divergence to improve clusterability. It learns the topic distribution of a document by directly minimizing its optimal transport distance to the document’s word distributions. Importantly, the cost matrix of the optimal transport distance models the weights between topics and words, which is constructed by the distances between topics and words in an embedding space. Due to the instability of latent distribution, it makes it difficult to integrate external knowledge into these models. Existed semi-supervised NTM methods either are not stable <cit.> or need specific twists <cit.>. §.§ Semi-supervised Topic Modeling Semi-supervised Topic Modeling methods take few keyword sets as input and create topics based on these keyword sets. Correlation Explanation (CorEx) <cit.> is an information theoretic approach to learn latent topics over documents. It searches for topics that are "maximally informative" about a set of documents. To be specific, the topic is defined as group of words and trained to minimize total correlation or multivariate mutual information of documents conditioned on topics. CorEx also accepts keywords by add a regularization term for maximizing total correlation between that group of keywords to a given topics. There is a trade-off between total correlation between documents conditioned on topics and total correlation between keywords to topics. GuidedLDA <cit.> incorporates keywords by combining two techniques. The first one defines topics as a mixture of a seed topic and a regular topic where topic distribution only generates words from a group of keywords. The second one associates each group of keywords with a Multinomial distribution over the regular topics. It transfers the keywords information from words into the documents that contain them by first sampling a seed set and then using its group-topic distribution as prior to draw the document-topic distribution. However, both methods fail to capture the semantic relationship between words. This means that when the provided keywords are less frequent in the corpus, the model's performance drop sharply. §.§ Negative Sampling Negative Sampling <cit.> is proposed as a simplified version of noise contrastive estimation <cit.>. It is an efficient way to compute the partition function of an non-normalized distribution to accelerate the training of word2vec. 
<cit.> sets the negative sampling distribution proportional to the 3/4 power of degree by tuning the parameters. Uncertainty based negative sampling <cit.> selects the most informative negative pairs and iteratively updates how informative those pairs are. Some methods <cit.> also account for the intra-class correlation. Negative sampling is used in topic modeling methods since it can leverage the word-context semantic relationships <cit.> or generate more diverse topics <cit.>. Both methods are applied in fully unsupervised scenario. In general, it needs to compute the similarity between the topic and all vocabularies. This step adds additional time and space complexity to the model which makes related methods less practical. §.§ von Mises-Fisher based methods In low dimensions, the gaussian density presents a concentrated probability mass around the origin. This is problematic when the data is partitioned into multiple clusters. An ideal prior should be non informative and uniform over the parameter space. Thus, the von Mises-Fisher(vMF) is used in VAE. vMF is a distribution on the (M-1)-dimensional sphere in R^M, parameterized by μ∈ R^M where ||μ|| = 1 and a concentration parameter κ∈ R_≥ 0. The probability density function of the vMF distribution for t ∈ R^D is defined as: q(t|μ, κ) = C_M(κ) exp(κμ^Tt) C_M(κ) = κ^M/2 - 1/(2π)^M/2 I_M/2 - 1(κ) + log 2 where I_v denotes the modified Bessel function of the first kind at order v. The KL divergence with vMF(., 0) <cit.> is KL(vMF(μ, κ)|vMF(.,0)) = κI_M/2(κ)/I_M/2-1(κ) + (M/2 - 1) logκ - M/2log (2π) - log I_M/2-1(κ) + M/2logπ + log 2 + logΓ(M/2) vMF based VAE has better clusterability of data points especially in low dimensions <cit.>. <cit.> proposes using vMF(.,0) in place of Gaussian as p(Z), avoiding entanglement in the center. They also approximate the posterior q_ϕ(Z|X) = vMF(Z;μ,κ) where κ is fixed to avoid posterior collapse. The above approach does not work well for two reasons. First of all, fixing κ causes KL divergence to be constant which reduces the regularization effect and increases the variance of latent distribution. Another concern with vMF distribution is its limited expressability when its sample is translated into a probability vector. Due to the unit constraint, softmax of any sample of vMF will not result in high probability on any topic even under strong direction μ. For example, when topic dimension M equals to 10, the highest topic proportion of a certain topic is 0.23. § EXPERIMENTS In this section, we report experimental results for S2vNTM and show that it performs significant better compared to two baselines. Datasets: We use three datasets: DBLP <cit.>, AG News <cit.>, R8 <cit.>. These datasets are all labeled. AG News has 4 classes and 30000 documents per class with an average of 45 words per document. We select AG News since it is a standard dataset for semi-supervised topic modeling evaluation. DBLP has 4 classes. Documents per class varies from 4763 to 20890. Average document length is 5.4. We select DBLP to see how our model performs when document is short and categories are unbalanced. R8 is a subset of the Reuters 21578 dataset, which consists of 7674 documents from 8 different reviews groups. We select R8 dataset to see how our model performs when the number of keywords set and topics are large. We use the same keywords as <cit.> for our experiments for AG News. For others, we use 20 percent of corpus as the training set to get our keywords by tf-idf score for each classes. 
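As an illustration of this keyword-selection step, the sketch below picks the top-scoring words per class by tf-idf from a small labelled training split. It is not the code used in the experiments; the helper name, the use of scikit-learn's TfidfVectorizer, and the choice to average tf-idf scores over the documents of a class are assumptions made for the example (on the real corpus the vocabulary would additionally be restricted as described next).

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def seed_keywords_by_tfidf(docs, labels, n_keywords=5):
    # docs: list of strings, labels: list of class ids; returns {class: [keywords]}
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(docs)                        # (n_docs, n_vocab) sparse matrix
    vocab = np.array(vec.get_feature_names_out())
    seeds = {}
    for c in sorted(set(labels)):
        rows = [i for i, y in enumerate(labels) if y == c]
        class_scores = np.asarray(tfidf[rows].mean(axis=0)).ravel()   # average tf-idf per word
        seeds[c] = vocab[np.argsort(class_scores)[::-1][:n_keywords]].tolist()
    return seeds

# toy usage on a handful of labelled documents (illustrative only)
docs = ["the team won the game", "stocks fell as markets slid",
        "the senate passed the bill", "the striker scored a goal"]
labels = [0, 1, 2, 0]
print(seed_keywords_by_tfidf(docs, labels, n_keywords=2))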
To form the vocabulary, we keep all words that appear more than 15 times depending on the size of the dataset. We remove documents that are less than 2 words. We also remove stop words, digits, time and symbols from vocabulary. We also include bigram and trigram that appear more than 15 times. Settings: The hyperparameter setting used for all baseline models and vNTM are similar to <cit.>. We use a fully-connected neural network with two hidden layers of [256, 64] unit and ReLU as the activation function followed by a dropout layer (rate = 0.5). We use Adam <cit.> as optimizer with learning rate 0.002 and use batch size 256. We use <cit.> as scheduler and use learning rate 0.01 for maximally iterations equal to 50. We use 50 dimension embeddings <cit.> trained on the dataset where we apply the model. We set the number of topics equal to the number of classes plus one. Our code is written in pytorch and all the models are trained on AWS using ml.p2.8xlarge (NVIDIA K80). We use 80 percent data as test set. Baselines: We compare our methods with GuidedLDA <cit.> and CorEx <cit.>. CorEx are finetuned by anchor strength from 1 to 7 with step equal to 1 on the training set. GuidedLDA is finetuned using best seed confidence from 0 to 1 with step equal to 0.05 on the training set. Metrics: To evaluate the classification performance of these models, we report Accuracy, Macro F1 and AUC. We omit micro f1 since most of classes in these datasets are balanced and micro f1 is very similar to accuracy. In addition, we want keywords in each topic to be diverse. This can help users to explore and identify new topics. We define Topic Diversity to be the percentage of unique words in the top 25 words of all topics <cit.>. Diversity close to 0 indicates redundant topics while diversity close to 1 indicates more varied topics. § QUALITATIVE STUDY 0.9 GuidedLDA CorEx iraq, kill, reuters, president, minister government, war, military, iraq, kill reuters, stock, oil, price, profit stock, market, industry, price, oil microsoft, company, software, service, internet software, computer, microsoft, internet, service win, game, team, season, lead footable, basketball, game, win, season space, reuters, win, quot, world court, executive, chief, commission, union quot, year, company, million, plan inc, corp, people, bush, million § ABLATION STUDIES In this section we analyze and investigate the effect of various techniques and hyperparameters on S2vNTM. We use AG News as the dataset since it is standard and has balanced classes. We run each experiment 10 times and report the barplot. Specifically, for parameters we analyze: 1. Number of topics 2. Different keyword sets 3. Temperature function 4. γ (L_NS multiplier) for topic modeling. For techniques, we analyze 1. Batch normalization and dropout and 2. Learnable distribution temperature. The first two are reported here and the rest are discussed in the Appendix. §.§ Effect of Number of topics In this section, we analyze the effect of increasing in the number of topics from 5 to 13 shown in Figure <ref>. We see that the accuracy drops as the number of topics increases. This is because with increased number of topics, there is an increase in probability of adding additional topics that are similar to anchored topics and so the model gets confused while assigning words to topics. This could be either because of lack of topics in dataset or the latent space becoming very crowded i.e. 
space between vectors is less so it becomes difficult for models to discriminate between topics. Also, it seems that vMF based variational autoencoder performs less well in high dimension data. This could be addressed with an increase in distribution temperature discussed in Appendix <ref>. §.§ Effect of different keywords sets: For traditional method such as CorEx and GuidedLDA, their performance drops when less frequent words are selected as keywords. To check the performance of our method on the less frequent keywords, we select top 30 keywords based on tf-idf score. Then we sort them based on frequency. The keywords set is shown in Figure <ref>. We then check its performance. See Figure <ref>. As you can see, the classification metric does not change in most of cases. This means our method is robust to keywords change. This is because we leverage semantic information using word embedding trained on the dataset. And negative sampling helps our model identify words semantically related keywords. This helps our method leverage more information beyond bag of word representations. This experiment shows that our method can perform well when the input keywords is less common in corpus. This section continues the ablation studies reportede in the main paper. §.§ Effect of Increase in temperature Temperature is the constant multiplied to the sampled distribution from vMF before softmax. Because of this trick, the topic distribution become more representative and therefore it becomes easier for the model to identify those topics or clusters. Now if the temperature is too high, the distance between topic clusters will increase and the model will have difficulty in adjusting clusters based on keywords set since keywords may be far from each other in latent space. This can be observed in Figure <ref> when the temperature is increased from 20 and above we see a decrease in accuracy. On the contrary, if the temperature is too small, the latent distribution is less representative which makes the boundary between clusters vague. This again decreases the performance of the model. This can be observed in Figure <ref> from values 5 to 15. The benefits of lower temperature is to make topic more diverse as you can see in second Figure <ref>. The optimal value for temperature given number of topics (=5 for this experiment) and the dataset is anywhere between 15-20 where model can easily identify topics. The temperature within this range has high clusterability and expressibility. §.§ Effect of Gamma We explore the effect of various values of gamma shown in Figure <ref>. With the increase in gamma, we observe a minimal increase in standard deviation and mean in accuracy and macro. Higher gamma makes L_NS stronger which makes the model less stable. We observe a strong increase in diversity score. This is because higher gamma score can push unrelated keywords further away. This makes each topic more coherent and different from other topics. So, at higher gamma, there is significant increase in diversity with negligible sacrifice in accuracy. This indicates stability of the method. §.§ Effect of batch normalization and dropout We explore various combinations of batch normalization (bn) and dropout which are shown in Figure <ref>. Independently, S2vNTM_Drop0.5 i.e. S2vNTM with dropout 0.5 has high standard deviation. 
Reducing dropout to 0.2 S2vNTM_Drop0.2 or adding bn S2vNTM_bn_Drop0.5 have very similar effect of reduced variance for accuracy and Macro F1 but S2vNTM_bn_Drop0.5 has higher aucroc and diversity and less variance. In general, adding bn with dropout stabilizes the model performance which was expected. §.§ Effect of learnable distribution temperature In Appendix <ref> we discuss effect of increasing distribution temperature. In this study, we make it a learnable parameter and implement it in two ways. The first way is setting temperature variable as one parameter that can be learned (1-p model). All topics share the same parameter. The second way is setting temperature variable as a vector with dimension equal to the number of topics (n-p model). This means each topic has its own temperature. The initialization value for both the vectors is 10. After training, the 1-p model has value 4.99 and n-p model has values [-0.45,4.88,5.91,3.47,4.19] (values are rounded to 2 decimals). The accuracy for 1-p model is 78.9 and n-p model is 80.5. This means that our method can further improve with learnable temperature. In Appendix <ref> we found that distribution temperature values between 15 to 20 gave highest accuracy (81) but on the contrary the learned values in 1-p is 4.99 (accuracy 78.9). This means that our loss function is not fully aligned with accuracy metric. This is due to the fact that we optimize reconstruction loss as well as KL divergence during the training procedure. This makes our objective less aligned with cross entropy loss.
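For reference, the two learnable-temperature variants compared above require only a few lines. The sketch below is illustrative rather than the actual implementation (the names and the unconstrained parameterisation are assumptions): it shows a single shared temperature (1-p) and a per-topic temperature vector (n-p), both initialised to 10 and applied before the softmax that produces the topic distribution.

import torch
import torch.nn as nn

n_topics = 5

temp_1p = nn.Parameter(torch.tensor(10.0))            # 1-p model: one temperature shared by all topics
temp_np = nn.Parameter(torch.full((n_topics,), 10.0)) # n-p model: one temperature per topic

def topic_distribution(eta, temp):
    # eta: (batch, n_topics) sample drawn from the vMF posterior, lying on the unit sphere
    return torch.softmax(temp * eta, dim=-1)

eta = torch.randn(4, n_topics)
eta = eta / eta.norm(dim=-1, keepdim=True)             # project onto the unit sphere, as vMF samples are
t_d = topic_distribution(eta, temp_np)                 # differentiable w.r.t. both eta and the temperature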
http://arxiv.org/abs/2307.01242v1
20230703172643
Optimal Control Theory Techniques for Nitrogen Vacancy Ensembles in Single Crystal Diamond
[ "Madelaine S. Z. Liddy", "Troy Borneman", "Peter Sprenger", "David Cory" ]
quant-ph
[ "quant-ph" ]
[ Andrew J. S. Hamilton^1,2 and Tyler McMaken^1,3 August 1, 2023 =================================================== Nitrogen Vacancy Center Ensembles are excellent candidates for quantum sensors due to their vector magnetometry capabilities, deployability at room temperature and simple optical initialization and readout. This work describes the engineering and characterization methods required to control all four Principle Axis Systems (P.A.S.) of NV ensembles in a single crystal diamond without an applied static magnetic field. Circularly polarized microwaves enable arbitrary simultaneous control with spin-locking experiments and collective control using Optimal Control Theory (OCT) in a (100) diamond. These techniques may be further improved and integrated to realize high sensitivity NV-based quantum sensing devices using all four P.A.S. systems. § INTRODUCTION Nitrogen Vacancy (NV) Centers have great potential in the area of quantum sensing. The NV's sensitivity to magnetic fields combined with their ability to be used at room temperature make them excellent test beds for exploring the engineering requirements of quantum sensing <cit.>. Sensing applications with NV centers include imaging small magnetic fields <cit.>, imaging nearby bacteria and molecules <cit.>, sensing DC and AC magnetic fields <cit.>, and sensing crystal strain in the diamond lattice <cit.>. These applications may be enhanced by using ensembles of NV centers that increase the signal to noise ratio by having more active centers in the same focal volume and allow for richer sensor information to be extracted through vector measurements. Control of all four NV orientations present in an ensemble, both sequentially and simultaneously, has been achieved with the use of multiple central microwave frequencies for vector magnetometry and for detecting temperature and magnetic fields simultaneously <cit.>. This paper presents an Optimal Control Theory (OCT) controls-based solution for distinguishable manipulation of all four orientations while maintaining a compact hardware design. OCT has been previously used in NV ensembles to develop pulses robust to the nitrogen hyperfine coupling and inhomogeneities from the microwave field from control striplines; along with several other examples <cit.>. Our OCT solutions are implemented using a single central control frequency and circularly polarized microwave fields, which enables control in zero applied magnetic field over a selected focal volume of NVs taken from the uniformly distributed centers in the diamond. The complex structure of the microwave control field is described by a simple Hamiltonian used to optimize OCT pulses for two key target examples: (100) and (110) diamond. Measurements were explicitly run on a (100) diamond sample to characterize phenomenological Hamiltonian parameters in order to demonstrate orientation-selective spin-locking and a set of OCT pulses that implement identity and transition-selective π operations, respectively, over all NV orientations in the ensemble. § MODELLING THE NV ENSEMBLE §.§ NV Ensemble Structure and Circularly Polarized Microwave Control An NV center is created in a diamond lattice by replacing a carbon atom with a nitrogen and removing an adjacent carbon, <cit.>. The six electrons found within the NV^- center combine to form an effective spin-1 particle <cit.> quantized by a zero field splitting (ZFS) (Δ≈2.87 GHz) <cit.>. 
Generally, an NV Hamiltonian will also include a transverse ZFS due to crystal strain, E(S_x^2 - S_y^2), an electron Zeeman interaction, γ_eB⃗·S⃗, nuclear Zeeman interactions for both the nitrogen and nearby carbon-13 nuclei, γ_N/CB⃗·I⃗_⃗N⃗/⃗C⃗, nitrogen and carbon hyperfine interactions, A_N/CS⃗·I⃗_⃗N⃗/⃗C⃗, and a nitrogen nuclear quadrupole interaction, QI_Nz^2. However, the experiments in this work are performed in zero applied static magnetic field and in a low strain crystal so, for simplicity, the NV Hamiltonian is reduced to only the axial ZFS term: H_int = Δ S_z^2 Within the single crystal diamond structure, there are four unique possible orientations of the bond between the nitrogen and vacancy, leading to four principal axis systems (P.A.S.) for the NV Hamiltonian (Fig <ref>). Fixing the vacancy to the relative center of the sp_3 hybridized structure, the orientations correspond to the Nitrogen replacing any of the four connecting carbons <cit.>. Control of all four orientations simultaneously is challenging, as a single control field will not have identical action on all orientations <cit.>. The spatial orientation of the microwave field relative to the four P.A.S.s may be chosen to uniquely define either two or four sub-ensembles. For example, if the microwave field is chosen to be along the “xz" lab plane, then it projects onto the (100) diamond to create two effective sub-ensembles: Pair A, given by the degenerate (-1,-1,-1) & (1,-1,-1) P.A.S.s; and Pair B, given by the degenerate (1,1,1) &(-1,1,-1) P.A.S.s. The (100) diamond was chosen for our demonstration experiments because of this symmetry, allowing for convenient collective control and equivalent fluorescence from all orientations. This same field will project onto four unique control Hamiltonians for the (110) diamond, enabling further control application targets. The conventional method for obtaining universal control of the spin-1 NV is to add a static external magnetic field that breaks the degeneracy of the |±1⟩ ground spin states, resulting in unique splittings of the |0⟩↔|+1⟩ and |0⟩↔|-1⟩ transitions for each P.A.S. <cit.>. Alternatively, circularly polarized microwave fields realized with two or more channels with independent amplitude and phase control may be used to obtain transition-selective control to avoid the hardware complexity of adding an external field to the experimental setup, <cit.>. §.§ The Spatial Components of the Microwave Control Field We chose to use two parallel microstrip resonators to create independently controllable microwave fields (figure <ref>). Each microstrip was 7.5 mm long, 127 μm wide, and 17.5 μm tall, with a spacing of 150 μm between them to avoid optical interference while still delivering sufficient microwave power within the focal volume. Figure <ref> shows the resulting microwave fields of the resonator arrangement within the cross section of a 300 μm thick diamond mounted atop the two microstrips. The limit of the working distance (WD limit) of the (100x) optical objective from the top of the diamond is indicated on the figure. The 160 μm working distance results in the focal area being 70 μm away from the microstrips, minimizing the reflected signal of incoming green light off the microstrips. For optimal control efficiency, the orthogonality between the two fields should be maximized, η = 90^∘, shown by solving for η in the control Hamiltonian in equation <ref>. 
However, even with sub-optimal field orthogonality, OCT pulses may still be found that achieve good control outcomes, as long as there is some non-commutativity of the control fields, <cit.>. The following microwave field components may be substituted into the control Hamiltonian in equation <ref>: w_x1 = μ_o I_1o/2 π L(arctan(x-L/z)-arctan(x/z)+arctan(x-L/z+h)-arctan(x/z+h)) w_y1 = 0 w_z1 = μ_o I_1o/4 π L(ln(z^2+x^2/z^2+(x-L)^2)+ln((z+h)^2+x^2/(z+h)^2+(x-L)^2)) w_x2 = μ_o I_2o/2 π L(arctan(L+w-x/z)-arctan(2L+w-x/z) + … arctan(L+w-x/z+h)-arctan(2L+w-x/z+h)) w_y2 = 0 w_z2 = μ_o I_2o/4 π L(ln(z^2+(L+w-x)^2/z^2+(2L+w-x)^2)+ln((z+h)^2+(L+w-x)^2/(z+h)^2+(2L+w-x)^2)) Where μ_o is the magnetic permeability for free space, I_1/2o is the current running through each microstrip, L is the length of the microstrips, w is the distance between each microstrip and x and z are the location of the center of the focal volume relative to the origin. §.§ P.A.S.-Dependent control Hamiltonian There are three important frames to consider when describing the control Hamiltonian: the lab frame, the diamond crystal frame, and the NV P.A.S.. Expressing all components in a common frame, and then rotating into the frame of the internal Hamiltonian (eq.<ref>) yields the control Hamiltonian in equation <ref>, the full derivation of which may be found in the supplementary information: H_ctrl = A S_x + B S_y' + C S_y + D S_x' A = 1/2(R_xx (I_1w_x1 +I_2w_x2)+ R_yx (I_1w_y1 +I_2w_y2) + R_zx (I_1w_z1+I_2 w_z2)) B = - 1/2(R_xx (Q_1w_x1 +Q_2w_x2)+ R_yx (Q_1w_y1 +Q_2w_y2) + R_zx (Q_1w_z1+Q_2 w_z2)) C = 1/2(R_xy (I_1w_x1 +I_2w_x2)+ R_yy (I_1w_y1 +I_2w_y2) + R_zy (I_1w_z1+I_2 w_z2)) D = - 1/2(R_xy (Q_1w_x1 +Q_2w_x2)+ R_yy (Q_1w_y1 +Q_2w_y2) + R_zy (Q_1w_z1+Q_2 w_z2)) The Hamiltonian is expressed in a letter format to show the division between the four standard spin-1 operators, S_x/y, and “twisted" spin-1 operators, S_x/y^'=i[S_y/x,S_z^2], resulting from moving into the interaction frame of the internal Hamiltonian <cit.>. It is the presence of all four of these operators that allows for access to transition-selective control in the absence of an applied static magnetic field. This Hamiltonian displays a dependence on the geometric relationship of the NV P.A.S.s (R_xx,R_xy,R_zx etc.(SI Fig: <ref>)) unique to each NV, the spatial components of the microwave control field (w_x1,w_x2 etc.), and the four control channels of an AWG (I_1,I_2,Q_1,Q_2) used to create the control signals. § OCT DESIGN OF TRANSITION-SELECTIVE PULSES IN ZERO FIELD Optimal control theory (OCT) is used extensively in quantum control and has recently been proven useful for finding robust control solutions in quantum sensing <cit.>. OCT algorithms generally proceed by optimizing a set of parameterized controls to achieve a desired quantum operation subject to constraints when calculated over a set of Hamiltonians and noise/environment processes. In the most common implementations, the quantum operation is defined to specify either desired state-to-state transitions <cit.> or a full unitary or completely positive trace-preserving (CPTP) map <cit.>. Noise/environment processes and constraints can generally include any process that can be modelled compactly, such as field inhomogeneities <cit.>, limited Rabi frequency <cit.>, and control system distortions <cit.>. In our control situation, the projection of the control fields onto the unique set of NV P.A.S. 
orientations leads to an incoherent distribution of Hamiltonians that may be treated by optimizing over a direct sum representation of the dynamics <cit.>. The validity of the direct sum representation is helped by the average dipolar coupling between NV centers in the chosen experimental sample being on the order of kilohertz, allowing the control Hamiltonians to be considered independent over each subensemble. Coupling between NV centers may be added for future iterations <cit.>. In our application, OCT pulses are found using the gradient ascent pulse engineering (GRAPE) algorithm implemented in the Quantum Utils package <cit.>. To create a pulse, an initial guess of the pulse shape is made, then updated successively using gradient techniques until the desired map or state-to-state transfer is achieved up to a target performance. The target unitary or state-to-state transfer is given a target fidelity or state overlap (0.99), internal Hamiltonian (Δ S_z^2), set of control Hamiltonians (eq.<ref>) and parameterized controls (I_1,I_2,Q_1,Q_2) as inputs. The total length of the pulse is given by the number of time steps multiplied by the length of each time step. The chosen physical control fields result in two or four control Hamiltonians for the (100) and (110) diamonds, respectively. §.§ Defining Transition-Selective Maps To aid finding and visualizing intuitive solutions, pseudo spin-1/2 operators may be used in place of standard spin-1 operators <cit.>. These are not true spin-1/2 operators, as the |± 1⟩ states share a space with the same |0⟩ state, but conveniently represent the pulse action over states of interest. The pseudo spin-1/2 operators are labelled as S_x/y/z^± for the |0⟩,|+1⟩ and |0⟩,|-1⟩ state pairs, respectively. Expressions for pseudo spin-1/2 operators in terms of spin-1 operators are shown below, with the matrix forms found in equation <ref> of the supplementary information. S_x^± = 1/√(2)(S_x ± i[S_y,S_z^2]) S_y^± = 1/√(2)(S_y ∓ i[S_x,S_z^2]) S_z^± = 1/2 S_z ±1_3 ∓3/2 S_z^2 The pseudo spin-1/2 operators obey the standard commutation relations of the Pauli operators (i.e. [S_x^±,S_y^±]=2iS_z^±), allowing for maps to be constructed in the same fashion as for spin-1/2 particles. As such, the OCT maps may be defined with the general unitary operator in equation <ref>, where (α) is the rotation angle, (n̂) is the unit vector defining the rotation axis, and S^± defines the total list of operators, S^±={Sx^±,Sy^±,Sz^±}. The (+) or (-) operators are chosen depending on which state transitions are desired, the matrix form of the positive operator shown as an example. U^± = 1_3-1^±+cos(α/2)1^±-isin(α/2)(n̂· S^±) U^+ = ( [ 1 0 0; 0 -cos(α/2) + in_zsin(α/2) (in_x+ n_y)sinα/2; 0 (in_x - n_y)sinα/2 -cos(α/2) - in_zsin(α/2) ]) §.§ Transition-Selective OCT Pulse As an example of optimizing a transition-selective OCT pulse, consider a selective π_+/y pulse on the (100) diamond that enacts a π_y rotation in the positive pseudo-subspace and an identity operation in the negative: (α=π, n_x=0, n_y=1, n_z=0). The action of this pulse may be understood through two sets of Bloch spheres, defined over S^±, respectively through <(X/Y/Z)^+/-> = Tr[S_(x/y/z)^+/-.ρ_t] <cit.>. Figure <ref> shows the action of a single OCT pulse on the positive (top) and negative (bottom) Bloch spheres, respectively. The starting state is indicated with a red sphere and final state with a black sphere, while the trajectory of the pulse is shown in grey. 
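As a rough numerical sketch of the ideal target map described above (not the optimized pulse itself), the pseudo spin-1/2 operators can be built directly from the spin-1 matrices and used to evaluate the Bloch components <(X/Y/Z)^+> = Tr[S_(x/y/z)^+ ρ] along a selective rotation. The basis ordering (|+1⟩, |0⟩, |-1⟩), the use of NumPy/SciPy, and the replacement of the shaped pulse by an instantaneous rotation generated by S_y^+ are all assumptions of this sketch:

    import numpy as np
    from scipy.linalg import expm

    # Spin-1 operators, assuming the |+1>, |0>, |-1> basis ordering.
    Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
    Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex) / np.sqrt(2)
    Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

    def comm(a, b):
        return a @ b - b @ a

    # Pseudo spin-1/2 operators built from the definitions in the text.
    Sxp = (Sx + 1j * comm(Sy, Sz @ Sz)) / np.sqrt(2)
    Syp = (Sy - 1j * comm(Sx, Sz @ Sz)) / np.sqrt(2)
    Szp = 0.5 * Sz + np.eye(3) - 1.5 * (Sz @ Sz)

    # They obey Pauli-like commutation relations, e.g. [Sx+, Sy+] = 2i Sz+.
    assert np.allclose(comm(Sxp, Syp), 2j * Szp)

    # Bloch components <X/Y/Z^+> = Tr[S^+ rho] along an ideal selective pi_y rotation
    # generated by Sy+ (identity on the level outside the pseudo subspace).
    ket0 = np.array([0, 1, 0], dtype=complex)           # |0> in this ordering
    rho0 = np.outer(ket0, ket0.conj())
    for frac in np.linspace(0, 1, 5):                   # fraction of the pi rotation applied
        U = expm(-1j * frac * (np.pi / 2) * Syp)
        rho = U @ rho0 @ U.conj().T
        x, y, z = (np.real(np.trace(op @ rho)) for op in (Sxp, Syp, Szp))
        print(f"fraction {frac:.2f}: <X+> = {x:+.2f}, <Y+> = {y:+.2f}, <Z+> = {z:+.2f}")

Starting from |0⟩, the ⟨Z^+⟩ component runs from +1 to -1 over the rotation, qualitatively mirroring the pole-to-pole trajectory on the positive Bloch sphere, while population outside the pseudo subspace is left untouched.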
Recall that the (100) diamond has only two non-degenerate elements in the incoherent distribution of control Hamiltonians under the chosen microwave field orientation. The trajectory for the (-1,-1,1) & (1,-1,-1) NV sub-ensemble (A) has been shown. The trajectory of the (1,1,1) & (-1,1,-1) NV sub-ensemble (B) is shown in figure <ref>. The mapping of the selective π_+ pulse is clearly seen with the starting states of |0⟩ and |+1⟩ in the positive Bloch spheres, rotating to |+1⟩ and |0⟩, respectively. The shared |0⟩ state may also be observed with the negative Bloch sphere. The correct operation of the intended identity pulse, beginning in the |-1⟩ state is most clearly observed in the negative Bloch sphere. This is further reflected in the positive sphere as the starting and ending state is observed to be at the origin of the positive sphere, indicating no population. §.§ Orientation-Selective OCT Pulse While the (100) diamond is an excellent candidate for demonstrating collective control with two sub-ensembles of NV pairs, the (110) diamond configuration may be used to investigate the full potential of separately controlling all four unique NV orientations within the single crystal. In this case there are four non-degenerate elements in the incoherent distribution of control Hamiltonians under the chosen microwave field orientation. Figure <ref> shows the Bloch sphere representations of the action of a single orientation-selective OCT pulse (Table <ref>) that simultaneously implements a unique operator for each NV orientation. § CHARACTERIZATION OF THE NV ENSEMBLE §.§ The Sample and Experimental System Design For all the experiments performed, a 500 μm thick DNV-B1 (100) diamond from Element Six was used, which contained an estimated 16 000 NV centers within a focal volume of beam diameter 0.59 μm and depth of field 2.51 μm <cit.>. This sample was chosen, as opposed to the ideal 300 μm thick sample, to keep the focal volume well out of range of the microstrips, removing any reflected light from the microstrips. The Gaussian optics at the site of the focal volume are shown in figure <ref> in the supplementary information. Extending beyond the sample, a description of the full optical layout may also be found in the supplementary information. As the experiments were performed at room temperature, the NVs were excited with off-resonant green light, and the red light emission collected with an avalanche photo-diode (APD). §.§ First Calibration Experiment - Equal Fluorescence from All Orientations To remove any bias towards any one orientation, the fluorescence from each orientation first had to be equalized. The orientation-dependent fluorescence from the centers may be controlled by changing the incoming optical polarization <cit.>. Equation <ref> describes how the output intensity of the NV is related to how much the P.A.S. “z"-axis deviates from the propagation axis of the incoming light, (θ), and how much the NVs' “x"-axis deviates from the polarization of the incoming light, (ϕ). The total emission is the sum of the emission from the E_y and E_x excited states, where maximizing E_y minimizes E_x, and vice-versa. E_y is dependent only on the ϕ angle, while E_x is dependent on both ϕ and θ <cit.>: IE_y(ϕ) = sin(ϕ)^2 IE_x(θ, ϕ) = cos(θ)^2cos(ϕ)^2 ITot(θ,ϕ) = sin(ϕ)^2 + cos(θ)^2cos(ϕ)^2 (100) diamond is an experimentally convenient crystal as all NV orientations deviate from the light propagation axis at the same angle (θ), so fluorescence may be optimized by adjusting only the ϕ parameter. 
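To make the orientation-dependent emission model above concrete, the sketch below scans a polarization angle through I_Tot(θ, ϕ) for the two in-plane projection directions of the NV x-axes. The assumed 90^∘ separation between those projections, the omission of the actual HWP-angle-to-polarization mapping, and the use of NumPy are simplifications, so the balanced angle it returns is purely illustrative rather than the experimentally calibrated value:

    import numpy as np

    # Emission model from the text: I_Tot(theta, phi) = sin^2(phi) + cos^2(theta) cos^2(phi),
    # where theta is the angle between the NV z-axis and the light propagation axis and
    # phi is the angle between the NV x-axis and the optical polarization.
    def intensity(theta, phi):
        return np.sin(phi) ** 2 + np.cos(theta) ** 2 * np.cos(phi) ** 2

    theta = np.arccos(1 / np.sqrt(3))          # all NV axes share this angle in a (100) crystal
    pol = np.radians(np.linspace(0, 90, 901))  # scanned polarization angle

    # Assumed in-plane NV x-axis directions for the two orientation pairs, 90 degrees apart.
    I_pair_a = intensity(theta, pol)
    I_pair_b = intensity(theta, pol - np.radians(90))

    balanced = pol[np.argmin(np.abs(I_pair_a - I_pair_b))]
    print(f"Polarization angle giving equal emission in this model: {np.degrees(balanced):.1f} deg")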
Figure <ref> shows the optically detected magnetic resonance-continuous wave (ODMR-CW) spectra of the diamond, with a static magnetic field added for convenience to resolve the four orientations into eight peaks. ODMR-CW experiments were performed to monitor the relative fluorescence incrementally as a half-wave plate (HWP) was rotated, beginning at a relative 0^∘. At 20^∘ and 60^∘, the fluorescence favours one of the two orientation pairs; at 0^∘ and 40^∘, the emission from the orientations is more even. The optimal value providing even fluorescence across all orientations was found to be 42^∘.

§.§ Determining a Phenomenological Control Hamiltonian

Equation <ref> provides a formal description of the influence of the P.A.S., microwave spatial components, and AWG controls on the form of the control Hamiltonian. However, accurately determining the parameters necessary to fully specify the formal Hamiltonian proved difficult. Instead, the OCT experiments performed in this work use a phenomenological control Hamiltonian (equation <ref>) that includes experimentally measured parameters. The two microstrips provide the input control amplitudes (Ω_1/2) and the global control phase (Δθ); these are simply the polar coordinates of the Cartesian controls appearing in the a priori Hamiltonian. The Hamiltonian also contains the measured Rabi drive strength (Ω_NV_A/B) for each of the NV sub-ensemble pairs (A) and (B) and the experimental phase value (η) <cit.>.

H_A/B = Ω_NV_A/B ( (Ω_1/2) S_x + (Ω_2/2) cos(Δθ) cos(η) S_x + (Ω_2/2) cos(Δθ) sin(η) S_y + (Ω_2/2) sin(Δθ) sin(η) S_x^' + (Ω_2/2) sin(Δθ) cos(η) S_y^' )

A dual-channel Rabi experiment was used to measure the strength of the Rabi drive for each sub-ensemble, Ω_NV_A/B. Figure <ref> shows how Ω_NV_A/B ("y-axis") changes as the control phase Δθ is varied. In this case, the phase of the second channel, θ_2 = Δθ ("x-axis"), was varied while the phase of the first channel was held at θ_1 = 0^∘ and the control amplitudes were fixed at Ω_1/2 = 1 <cit.>. The error bars indicate the full width at half maximum of the gathered spectra. The results show a strong dependence of the measured Rabi frequencies Ω_NV_A/B on the relative phase Δθ, following a cos^2(Δθ) dependence. The phase response is more dramatic for the high-frequency (-1,-1,-1) & (1,-1,-1) pair (A) (red) than for the low-frequency (1,1,1) & (-1,1,-1) pair (B) (black). To account for the dependence of the output Ω_NV_A/B on the input control phase, a map of this behaviour would have to be included in the optimization for every value of Δθ. Our chosen alternative was to fix the input control phase, resulting in a fixed Ω_NV_A/B, and to optimize only the two input amplitudes (Ω_1/2) using OCT.

§ DEMONSTRATING CONTROL OF NV ENSEMBLES

§.§ Demonstrating Orientation Selectivity With a Spin-locking Experiment

Preliminary control was demonstrated with a spin-locking experiment that selectively suppresses the evolution of each sub-ensemble <cit.>. Only one microwave channel is required to perform the spin-locking experiment. In this experiment, the control Hamiltonian rotates one sub-ensemble to the "x"-axis, suppressing its evolution while allowing the other sub-ensemble to evolve; the details are given in the supplementary information (figure <ref>). The results of the spin-locking experiment (black, solid) are plotted over the Rabi spectra (black, dashed) in figure <ref>, with arrows indicating which sub-ensemble was suppressed.
There is a clear selective suppression for each of the sub-ensembles in the spin-locking experiment. The ability to suppress the signal of each of the sub-ensembles allows for each to be studied independently, providing a controls-based solution as an alternative to changing the optical polarization to suppress fluorescence from individual orientations. §.§ Experimental Implementation of OCT Expanding beyond the spin-locking experiments, a few proof of concept OCT experiments were performed. The dual channel Rabi experiment yielded Ω_NVA=2.64MHz and Ω_NVB=1.03MHz, with an experimental phase value of η=115^∘, at a fixed global control phase of Δθ=270^∘. These values were used to optimize OCT pulses using amplitude-only controls (Ω_1/2 in equation <ref>). The resulting optimized pulse shape is shown in figure <ref>. Pulses were optimized using a state-to-state target similar to an Identity operation, |0⟩⟨0|→|0⟩⟨0|, and selective π-type operation |0⟩⟨0|→|+1⟩⟨+1|, respectively. The state-to-state transfer could be achieved in 250 steps of 40 ns each, for a total pulse length of 10 μs. In contrast, 800 steps (32 μs long pulse) were necessary to achieve a complete selective π_+ map under the same experimental conditions. The results of the experimental implementation of several different optimized OCT pulses are shown in table <ref> <cit.>. The experimental photon counts for each pulse implementation are normalized to a reference count of the |0⟩ state. In an ideal case, the Identity-like operation would yield a population of 100% |0⟩ and the π-like operation, 0%. The results show a reasonable distinguishability between the behaviour of the two pulse types, with consistent results optimizing over either sub-ensemble (A), (B), or both. The limited Rabi drive strength for each of the NV sub-ensembles required the use of long pulses that were sensitive to experimental unknowns, limiting the contrast between the identity and π pulses. To improve the results, another set of pulses was optimized that accounted for a small variation in the zero field splitting (≈ 100 kHz) while maintaining the same 10 μs pulse length. These robust pulses improved the population results of the identity experiments to 53%, while leaving the π pulse performance unchanged <cit.>. The hyperfine interaction with the nitrogen nuclear spin was also identified as a significant source of error. Simulations of the experimentally implemented pulses that included the nitrogen hyperfine interaction gave performance that more closely matched the experimental results. Robustness to this hyperfine interaction was not included in optimizations as the control amplitude for each of the ensembles (1.03 MHz & 2.64 MHz) was too small to effectively account for the hyperfine interaction (2.16 MHz). § CONCLUSION This paper presented a controls based solution with a generalized control Hamiltonian for understanding the dynamics of an ensemble of NVs within a (100) and (110) single crystal diamond. We demonstrated primitive control of NVs within a (100) diamond using a spin-locking experiment, then using measured values gathered through characterization experiments, collective control for all orientations of NVs with a phenomenological Hamiltonian was demonstrated with distinguishable proof of concept Identity and π_+ experiments using OCT pulses. These proof of concept experiments on the (100) diamond were completed with an arbitrary NV focal volume, minimal distinguishability between the two control fields and one central control frequency. 
In addition, early optimization with a small static field showed a large improvement in pulse performance, indicating the potential of future robustness optimizations. Working with an arbitrary focal volume did showcase the capability of the OCT pulses, but choosing the focal volume more deliberately would improve the non-commutativity of the control fields, extending the pulses' capabilities. Using a thinner diamond would allow the NV focal volume to be closer to the microstrips, increasing the Rabi drive strength and reducing the length of the pulse, allowing for better robustness to RF inhomogeneities and to the hyperfine splittings with the nitrogen and nearby carbon nuclear spins. Robust OCT pulses can be incorporated into existing chemical sensing schemes to enhance their capabilities. Without altering the experimental setup, a (110) diamond may be used when four unique orientations are desired. In this case, each orientation may be selectively controlled to detect complex magnetic fields either sequentially or simultaneously, as desired by the experiment, further expanding the applications of OCT-controlled systems.

§ DECLARATIONS

§.§ Funding

This research was undertaken thanks in part to funding from the Canada First Research Excellence Fund (CFREF) and the funding collaboration with NSERC DND. The authors declare that they have no financial interests.

§.§ Conflict of interest

The authors declare that they have no conflict of interest.

§.§ Availability of data and materials

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

§ SUPPLEMENTARY INFORMATION

§.§ Mapping the Gaussian Beam in the Diamond Sample

In addition to the control field at the site of the NV focal volume shown in figure <ref>, the Gaussian optics at the focal volume must be considered, along with how the intensity of the beam changes as it diverges from the focal volume to the surface of the PCB board, as shown in figure <ref>. There should be a high and uniform intensity at the focal volume, but the intensity should drop off quickly so that no background signal is picked up from reflections off the PCB board and microstrips. The inset of figure <ref> shows the intensity of the beam at the focal spot. With a 100x objective, this focal spot is quite small, with a beam diameter of 0.59(4) μm and a beam depth of field of 2.51(4) μm, containing ≈ 16 000 NV centers. As with all Gaussian beams, the intensity is halved at the Rayleigh range (Z_R) away from the center and reduced to 1/e^2 at the beam radius (ω_o) away from the center <cit.>. Even at twice each of these values, the intensity of the beam is reduced by 1 dB relative to the maximum intensity at the center of the focal volume. As the intensity dies off very quickly away from the center of the focal spot, only the NVs inside the focal spot are considered the effective part of the ensemble. The figure shows that the beam diverges well beyond the focal spot; however, due to its Gaussian nature, the intensity falls off quickly. Presented on a log scale, it is seen that by the point the beam hits the PCB board, its intensity is already reduced by 3.5 dB to 5 dB. This further motivates using a thicker diamond (>300 μm) and an objective with a short working distance (160 μm) to limit how close the focal volume may be to the PCB board, reducing the intensity sufficiently to make reflections off the PCB board negligible.
To further reduce the background signal from reflections, a notch filter and a long-pass filter are added in line with the emission path.

§.§ Block Layout of the Optical Components

The block layout of the optical components in the NV ensemble setup is shown in figure <ref>. This setup was designed to be easily adjusted to a single-NV confocal setup should the experimental need arise. The block layout contains the laser box, switch arm, mode shaping arm, scanning optics, and detection box as the main modules. The laser box contains the source laser, generating a beam at 532 nm. The switch arm contains an AOM acting as a fast optical switch in a double-pass configuration to increase the ON:OFF contrast. The mode shaping arm corrects for spatial aberrations from the previous two modules and contains the half-wave plate used to change the incoming optical polarization, which equalizes the fluorescence from the ensemble, as shown in figure <ref>. Following the mode shaping arm, the beam is directed to the objective with a dichroic mirror, chosen to reflect green light on the excitation path and transmit red on the emission path. The objective directs the green light to the sample, which is mounted atop a stage, and collects the red fluorescence emitted from the NVs, sending it to the detection box. The detection box contains a pinhole, acting as a spatial filter for the emitted light, followed by an avalanche photo-diode (APD) for photon detection. A more detailed account of the optical layout is available in <cit.>.

§.§ The Block Layout of the RF Components

Figure <ref> shows the block layout for the RF components of the dual microwave system. The numbers shown in the layout are indices indicating the flow of the signal and the channel association (1 or 2). For each component, the figure indicates the output power [dBm] and any relevant power losses affecting the output power [dB]. Beginning with the frequency synthesizer, a carrier at the central frequency of 2.87 GHz is sent to a power splitter, dividing the signal into two channels. This carrier is then mixed in an IQ mixer with the envelopes produced by the four-channel AWG. The resulting signal, which carries the control amplitude and phase at the central frequency, passes through a switch, added for its increased ON:OFF isolation, and an amplifier that boosts the signal before it reaches the PCB sample board. The switch is placed before the amplifier in this case so as not to exceed the operating power threshold of the switch after the signal is amplified. With this configuration, there are two independent amplitude and phase controls available for creating the circularly polarized microwave control. For the best performance of these dual-channel setups, identical components were used in order to minimize any differences between the two channels. A more detailed account of the complete RF chain, along with a full list of part numbers and manufacturers, may be found in <cit.>. Figure <ref> shows the PCB board layout used in the experiments. There are two input ports (2 and 4) and two output ports (1 and 3) on the PCB, a configuration chosen to minimize the power reflected back to the amplifier and to maximize the power delivered into the diamond sample. A schematic of the diamond sample (4×4 mm) is shown atop the two microstrips, with the cross-section of the field diagram shown in figure <ref>. The planar microstrips are 150 μm apart, 127 μm wide, 17.5 μm thick, and 7.5 mm long to accommodate diamond samples of larger size.
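For reference, the analytic field expressions quoted in the main text (equation <ref>) can be evaluated at a representative focal-spot position using the strip dimensions above. The sketch below is illustrative only: the drive current, the focal-spot coordinates, and the identification of the geometric parameter L in the formulas with the 127 μm transverse strip dimension are assumptions made for this example:

    import numpy as np

    MU0 = 4 * np.pi * 1e-7  # vacuum permeability [T m / A]

    def strip_field(x, z, current, L, h):
        """Field components of a single microstrip, following the w_x1 / w_z1 expressions
        in the main text; x and z locate the field point in the cross-section."""
        wx = MU0 * current / (2 * np.pi * L) * (
            np.arctan((x - L) / z) - np.arctan(x / z)
            + np.arctan((x - L) / (z + h)) - np.arctan(x / (z + h))
        )
        wz = MU0 * current / (4 * np.pi * L) * (
            np.log((z**2 + x**2) / (z**2 + (x - L)**2))
            + np.log(((z + h)**2 + x**2) / ((z + h)**2 + (x - L)**2))
        )
        return wx, wz

    # Strip dimensions from the text; the 50 mA drive current is a placeholder value.
    L, h, gap = 127e-6, 17.5e-6, 150e-6
    x_focus, z_focus = L + gap / 2, 70e-6   # assumed focal spot: centred between strips, 70 um above
    bx, bz = strip_field(x_focus, z_focus, 50e-3, L, h)
    print(f"w_x = {bx * 1e6:.2f} uT, w_z = {bz * 1e6:.2f} uT")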
§.§ Spin-locking Simulation

The procedure of the spin-locking experiment is shown in figure <ref>, corresponding to the spin-locking data shown in figure <ref>. Recall that the spin-locking experiment is used to suppress the evolution of a chosen sub-ensemble while allowing the other to evolve. The figure shows how the low-frequency sub-ensemble is suppressed while the high-frequency sub-ensemble evolves freely. In each stage, the red arrow indicates the starting state of the low- and high-frequency sub-ensembles, the dashed line indicates the action being performed, and the white arrow indicates the direction of the control Hamiltonian. The figure shows the π/2 pulse, whose length is taken from the Rabi experiment, used to suppress the low-frequency sub-ensemble; this same pulse rotates the high-frequency sub-ensemble by a full π. Pulsing the control Hamiltonian along the relative "+x" axis then suppresses the evolution of the low-frequency sub-ensemble while the high-frequency sub-ensemble rotates freely. In the last stage, another π/2 pulse is applied, which rotates each sub-ensemble back to the "z"-axis so a measurement may take place. The same procedure may be applied to suppress the high-frequency sub-ensemble and allow the low-frequency sub-ensemble to evolve.

§.§ Sample OCT Pulse

The GRAPE algorithm accepts the input control phase Δθ and the measured experimental values Ω_NV_A/B and η, then optimizes over the free control parameters Ω_1/2 to arrive at the final pulse. The figure shows a sample OCT pulse, optimized over the control amplitudes. At each time step, the amplitude may take any value between 0 and 1. The total length of the pulse (10 μs) is given by the discrete time step (40 ns) multiplied by the number of steps (250). To mitigate hardware distortions, the pulse should be as smooth as possible, as shown in the example.

§.§ Bloch Trajectory of Sub-ensemble Pair B

Figure <ref> shows the Bloch-sphere trajectory for NV sub-ensemble B, the lower-frequency sub-ensemble, corresponding to the trajectory of sub-ensemble A shown in figure <ref> of the main text. The same trends are observed as for the trajectory of sub-ensemble A, with a successful π_+ pulse demonstrated for varying input states. A notable difference is that the trajectory is much slower, which is expected, as the NV envelope frequency (Ω_NVB) guiding the pulse amplitude in this case is 1.03 MHz, compared to the 2.64 MHz of the high-frequency sub-ensemble.

§.§ The Complete Generalized Control a priori Hamiltonian Model

The phenomenological Hamiltonian provided a convenient experimental model that could be implemented for the proof of concept experiments. For a more complete theoretical Hamiltonian, an a priori model may be used, which clearly separates the influence of the NV P.A.S. and the microwave spatial components. This Hamiltonian is well suited to early simulations for examining the limitations imposed by the NV P.A.S.s and the microwave spatial components, and for modelling potential errors. To create the a priori Hamiltonian, a frame rotation must first take place so that all elements of the Hamiltonian are expressed in the same frame. There are three important frames to consider when describing the a priori Hamiltonian: the lab frame, the diamond crystal frame, and the NV P.A.S. The "z" axis of the NV P.A.S. is defined by the vector between the nitrogen and the vacancy within the diamond crystal. The diamond crystal "z" axis is the vector perpendicular to the diamond's crystal face.
The lab frame is defined by the optical "z" axis. For the transformations, we may rotate either the microwave control field or the spin operators into the NV P.A.S. This frame is chosen because it contains the quantization axis of the single NV, or of a series of independent NVs in an ensemble. We choose to rotate the operators so that the configuration of the microwave control field may be chosen at a later time.

(R^T · C⃗(t)) · S⃗ = C⃗(t) · (R · S⃗)

C⃗(t) is the vector describing the direction of the applied microwave field. A field in the xz lab direction would be C⃗(t) = {C_x(t), 0, C_z(t)}, and S⃗ is the vector of spin-1 operators, S⃗ = {S_x, S_y, S_z}. A symbolic rotation matrix may be used initially to factor the components of the NV P.A.S. into the Hamiltonian; the numeric values of the rotation matrix are displayed in figure <ref> and may be substituted into the Hamiltonian when required for calculations. Applying the rotation matrix to the spin operators in their original frame expresses them in the rotated frame (S̃_x, S̃_y, S̃_z), as shown below:

(S̃_x, S̃_y, S̃_z)^T = ( [ R_xx R_xy R_xz; R_yx R_yy R_yz; R_zx R_zy R_zz; ]) · (S_x, S_y, S_z)^T

The individual spin operators in the rotated frame are:

S̃_x → (R_xx S_x + R_xy S_y + R_xz S_z)
S̃_y → (R_yx S_x + R_yy S_y + R_yz S_z)
S̃_z → (R_zx S_x + R_zy S_y + R_zz S_z)

Combining the control field in the lab frame with the rotated operators, a general control Hamiltonian for a generic microwave field direction, expressed in the NV frame, is:

H_CTRL = C⃗(t) · S̃⃗

H_CTRL = C_x(t) (R_xx S_x + R_xy S_y + R_xz S_z) + C_y(t) (R_yx S_x + R_yy S_y + R_yz S_z) + C_z(t) (R_zx S_x + R_zy S_y + R_zz S_z)

At this point, the control Hamiltonian and the NV ground state Hamiltonian are both expressed in the same NV frame. This general representation extends to any single NV. Now that both are in the same frame, the effective Hamiltonian may be found for a single NV center.

§.§.§ Finding the Effective Control Hamiltonian for a Single NV along any Principal Axis

Now that the control Hamiltonian has been expressed in the frame of the NV P.A.S., the effective Hamiltonian may be found. The total Hamiltonian is the sum of the ZFS of the NV center and the microwave control Hamiltonian. Recall that the rotation terms (R_xx, R_xy, etc.) are unique for each NV orientation.

H_Tot = Δ S_z^2 + C_x(t) (R_xx S_x + R_xy S_y + R_xz S_z) + C_y(t) (R_yx S_x + R_yy S_y + R_yz S_z) + C_z(t) (R_zx S_x + R_zy S_y + R_zz S_z)

The following equation is used to find the effective Hamiltonian:

H_Eff = U^†(t) (H_Tot - H_Rot) U(t)

where the rotation Hamiltonian (H_Rot) is defined along the NV P.A.S. at the transmitter frequency (ω_T):

H_Rot = ω_T S_z^2

and its matrix exponential is:

U(t) = e^-iH_Rot t = 1 - (1 - e^-iω_T t) S_z^2

The rotation Hamiltonian and its matrix exponential are substituted into equation <ref>:

H_Eff = (1 - (1 - e^iω_T t) S_z^2) ((Δ - ω_T) S_z^2 + C_x(t) (R_xx S_x + R_xy S_y + R_xz S_z) + C_y(t) (R_yx S_x + R_yy S_y + R_yz S_z) + C_z(t) (R_zx S_x + R_zy S_y + R_zz S_z)) (1 - (1 - e^-iω_T t) S_z^2)

Equation <ref> is expanded.
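As an aside, the closed form quoted above for U(t) can be checked numerically; it follows from S_z^2 being idempotent for spin-1 (S_z^2 S_z^2 = S_z^2). A quick sketch, assuming NumPy/SciPy and arbitrary test values:

    import numpy as np
    from scipy.linalg import expm

    Sz2 = np.diag([1.0, 0.0, 1.0]).astype(complex)   # S_z^2 for spin-1 (idempotent)

    omega_t, t = 2 * np.pi * 2.87e3, 0.137           # test values: 2.87 GHz in rad/us, time in us
    U_exact = expm(-1j * omega_t * t * Sz2)
    U_closed = np.eye(3) - (1 - np.exp(-1j * omega_t * t)) * Sz2
    print("Closed form for U(t) verified:", np.allclose(U_exact, U_closed))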
To simplify, recall the following spin-1 operator relationships:

S_z^2n = S_z^2
S_z^2n+1 = S_z
S_z^2 S_x/y S_z^2 = 0

Some trigonometric identities and Euler's formula may also be used to simplify:

(1 - e^iω_T t)(1 - e^-iω_T t) = 2 - 2cos(ω_T t)
e^±iω_T t = cos(ω_T t) ± i sin(ω_T t)

Finally, using the commutator and anti-commutator relationships of the spin operators:

{S_x/y, S_z^2} = S_x/y
{(R_xx S_x + R_xy S_y + R_xz S_z), S_z^2} = (R_xx S_x + R_xy S_y + R_xz S_z)
{(R_yx S_x + R_yy S_y + R_yz S_z), S_z^2} = (R_yx S_x + R_yy S_y + R_yz S_z)
{(R_zx S_x + R_zy S_y + R_zz S_z), S_z^2} = (R_zx S_x + R_zy S_y + R_zz S_z)
[(R_xx S_x + R_xy S_y + R_xz S_z), S_z^2] = R_xx [S_x, S_z^2] + R_xy [S_y, S_z^2]
[(R_yx S_x + R_yy S_y + R_yz S_z), S_z^2] = R_yx [S_x, S_z^2] + R_yy [S_y, S_z^2]
[(R_zx S_x + R_zy S_y + R_zz S_z), S_z^2] = R_zx [S_x, S_z^2] + R_zy [S_y, S_z^2]

Further expanding and simplifying using the above relationships, the effective Hamiltonian for a general transmitter frequency (ω_T), microwave field direction (C_x, C_y, C_z), and NV orientation (R_xx, R_xy, etc.) is found:

H_Eff = (Δ - ω_T) S_z^2 + (R_xz C_x + R_yz C_y + R_zz C_z) S_z + (R_xx C_x + R_yx C_y + R_zx C_z) (cos(ω_T t) S_x - i sin(ω_T t)[S_x, S_z^2]) + (R_xy C_x + R_yy C_y + R_zy C_z) (cos(ω_T t) S_y - i sin(ω_T t)[S_y, S_z^2])

The microwave controls (C_x, C_y, C_z) can now be expanded to give more context for the experiments conducted and the capabilities of each set of controls. From here, the "twisted" operators, S_x/y^' = i[S_y/x, S_z^2], will be substituted in for the commutators. Consider the case where the microwave field is controlled by two independent channels. Two independent channels were chosen because their combination yields circularly polarized microwaves, allowing selective transitions |0⟩→|+1⟩ or |0⟩→|-1⟩. These two channels may both emit in the (x, y, z) directions; an example is two microstrips which each have field components in multiple directions. Under the assumption of two independent channels emitting along the (x, y, z) directions, (C_x, C_y, C_z) may be expanded as:

C_x = C_1(t) w_x1 + C_2(t) w_x2
C_y = C_1(t) w_y1 + C_2(t) w_y2
C_z = C_1(t) w_z1 + C_2(t) w_z2

The quantities w_x1/2 describe the x-component of the field for channels 1 and 2, respectively, and similarly for w_y1/2 and w_z1/2. These fields are defined by the configuration of the control field, but may be expressed in the general case as above. C_1/2(t) represent the time-dependent controls shaped by the AWG and IQ mixer and set at the transmitter frequency. In an ideal case, C_1/2(t) are given by equation <ref>. Here the AWG controls are shown in Cartesian coordinates, I_1/2 and Q_1/2, but they may also be represented in polar coordinates as amplitude and phase controls, Ω_1/2 and θ_1/2, where I_1/2 = Ω_1/2 cos(θ_1/2) and Q_1/2 = Ω_1/2 sin(θ_1/2).

C_1/2(t) = I_1/2(t) cos(ω_T t) + Q_1/2(t) sin(ω_T t)

To simplify further, the squares and products of the cosine and sine functions may be expanded:

cos^2(ω_T t) = 1/2 (1 + cos(2ω_T t))
cos(ω_T t) sin(ω_T t) = 1/2 sin(2ω_T t)
sin^2(ω_T t) = 1/2 (1 - cos(2ω_T t))

Expanding the controls for each channel, (C_x, C_y, C_z), into the Hamiltonian in equation <ref> and using the trigonometric identities listed above yields the generalized effective Hamiltonian. In this instance, the values of I and Q are evaluated at each discrete time step, so the explicit time dependence has been dropped.
H_eff =(Δ - ω_T) S_z^2 + (cos(ω_T t)(I_1w_A1 + I_2w_A2) + sin(ω_Tt)(Q_1w_A1 + Q_2w_A2))S_z + (1/2(1+cos(2ω_Tt))(I_1w_B1+I_2w_B2) + 1/2sin(2ω_Tt)(Q_1w_B1+Q_2w_B2))S_x - ( 1/2sin(2ω_Tt)(I_1w_B1+I_2w_B2) +1/2(1-cos(2ω_Tt))(Q_1w_B1+Q_2w_B2) )S_y^' + (1/2(1+cos(2ω_Tt))(I_1w_C1+I_2w_C2) + 1/2sin(2ω_Tt)(Q_1w_C1+Q_2w_C2))S_y - ( 1/2sin(2ω_Tt)(I_1w_C1+I_2w_C2) +1/2(1-cos(2ω_Tt))(Q_1w_C1+Q_2w_C2) )S_x^' w_A1/2, w_B1/2 and w_C1/2 represent the microwave field components and NV rotational terms, gathered to reduce the complexity of the Hamiltonian. w_A1/2 = xz w_x1/2 + yz w_y1/2 + zz w_z1/2 w_B1/2 = xx w_x1/2 + yx w_y1/2 + zx w_z1/2 w_C1/2 = xy w_x1/2 + yy w_y1/2 + zy w_z1/2 As the experiments are all performed in the absence of a magnetic field, the center transmitter frequency (ω_T), is set to the ZFS (Δ). All resulting terms then proportional to (2Δ) may be dropped. The terms proportional to S_z also become negligible, as they would induce a Zeeman splitting, proportional to the strength of the microwave controls, but oscillating at 2.87 GHz, and so would be averaged out compared to the other more slowly varying terms. This yields the time-independent effective Hamiltonian for two independently controlled channels for a generalized field in the (x,y,z) directions. The Hamiltonian is presented with letter terms (A,B,C,D) for simplicity. H_eff = A S_x + B S_y^' + C S_y + D S_x^' A = 1/2(I_1w_B1+I_2w_B2) B = - 1/2(Q_1w_B1+Q_2w_B2) C = 1/2(I_1w_C1+I_2w_C2) D = - 1/2(Q_1w_C1+Q_2w_C2) Finally, expanding the values for the microwave components (w_B1) etc. yields the full Hamiltonian. H_eff = A S_x + B S_y^' + C S_y + D S_x^' A = 1/2(R_xx (I_1w_x1 +I_2w_x2)+ R_yx (I_1w_y1 +I_2w_y2) + R_zx (I_1w_z1+I_2 w_z2)) B = - 1/2(R_xx (Q_1w_x1 +Q_2w_x2)+ R_yx (Q_1w_y1 +Q_2w_y2) + R_zx (Q_1w_z1+Q_2 w_z2)) C = 1/2(R_xy (I_1w_x1 +I_2w_x2)+ R_yy (I_1w_y1 +I_2w_y2) + R_zy (I_1w_z1+I_2 w_z2)) D = - 1/2(R_xy (Q_1w_x1 +Q_2w_x2)+ R_yy (Q_1w_y1 +Q_2w_y2) + R_zy (Q_1w_z1+Q_2 w_z2)) Equation <ref> shows the basic structure with the letter format. Each pre-factor (A,B,C,D) is dependent on the AWG envelope of control (I_1/2,Q_1/2), direction and strength of the microwave field at the site of the NVs, (w_x1/2,w_y1/2,w_z1/2) for each independent channel, and last by the NV rotation term, (R_xx,R_xy,R_xz etc.). Although this Hamiltonian describes a single NV, from here it is clear to see the difficulty in controlling ensembles of NV centers as the projection of the effective Hamiltonian is scaled by the orientation of the NV center, and if the volume of NVs is large, the microwave field strength and direction of each also scales over the volume. In the following section, the Hamiltonian will be manipulated, without loss of generality to show how to account for the projection of the Hamiltonian into each NV orientation in a more mathematically convenient way for achieving single transitions. §.§.§ Expressing the Control Hamiltonian with Pseudo Spin-1/2 Operators To ease the calculations in solving for selective ground state transitions in the Hamiltonian, the spin-1 operators may be written as a sum of pseudo spin-1/2 operators. These are of course not true spin-1/2 operators, as the ground state |± 1⟩ states share a space with the same |0⟩ state. Expressing the spin-1 operators in this form, allows for the selective transitions between |0⟩→|+1⟩/|-1⟩ in the absence of a magnetic field, to be found more easily. The pseudo spin-1/2 operators are labelled as S_x^± and S_y^± for the |+1⟩ and |-1⟩ pseudo sub-spaces, respectively. 
The operators are shown below, recalling S_x/y^'=i[S_y/x,S_z^2]: S_x^+ = 1/√(2)(S_x + i[S_y,S_z^2]) S_y^+ = 1/√(2)(S_y - i[S_x,S_z^2]) S_x^- = 1/√(2)(S_x - i[S_y,S_z^2]) S_y^- = 1/√(2)(S_y + i[S_x,S_z^2]) S_z^+ = 1/2 S_z + 1_3 - 3/2 S_z^2 S_z^- = 1/2 S_z - 1_3 + 3/2 S_z^2 In matrix form, the resemblance to the Pauli operators can be seen, as is the intention. S_x^+ = ( [ 0 0 0; 0 0 1; 0 1 0 ]) ; S_y^+= ( [ 0 0 0; 0 0 -i; 0 i 0 ]) ; S_z^+ = ( [ 0 0 0; 0 1 0; 0 0 -1 ]) S_x^- = ( [ 0 1 0; 1 0 0; 0 0 0 ]) ; S_y^-= ( [ 0 -i 0; i 0 0; 0 0 0 ]) ; S_z^- = ( [ 1 0 0; 0 -1 0; 0 0 0 ]) ; The Hamiltonian in equation <ref> has already been grouped in accordance to the pseudo spin-1/2 operators, so substituting the pseudo spin-1/2 operators, and collecting to isolate for these are trivial. Re-arranging the pseudo spin-1/2 operators in the original spin-1 form, the following is substituted into the Hamiltonian: S_x = 1/√(2)(S_x^+ + S_x^-) S_y = 1/√(2)(S_y^+ + S_y^-) i[S_x,S_z^2] = 1/√(2)(S_y^- - S_y^+) i[S_y,S_z^2] = 1/√(2)(S_x^+ - S_x^-) Now re-arranging for the pseudo spin-1/2 operators, without loss of generality, the Hamiltonian is written with the pseudo spin-1/2 operators. New pre-factors (Ã,B̃,C̃,D̃) are used to distinguish between the pre-factors for the Hamiltonian written in equation <ref>: H =ÃS_x^+ + B̃S_y^+ + C̃S_x^- + D̃S_y^- Ã = 1/2√(2)(I_1w_B1+I_2w_B2-Q_1w_C1-Q_2w_C2) B̃ = 1/2√(2)(I_1w_C1+I_2w_C2+Q_1w_B1+Q_2w_B2) C̃ = 1/2√(2)(I_1w_B1+I_2w_B2+Q_1w_C1+Q_2w_C2) D̃ = 1/2√(2)(I_1w_C1+I_2w_C2-Q_1w_B1-Q_2w_B2) The full Hamiltonian written by expanding the terms w_B1/2 and w_C1/2, is shown below: H = ÃS_x^+ + B̃S_y^+ + C̃S_x^- + D̃S_y^- Ã = 1/2√(2)(R_xx(I_1w_x1+I_2w_x2)+R_yx(I_1w_y1+I_2w_y2)+R_zx(I_1w_z1+I_2w_z2) … -R_xy(Q_1w_x1+Q_2w_x2)-R_yy(Q_1w_y1+Q_2w_y2)-R_zy(Q_1w_z1+Q_2w_z2)) B̃ = 1/2√(2)(R_xy(I_1w_x1+I_2w_x2)+R_yy(I_1w_y1+I_2w_y2)+R_zy(I_1w_z1+I_2w_z2) … +R_xx(Q_1w_x1+Q_2w_x2)+R_yx(Q_1w_y1+Q_2w_y2)+R_zx(Q_1w_z1+Q_2w_z2)) C̃ = 1/2√(2)(R_xx(I_1w_x1+I_2w_x2)+R_yx(I_1w_y1+I_2w_y2)+R_zx(I_1w_z1+I_2w_z2) … +R_xy(Q_1w_x1+Q_2w_x2)+R_yy(Q_1w_y1+Q_2w_y2)+R_zy(Q_1w_z1+Q_2w_z2)) D̃ = 1/2√(2)(R_xy(I_1w_x1+I_2w_x2)+R_yy(I_1w_y1+I_2w_y2)+R_zy(I_1w_z1+I_2w_z2) … -R_xx(Q_1w_x1+Q_2w_x2)-R_yx(Q_1w_y1+Q_2w_y2)-R_zx(Q_1w_z1+Q_2w_z2)) §.§.§ Substituting in the values for the NV P.A.S. The values for the generic rotation matrix shown in equation <ref>, are shown in figure <ref>, generated by performing the frame transformation procedure below. * Normalize the starting vector being rotated (labelled a⃗) * Find the cross product between the starting vector (a⃗) and the desired vector (b⃗) to yield the rotation axis (c⃗) * Find the dot product between a⃗ and b⃗ to yield the rotation angle (θ_(ab)) * Create a rotation matrix, which rotates a⃗ about c⃗ by θ_(ab) to arrive at b⃗ Using this technique, the rotation matrices for the diamond crystal to lab and NV to crystal frame may be found. These matrices are listed below for each NV orientation for the (100), (110) and (111) crystals. These three crystals are shown as they are the common crystal orientations used for NV experiments, and expressed here to show how this can be applied to any single crystal orientation. These values may then be substituted into the control Hamiltonian in equation <ref> and <ref> for a specific NV orientation and crystal. These rotation values may be substituted along with the microwave spatial components corresponding to the field diagram in figure <ref> to arrive at the final a priori Hamiltonian.
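As a minimal sketch of the frame-transformation procedure listed above, the function below builds the rotation matrix taking one unit vector onto another (axis from the cross product, angle from the dot product, then Rodrigues' rotation formula). The example vectors, the (1,1,1) bond direction and the crystal z-axis, are chosen purely for illustration:

    import numpy as np

    def rotation_between(a, b):
        """Rotation matrix R such that R @ a is parallel to b, following the listed procedure."""
        a = np.asarray(a, dtype=float) / np.linalg.norm(a)
        b = np.asarray(b, dtype=float) / np.linalg.norm(b)
        axis = np.cross(a, b)
        s, c = np.linalg.norm(axis), np.dot(a, b)
        if np.isclose(s, 0.0):
            if c > 0:
                return np.eye(3)                       # already aligned
            # anti-parallel: rotate by pi about any axis perpendicular to a
            axis = np.cross(a, np.eye(3)[np.argmin(np.abs(a))])
            axis /= np.linalg.norm(axis)
            K = np.array([[0, -axis[2], axis[1]], [axis[2], 0, -axis[0]], [-axis[1], axis[0], 0]])
            return np.eye(3) + 2 * (K @ K)
        k = axis / s
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        return np.eye(3) + s * K + (1 - c) * (K @ K)   # Rodrigues, with sin(theta) = s, cos(theta) = c

    # Example: rotate the (1,1,1) NV bond direction onto the crystal z-axis.
    R = rotation_between([1, 1, 1], [0, 0, 1])
    print(np.round(R @ (np.array([1.0, 1.0, 1.0]) / np.sqrt(3)), 6))   # -> [0. 0. 1.]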